title | content | commands | url |
---|---|---|---|
Chapter 86. Decision engine queries and live queries | Chapter 86. Decision engine queries and live queries You can use queries with the decision engine to retrieve fact sets based on fact patterns as they are used in rules. The patterns might also use optional parameters. To use queries with the decision engine, you add the query definitions in DRL files and then obtain the matching results in your application code. While a query iterates over a result collection, you can use any identifier that is bound to the query to access the corresponding fact or fact field by calling the get() method with the binding variable name as the argument. If the binding refers to a fact object, you can retrieve the fact handle by calling getFactHandle() with the variable name as the parameter. Example query definition in a DRL file Example application code to obtain and iterate over query results QueryResults results = ksession.getQueryResults( "people under the age of 21" ); System.out.println( "we have " + results.size() + " people under the age of 21" ); System.out.println( "These people are under the age of 21:" ); for ( QueryResultsRow row : results ) { Person person = ( Person ) row.get( "person" ); System.out.println( person.getName() + "\n" ); } Invoking queries and processing the results by iterating over the returned set can be difficult when you are monitoring changes over time. To alleviate this difficulty with ongoing queries, Red Hat Decision Manager provides live queries, which use an attached listener for change events instead of returning an iterable result set. Live queries remain open by creating a view and publishing change events for the contents of this view. To activate a live query, start your query with parameters and monitor changes in the resulting view. You can use the dispose() method to terminate the query and discontinue this reactive scenario. Example query definition in a DRL file Example application code with an event listener and a live query final List updated = new ArrayList(); final List removed = new ArrayList(); final List added = new ArrayList(); ViewChangedEventListener listener = new ViewChangedEventListener() { public void rowUpdated(Row row) { updated.add( row.get( "$price" ) ); } public void rowRemoved(Row row) { removed.add( row.get( "$price" ) ); } public void rowAdded(Row row) { added.add( row.get( "$price" ) ); } }; // Open the live query: LiveQuery query = ksession.openLiveQuery( "colors", new Object[] { "red", "blue" }, listener ); ... ... // Terminate the live query: query.dispose() | [
"query \"people under the age of 21\" USDperson : Person( age < 21 ) end",
"QueryResults results = ksession.getQueryResults( \"people under the age of 21\" ); System.out.println( \"we have \" + results.size() + \" people under the age of 21\" ); System.out.println( \"These people are under the age of 21:\" ); for ( QueryResultsRow row : results ) { Person person = ( Person ) row.get( \"person\" ); System.out.println( person.getName() + \"\\n\" ); }",
"query colors(String USDcolor1, String USDcolor2) TShirt(mainColor = USDcolor1, secondColor = USDcolor2, USDprice: manufactureCost) end",
"final List updated = new ArrayList(); final List removed = new ArrayList(); final List added = new ArrayList(); ViewChangedEventListener listener = new ViewChangedEventListener() { public void rowUpdated(Row row) { updated.add( row.get( \"USDprice\" ) ); } public void rowRemoved(Row row) { removed.add( row.get( \"USDprice\" ) ); } public void rowAdded(Row row) { added.add( row.get( \"USDprice\" ) ); } }; // Open the live query: LiveQuery query = ksession.openLiveQuery( \"colors\", new Object[] { \"red\", \"blue\" }, listener ); // Terminate the live query: query.dispose()"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/engine-queries-con_decision-engine |
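The application code above assumes an existing KieSession named ksession. The following is a minimal, hedged sketch of how such a session might be built and the ordinary query run end to end; the session name "ksession-rules" and the Person(name, age) constructor are assumptions for illustration and do not come from the chapter itself.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.rule.QueryResults;
import org.kie.api.runtime.rule.QueryResultsRow;

public class UnderAgeQueryExample {
    public static void main(String[] args) {
        // Build a session from the rules and queries packaged on the classpath (kmodule.xml).
        KieServices ks = KieServices.Factory.get();
        KieContainer kContainer = ks.getKieClasspathContainer();
        KieSession ksession = kContainer.newKieSession("ksession-rules"); // session name is an assumption

        // Insert a few facts of the Person type used by the DRL query shown above.
        ksession.insert(new Person("Alice", 19));
        ksession.insert(new Person("Bob", 35));
        ksession.fireAllRules();

        // Run the query defined in the DRL file and iterate over the matching rows.
        QueryResults results = ksession.getQueryResults("people under the age of 21");
        for (QueryResultsRow row : results) {
            Person person = (Person) row.get("person");
            System.out.println(person.getName());
        }
        ksession.dispose();
    }
}
```

For the live-query variant, the call to getQueryResults() would be replaced by ksession.openLiveQuery("colors", new Object[] { "red", "blue" }, listener) with the ViewChangedEventListener shown above, and query.dispose() would be called when monitoring is no longer needed.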
Chapter 10. Application failover between managed clusters | Chapter 10. Application failover between managed clusters This section provides instructions on how to fail over the busybox sample application. The failover method for Regional-DR is application-based. Each application that is to be protected in this manner must have a corresponding DRPlacementControl resource and a PlacementRule resource created in the application namespace, as shown in the Create Sample Application for DR testing section. Procedure On the Hub cluster, navigate to Installed Operators and then click Openshift DR Hub Operator. Click the DRPlacementControl tab. Click DRPC busybox-drpc and then the YAML view. Add the action and failoverCluster details as shown in the following screenshot. The failoverCluster value should be the ACM cluster name of the Secondary managed cluster. DRPlacementControl add action Failover Click Save. Verify that the application busybox is now running on the Secondary managed cluster, which is the failover cluster ocp4perf2 specified in the YAML file. Example output: Verify that busybox is no longer running on the Primary managed cluster. Example output: Important Be aware of known Regional-DR issues as documented in the Known Issues section of the Release Notes. If you need assistance with developer preview features, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. | [
"oc get pods,pvc -n busybox-sample",
"NAME READY STATUS RESTARTS AGE pod/busybox 1/1 Running 0 35s NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE persistentvolumeclaim/busybox-pvc Bound pvc-79f2a74d-6e2c-48fb-9ed9-666b74cfa1bb 5Gi RWO ocs-storagecluster-ceph-rbd 35s",
"oc get pods,pvc -n busybox-sample",
"No resources found in busybox-sample namespace."
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/configuring_openshift_data_foundation_for_regional-dr_with_advanced_cluster_management/application-failover-between-managed-clusters_rhodf |
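The procedure above edits the DRPlacementControl through the web console YAML view, and the referenced screenshot is not included here. The following is a hypothetical sketch of what the edited resource might look like; the apiVersion and the exact field layout are assumptions, while the action and failoverCluster fields and the busybox-drpc and ocp4perf2 names are taken from the procedure and the busybox sample.

```yaml
# Hypothetical excerpt of the edited DRPlacementControl resource (apiVersion and layout are assumptions).
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: busybox-drpc
  namespace: busybox-sample
spec:
  action: Failover              # requests failover of the protected application
  failoverCluster: ocp4perf2    # ACM cluster name of the Secondary managed cluster
```

After saving the change, the oc get pods,pvc -n busybox-sample checks listed in the commands column confirm on which managed cluster the application is running.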
Chapter 6. Running Ansible playbooks with automation content navigator | Chapter 6. Running Ansible playbooks with automation content navigator As a content creator, you can execute your Ansible playbooks with automation content navigator and interactively delve into the results of each play and task to verify or troubleshoot the playbook. You can also execute your Ansible playbooks inside an execution environment and without an execution environment to compare and troubleshoot any problems. 6.1. Executing a playbook from automation content navigator You can run Ansible playbooks with the automation content navigator text-based user interface to follow the execution of the tasks and delve into the results of each task. Prerequisites A playbook. A valid inventory file if not using localhost or an inventory plugin. Procedure Start automation content navigator: $ ansible-navigator Run the playbook: $ :run Optional: type ansible-navigator run simple_playbook.yml -i inventory.yml to run the playbook. Verify or add the inventory and any other command line parameters. INVENTORY OR PLAYBOOK NOT FOUND, PLEASE CONFIRM THE FOLLOWING ───────────────────────────────────────────────────────────────────────── Path to playbook: /home/ansible-navigator_demo/simple_playbook.yml Inventory source: /home/ansible-navigator-demo/inventory.yml Additional command line parameters: Please provide a value (optional) ────────────────────────────────────────────────────────────────────────── Submit Cancel Tab to Submit and hit Enter. You should see the tasks executing. Type the number next to a play to step into the play results, or type :<number> for numbers above 9. Notice that failed tasks show up in red if you have colors enabled for automation content navigator. Type the number next to a task to review the task results, or type :<number> for numbers above 9. Optional: type :doc to bring up the documentation for the module or plugin used in the task to aid in troubleshooting. ANSIBLE.BUILTIN.PACKAGE_FACTS (MODULE) 0│--- 1│doc: 2│ author: 3│ - Matthew Jones (@matburt) 4│ - Brian Coca (@bcoca) 5│ - Adam Miller (@maxamillion) 6│ collection: ansible.builtin 7│ description: 8│ - Return information about installed packages as facts. <... output omitted ...> 11│ module: package_facts 12│ notes: 13│ - Supports C(check_mode). 14│ options: 15│ manager: 16│ choices: 17│ - auto 18│ - rpm 19│ - apt 20│ - portage 21│ - pkg 22│ - pacman <... output truncated ...> Additional resources ansible-playbook Ansible playbooks 6.2. Reviewing playbook results with an automation content navigator artifact file Automation content navigator saves the results of the playbook run in a JSON artifact file. You can use this file to share the playbook results with someone else, save it for security or compliance reasons, or review and troubleshoot it later. You only need the artifact file to review the playbook run. You do not need access to the playbook itself or inventory access. Prerequisites An automation content navigator artifact JSON file from a playbook run. Procedure Start automation content navigator with the artifact file: $ ansible-navigator replay simple_playbook_artifact.json Review the playbook results that match when the playbook ran. You can now type the number next to the plays and tasks to step into each to review the results, as you would after executing the playbook. Additional resources ansible-playbook Ansible playbooks | [
"ansible-navigator",
":run",
"INVENTORY OR PLAYBOOK NOT FOUND, PLEASE CONFIRM THE FOLLOWING βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ Path to playbook: /home/ansible-navigator_demo/simple_playbook.yml Inventory source: /home/ansible-navigator-demo/inventory.yml Additional command line parameters: Please provide a value (optional) ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ Submit Cancel",
"ANSIBLE.BUILTIN.PACKAGE_FACTS (MODULE) 0β--- 1βdoc: 2β author: 3β - Matthew Jones (@matburt) 4β - Brian Coca (@bcoca) 5β - Adam Miller (@maxamillion) 6β collection: ansible.builtin 7β description: 8β - Return information about installed packages as facts. <... output omitted ...> 11β module: package_facts 12β notes: 13β - Supports C(check_mode). 14β options: 15β manager: 16β choices: 17β - auto 18β - rpm 19β - apt 20β - portage 21β - pkg 22β - pacman <... output truncated ...>",
"ansible-navigator replay simple_playbook_artifact.json"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_content_navigator/assembly-execute-playbooks-navigator_ansible-navigator |
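The chapter refers to simple_playbook.yml and inventory.yml but does not show their contents. The sketch below is a minimal playbook that would exercise the ansible.builtin.package_facts module whose documentation appears in the :doc output above; the file contents, host pattern, and debug message are assumptions for illustration only.

```yaml
# simple_playbook.yml -- minimal example playbook (contents are an assumption, not taken from the chapter)
- name: Gather package facts
  hosts: all
  gather_facts: false
  tasks:
    - name: Return information about installed packages as facts
      ansible.builtin.package_facts:
        manager: auto

    - name: Report how many packages were found
      ansible.builtin.debug:
        msg: "{{ ansible_facts.packages | length }} packages installed"
```

A playbook like this could be run interactively with ansible-navigator run simple_playbook.yml -i inventory.yml, or replayed later from its JSON artifact file as described in section 6.2.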
Part V. Known Issues | Part V. Known Issues This part describes known issues in Red Hat Enterprise Linux 7.1. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/part-red_hat_enterprise_linux-7.1_release_notes-known_issues |
Chapter 1. Accessing Red Hat Satellite | Chapter 1. Accessing Red Hat Satellite After Red Hat Satellite has been installed and configured, use the Satellite web UI to log in to Satellite for further configuration. 1.1. Installing the Katello Root CA Certificate The first time you log on to Satellite, you might see a warning informing you that you are using the default self-signed certificate and you might not be able to connect this browser to Satellite until the root CA certificate is installed in the browser. Use the following procedure to locate the root CA certificate on Satellite and to install it in your browser. Prerequisites Your Red Hat Satellite is installed and configured. Procedure Identify the fully qualified domain name of your Satellite Server: Access the pub directory on your Satellite Server using a web browser pointed to the fully qualified domain name: When you access Satellite for the first time, an untrusted connection warning displays in your web browser. Accept the self-signed certificate and add the Satellite URL as a security exception to override the settings. This procedure might differ depending on the browser being used. Ensure that the Satellite URL is valid before you accept the security exception. Select katello-server-ca.crt . Import the certificate into your browser as a certificate authority and trust it to identify websites. Importing the Katello Root CA Certificate Manually From the Satellite CLI, copy the katello-server-ca.crt file to the machine you use to access the Satellite web UI: In the browser, import the katello-server-ca.crt certificate as a certificate authority and trust it to identify websites. 1.2. Logging on to Satellite Use the web user interface to log on to Satellite for further configuration. Prerequisites Ensure that the Katello root CA certificate is installed in your browser. For more information, see Section 1.1, "Installing the Katello Root CA Certificate" . Procedure Access Satellite Server using a web browser pointed to the fully qualified domain name: Enter the user name and password created during the configuration process. If a user was not created during the configuration process, the default user name is admin . If you have problems logging on, you can reset the password. For more information, see Section 1.5, "Resetting the Administrative User Password" . 1.3. Navigation Tabs in the Satellite web UI Use the navigation tabs to browse the Satellite web UI. Navigation Tabs Description Any Context Clicking this tab changes the organization and location. If no organization or location is selected, the default organization is Any Organization and the default location is Any Location . Use this tab to change to different values. Monitor Provides summary dashboards and reports. Content Provides content management tools. This includes Content Views, Activation Keys, and Life Cycle Environments. Hosts Provides host inventory and provisioning configuration tools. Configure Provides general configuration tools and data including Host Groups and Puppet data. Infrastructure Provides tools for configuring how Satellite interacts with the environment. User Name Provides user administration where users can edit their personal information. Provides event notifications to keep administrators informed of important environment changes. Administer Provides advanced configuration for settings such as Users and RBAC, as well as general settings. 1.4. Changing the Password These steps show how to change your password. 
Procedure Click your user name at the top right corner. Select My Account from the menu. In the Current Password field, enter the current password. In the Password field, enter a new password. In the Verify field, enter the new password again. Click the Submit button to save your new password. 1.5. Resetting the Administrative User Password Use the following procedures to reset the administrative password to randomly generated characters or to set a new administrative password. To Reset the Administrative User Password Log on to the base operating system where Satellite Server is installed. Enter the following command to reset the password: Use this password to reset the password in the Satellite web UI. Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. To Set a New Administrative User Password Log on to the base operating system where Satellite Server is installed. To set the password, enter the following command: Edit the ~/.hammer/cli.modules.d/foreman.yml file on Satellite Server to add the new password: Unless you update the ~/.hammer/cli.modules.d/foreman.yml file, you cannot use the new password with Hammer CLI. 1.6. Setting a Custom Message on the Login Page Procedure In the Satellite web UI, navigate to Administer > Settings , and click the General tab. Click the edit button to Login page footer text , and enter the desired text to be displayed on the login page. For example, this text may be a warning message required by your company. Click Save . Log out of the Satellite web UI and verify that the custom text is now displayed on the login page below the Satellite version number. | [
"hostname -f",
"https:// satellite.example.com /pub",
"scp /var/www/html/pub/katello-server-ca.crt username@hostname:remotefile",
"https:// satellite.example.com /",
"foreman-rake permissions:reset Reset to user: admin, password: qwJxBptxb7Gfcjj5",
"vi ~/.hammer/cli.modules.d/foreman.yml",
"foreman-rake permissions:reset password= new_password",
"vi ~/.hammer/cli.modules.d/foreman.yml"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/accessing_server_admin |
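As a complement to the browser-based import described above, the Katello root CA certificate can also be fetched and inspected from a shell before it is trusted. The commands below are a sketch that assumes satellite.example.com is the fully qualified domain name returned by hostname -f; they use only standard curl and openssl options.

```bash
# Download the Katello root CA certificate from the Satellite Server's pub directory.
curl -k -o katello-server-ca.crt https://satellite.example.com/pub/katello-server-ca.crt

# Inspect the subject, issuer, and validity dates before importing the certificate into a browser.
openssl x509 -in katello-server-ca.crt -noout -subject -issuer -dates

# Confirm that the Satellite web UI presents a certificate chain signed by this CA (prints the HTTP status code).
curl --cacert katello-server-ca.crt -s -o /dev/null -w '%{http_code}\n' https://satellite.example.com/
```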
Chapter 9. Visualizing external entities | Chapter 9. Visualizing external entities Important Visualizing external entities is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Understanding the interactions between your cluster and external entities is essential for incident response and network policy management. With the Visualizing external entities feature, you can view the external IP addresses that interact with your cluster. You can view external entities in the Network Graph by selecting the External Entities graph node or query external entities by using the API. Note Visualizing external entities is an opt-in feature that is disabled by default. To enable this feature, you must enable external IP collection in Central and secured clusters, as described in the following sections. 9.1. Enabling external IP collection in Central There are two environmental variables that control the collection of external IP in Central: ROX_EXTERNAL_IPS and ROX_NETWORK_GRAPH_EXTERNAL_IPS . You must enable ROX_EXTERNAL_IPS in Central to enable external IP collection and to query external entities by using the API. After that, you can use ROX_NETWORK_GRAPH_EXTERNAL_IPS to display collected external IPs in the Network Graph. Procedure If you installed RHACS by using the RHACS Operator, insert the following customization in the Central custom resource definition (CRD): spec: customize: envVars: - name: ROX_EXTERNAL_IPS 1 value: 'true' 1 Additionally, you can also specify ROX_NETWORK_GRAPH_EXTERNAL_IPS . If you installed RHACS by using Helm, add the following annotations to your values-public.yaml file: customize: # Extra environment variables for all containers in all objects. envVars: ROX_EXTERNAL_IPS: "true" 1 1 Additionally, you can also specify ROX_NETWORK_GRAPH_EXTERNAL_IPS . 9.2. Enabling external IP collection in secured clusters To enable external IP collection in secured clusters, you must individually configure each secured cluster's runtime configuration. You can have some clusters with the functionality enabled while others remain disabled. In this case, external IP information is only available for the clusters where you have enabled the feature. You can use a ConfigMap object to enable the external IP collection in secured clusters. Procedure Create a ConfigMap object called collector-config with the following content: apiVersion: v1 kind: ConfigMap metadata: name: collector-config namespace: stackrox data: runtime_config.yaml: | 1 networking: externalIps: enabled: ENABLED 2 1 RHACS mounts this file at /etc/stackrox/runtime_config.yaml . 2 networking.externalIps.enable was changed to networking.externalIps.enabled in RHACS 4.7. It is an enum and can be set to ENABLED or DISABLED . When you create or update the ConfigMap object, the collector refreshes the runtime configuration. When you delete the ConfigMap object, the settings revert to the default runtime configuration values. For more information, see Using Collector runtime configuration . 9.3. 
Querying external IP addresses by using the API You can get information about the external IP addresses associated with a specific cluster by using the following endpoints: /v1/networkgraph/cluster/{clusterId}/externalentities : This endpoint returns a list of external entities for a given cluster ID. Each entity includes the following information: Name : The name of the external entity. CIDR block : The CIDR block associated with the entity. Default entity : Indicates that the entity is a CIDR-block definition provided by the system. Discovered : If true , indicates that the external IP address does not match any specified CIDR block. /v1/networkgraph/cluster/{clusterId}/externalentities/{entityId}/flows : This endpoint reports the flows to and from an external entity for a given cluster ID and entity ID. Use this endpoint to analyze network traffic patterns and gain insights into the interactions between your cluster and external entities. /v1/networkgraph/cluster/{clusterId}/externalentities/metadata : This endpoint reports statistics about the external flows for a given cluster ID. It reports details about each entity, as well as the number of flows associated with it. 9.4. Known limitations The following are some known limitations of the Visualizing external entities feature: When you enable external IP collection for a cluster, Collector in those clusters report more information to Sensor and to Central. This might create scalability issues if the workload in the cluster communicates with a large number of distinct external peers. It is recommended that you do not enable this feature on clusters with communication patterns involving more than 10,000 distinct external entities. You cannot see external IP addresses if they are part of CIDR blocks. When you enable external IP collection, external IP addresses might appear in a deployment's network baseline. | [
"spec: customize: envVars: - name: ROX_EXTERNAL_IPS 1 value: 'true'",
"customize: # Extra environment variables for all containers in all objects. envVars: ROX_EXTERNAL_IPS: \"true\" 1",
"apiVersion: v1 kind: ConfigMap metadata: name: collector-config namespace: stackrox data: runtime_config.yaml: | 1 networking: externalIps: enabled: ENABLED 2"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/visualizing-external-entities |
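Once external IP collection is enabled, the endpoints listed in section 9.3 can be exercised with any HTTP client. The following sketch assumes an RHACS API token with read access to the network graph; the Central address, token, and cluster ID are placeholders, and the shape of the JSON responses is not reproduced here.

```bash
# Placeholders: replace with your Central endpoint, API token, and cluster ID.
ROX_ENDPOINT="central.example.com:443"
ROX_API_TOKEN="<api_token>"
CLUSTER_ID="<cluster_id>"

# List the external entities observed for the cluster.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/networkgraph/cluster/${CLUSTER_ID}/externalentities"

# Report summary statistics about the external flows for the same cluster.
curl -sk -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_ENDPOINT}/v1/networkgraph/cluster/${CLUSTER_ID}/externalentities/metadata"
```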
Chapter 10. Using XML Documents | Chapter 10. Using XML Documents Abstract The pure XML payload format provides an alternative to the SOAP binding by allowing services to exchange data using straight XML documents without the overhead of a SOAP envelope. XML binding namespace The extensions used to describe XML format bindings are defined in the namespace http://cxf.apache.org/bindings/xformat . Apache CXF tools use the prefix xformat to represent the XML binding extensions. Add the following line to your contracts: Hand editing To map an interface to a pure XML payload format do the following: Add the namespace declaration to include the extensions defining the XML binding. See the section called "XML binding namespace" . Add a standard WSDL binding element to your contract to hold the XML binding, give the binding a unique name , and specify the name of the WSDL portType element that represents the interface being bound. Add an xformat:binding child element to the binding element to identify that the messages are being handled as pure XML documents without SOAP envelopes. Optionally, set the xformat:binding element's rootNode attribute to a valid QName. For more information on the effect of the rootNode attribute see the section called "XML messages on the wire" . For each operation defined in the bound interface, add a standard WSDL operation element to hold the binding information for the operation's messages. For each operation added to the binding, add the input , output , and fault children elements to represent the messages used by the operation. These elements correspond to the messages defined in the interface definition of the logical operation. Optionally add an xformat:body element with a valid rootNode attribute to the added input , output , and fault elements to override the value of rootNode set at the binding level. Note If any of your messages have no parts, for example the output message for an operation that returns void, you must set the rootNode attribute for the message to ensure that the message written on the wire is a valid, but empty, XML document. XML messages on the wire When you specify that an interface's messages are to be passed as XML documents, without a SOAP envelope, you must take care to ensure that your messages form valid XML documents when they are written on the wire. You also need to ensure that non-Apache CXF participants that receive the XML documents understand the messages generated by Apache CXF. A simple way to solve both problems is to use the optional rootNode attribute on either the global xformat:binding element or on the individual message's xformat:body elements. The rootNode attribute specifies the QName for the element that serves as the root node for the XML document generated by Apache CXF. When the rootNode attribute is not set, Apache CXF uses the root element of the message part as the root element when using doc style messages, or an element using the message part name as the root element when using rpc style messages. For example, if the rootNode attribute is not set the message defined in Example 10.1, "Valid XML Binding Message" would generate an XML document with the root element lineNumber . Example 10.1. Valid XML Binding Message For messages with one part, Apache CXF will always generate a valid XML document even if the rootNode attribute is not set. However, the message in Example 10.2, "Invalid XML Binding Message" would generate an invalid XML document. Example 10.2. 
Invalid XML Binding Message Without the rootNode attribute specified in the XML binding, Apache CXF will generate an XML document similar to Example 10.3, "Invalid XML Document" for the message defined in Example 10.2, "Invalid XML Binding Message" . The generated XML document is invalid because it has two root elements: pairName and entryNum . Example 10.3. Invalid XML Document If you set the rootNode attribute, as shown in Example 10.4, "XML Binding with rootNode set" Apache CXF will wrap the elements in the specified root element. In this example, the rootNode attribute is defined for the entire binding and specifies that the root element will be named entrants. Example 10.4. XML Binding with rootNode set An XML document generated from the input message would be similar to Example 10.5, "XML Document generated using the rootNode attribute" . Notice that the XML document now only has one root element. Example 10.5. XML Document generated using the rootNode attribute Overriding the binding's rootNode attribute setting You can also set the rootNode attribute for each individual message, or override the global setting for a particular message, by using the xformat:body element inside of the message binding. For example, if you wanted the output message defined in Example 10.4, "XML Binding with rootNode set" to have a different root element from the input message, you could override the binding's root element as shown in Example 10.6, "Using xformat:body " . Example 10.6. Using xformat:body | [
"xmlns:xformat=\"http://cxf.apache.org/bindings/xformat\"",
"<type ... > <element name=\"operatorID\" type=\"xsd:int\"/> </types> <message name=\"operator\"> <part name=\"lineNumber\" element=\"ns1:operatorID\"/> </message>",
"<types> <element name=\"pairName\" type=\"xsd:string\"/> <element name=\"entryNum\" type=\"xsd:int\"/> </types> <message name=\"matildas\"> <part name=\"dancing\" element=\"ns1:pairName\"/> <part name=\"number\" element=\"ns1:entryNum\"/> </message>",
"<pairName> Fred&Linda </pairName> <entryNum> 123 </entryNum>",
"<portType name=\"danceParty\"> <operation name=\"register\"> <input message=\"tns:matildas\" name=\"contestant\"/> </operation> </portType> <binding name=\"matildaXMLBinding\" type=\"tns:dancingMatildas\"> <xmlformat:binding rootNode=\"entrants\"/> <operation name=\"register\"> <input name=\"contestant\"/> <output name=\"entered\"/> </binding>",
"<entrants> <pairName> Fred&Linda <entryNum> 123 </entryNum> </entrants>",
"<binding name=\"matildaXMLBinding\" type=\"tns:dancingMatildas\"> <xmlformat:binding rootNode=\"entrants\"/> <operation name=\"register\"> <input name=\"contestant\"/> <output name=\"entered\"> <xformat:body rootNode=\"entryStatus\" /> </output> </operation> </binding>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/fusecxfxmlbinding |
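For reference, the fragments shown in the chapter's examples can be assembled into a single binding sketch. The snippet below uses the xformat prefix declared at the start of the chapter and keeps the binding and type names as they appear in the examples; the surrounding definitions element, its namespace URIs other than the xformat one, and the tns prefix binding are assumptions for illustration.

```xml
<!-- Sketch of a complete XML-format binding; namespace URIs other than the xformat one are assumptions. -->
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:xformat="http://cxf.apache.org/bindings/xformat"
             xmlns:tns="http://example.com/danceParty"
             targetNamespace="http://example.com/danceParty">

  <!-- types, messages, and the danceParty portType as defined earlier in the chapter -->

  <binding name="matildaXMLBinding" type="tns:dancingMatildas">
    <!-- Binding-level root node; applies to every message unless overridden. -->
    <xformat:binding rootNode="entrants"/>
    <operation name="register">
      <input name="contestant"/>
      <output name="entered">
        <!-- Message-level override of the binding-level rootNode. -->
        <xformat:body rootNode="entryStatus"/>
      </output>
    </operation>
  </binding>
</definitions>
```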
Logging | Logging OpenShift Container Platform 4.9 OpenShift Logging installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" annotations: logging.openshift.io/preview-vector-collector: enabled spec: collection: logs: type: \"vector\" vector: {}",
"oc delete pod -l component=collector",
"oc delete pod -l component=collector",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage_class_name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 0/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 0/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 0/1 1 0 6m44s",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc create -f <file-name>.yaml",
"oc create -f eo-namespace.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging annotations: openshift.io/node-selector: \"\" labels: openshift.io/cluster-monitoring: \"true\"",
"oc create -f <file-name>.yaml",
"oc create -f olo-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-operators-redhat namespace: openshift-operators-redhat 1 spec: {}",
"oc create -f <file-name>.yaml",
"oc create -f eo-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: \"elasticsearch-operator\" namespace: \"openshift-operators-redhat\" 1 spec: channel: \"stable-5.1\" 2 installPlanApproval: \"Automatic\" 3 source: \"redhat-operators\" 4 sourceNamespace: \"openshift-marketplace\" name: \"elasticsearch-operator\"",
"oc create -f <file-name>.yaml",
"oc create -f eo-sub.yaml",
"oc get csv --all-namespaces",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE default elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-node-lease elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-public elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded kube-system elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-apiserver elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication-operator elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded openshift-authentication elasticsearch-operator.5.1.0-202007012112.p0 OpenShift Elasticsearch Operator 5.1.0-202007012112.p0 Succeeded",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: targetNamespaces: - openshift-logging 2",
"oc create -f <file-name>.yaml",
"oc create -f olo-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: \"stable\" 2 name: cluster-logging source: redhat-operators 3 sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f olo-sub.yaml",
"oc get csv -n openshift-logging",
"NAMESPACE NAME DISPLAY VERSION REPLACES PHASE openshift-logging clusterlogging.5.1.0-202007012112.p0 OpenShift Logging 5.1.0-202007012112.p0 Succeeded",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" spec: managementState: \"Managed\" 2 logStore: type: \"elasticsearch\" 3 retentionPolicy: 4 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 5 storage: storageClassName: \"<storage-class-name>\" 6 size: 200G resources: 7 limits: memory: \"16Gi\" requests: memory: \"16Gi\" proxy: 8 resources: limits: memory: 256Mi requests: memory: 256Mi redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" 9 kibana: replicas: 1 collection: logs: type: \"fluentd\" 10 fluentd: {}",
"oc get deployment",
"cluster-logging-operator 1/1 1 1 18h elasticsearch-cd-x6kdekli-1 1/1 1 0 6m54s elasticsearch-cdm-x6kdekli-1 1/1 1 1 18h elasticsearch-cdm-x6kdekli-2 1/1 1 0 6m49s elasticsearch-cdm-x6kdekli-3 1/1 1 0 6m44s",
"oc create -f <file-name>.yaml",
"oc create -f olo-instance.yaml",
"oc get pods -n openshift-logging",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-66f77ffccb-ppzbg 1/1 Running 0 7m elasticsearch-cdm-ftuhduuw-1-ffc4b9566-q6bhp 2/2 Running 0 2m40s elasticsearch-cdm-ftuhduuw-2-7b4994dbfc-rd2gc 2/2 Running 0 2m36s elasticsearch-cdm-ftuhduuw-3-84b5ff7ff8-gqnm2 2/2 Running 0 2m4s collector-587vb 1/1 Running 0 2m26s collector-7mpb9 1/1 Running 0 2m30s collector-flm6j 1/1 Running 0 2m33s collector-gn4rn 1/1 Running 0 2m26s collector-nlgb6 1/1 Running 0 2m30s collector-snpkt 1/1 Running 0 2m28s kibana-d6d5668c5-rppqm 2/2 Running 0 2m39s",
"oc auth can-i get pods/log -n <project>",
"yes",
"oc adm pod-network join-projects --to=openshift-operators-redhat openshift-logging",
"oc label namespace openshift-operators-redhat project=openshift-operators-redhat",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring-ingress-operators-redhat spec: ingress: - from: - podSelector: {} - from: - namespaceSelector: matchLabels: project: \"openshift-operators-redhat\" - from: - namespaceSelector: matchLabels: name: \"openshift-monitoring\" - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" 1 namespace: \"openshift-logging\" 2 spec: managementState: \"Managed\" 3 logStore: type: \"elasticsearch\" 4 retentionPolicy: application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3 resources: limits: memory: 16Gi requests: cpu: 500m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: 5 type: \"kibana\" kibana: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi replicas: 1 collection: 6 logs: type: \"fluentd\" fluentd: resources: limits: memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc get pods --selector component=collector -o wide -n openshift-logging",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES fluentd-8d69v 1/1 Running 0 134m 10.130.2.30 master1.example.com <none> <none> fluentd-bd225 1/1 Running 0 134m 10.131.1.11 master2.example.com <none> <none> fluentd-cvrzs 1/1 Running 0 134m 10.130.0.21 master3.example.com <none> <none> fluentd-gpqg2 1/1 Running 0 134m 10.128.2.27 worker1.example.com <none> <none> fluentd-l9j7j 1/1 Running 0 134m 10.129.2.31 worker2.example.com <none> <none>",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: collection: logs: fluentd: resources: limits: 1 memory: 736Mi requests: cpu: 100m memory: 736Mi",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: name: instance namespace: openshift-logging spec: forwarder: fluentd: buffer: chunkLimitSize: 8m 1 flushInterval: 5s 2 flushMode: interval 3 flushThreadCount: 3 4 overflowAction: throw_exception 5 retryMaxInterval: \"300s\" 6 retryType: periodic 7 retryWait: 1s 8 totalLimitSize: 32m 9",
"oc get pods -l component=collector -n openshift-logging",
"oc extract configmap/fluentd --confirm",
"<buffer> @type file path '/var/lib/fluentd/default' flush_mode interval flush_interval 5s flush_thread_count 3 retry_type periodic retry_wait 1s retry_max_interval 300s retry_timeout 60m queued_chunks_limit_size \"#{ENV['BUFFER_QUEUE_LIMIT'] || '32'}\" total_limit_size 32m chunk_limit_size 8m overflow_action throw_exception </buffer>",
"outputRefs: - default",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" collection: logs: type: \"fluentd\" fluentd: {}",
"oc get pods -l component=collector -n openshift-logging",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: pipelines: 1 - name: all-to-default inputRefs: - infrastructure - application - audit outputRefs: - default",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch-insecure type: \"elasticsearch\" url: http://elasticsearch-insecure.messaging.svc.cluster.local insecure: true - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch-secure.messaging.svc.cluster.local secret: name: es-audit - name: secureforward-offcluster type: \"fluentdForward\" url: https://secureforward.offcluster.com:24224 secret: name: secureforward pipelines: - name: container-logs inputRefs: - application outputRefs: - secureforward-offcluster - name: infra-logs inputRefs: - infrastructure outputRefs: - elasticsearch-insecure - name: audit-logs inputRefs: - audit outputRefs: - elasticsearch-secure - default 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" retentionPolicy: 1 application: maxAge: 1d infra: maxAge: 7d audit: maxAge: 7d elasticsearch: nodeCount: 3",
"apiVersion: \"logging.openshift.io/v1\" kind: \"Elasticsearch\" metadata: name: \"elasticsearch\" spec: indexManagement: policies: 1 - name: infra-policy phases: delete: minAge: 7d 2 hot: actions: rollover: maxAge: 8h 3 pollInterval: 15m 4",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 4s elasticsearch-im-audit */15 * * * * False 0 <none> 4s elasticsearch-im-infra */15 * * * * False 0 <none> 4s",
"oc edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: 1 resources: limits: 2 memory: \"32Gi\" requests: 3 cpu: \"1\" memory: \"16Gi\" proxy: 4 resources: limits: memory: 100Mi requests: memory: 100Mi",
"resources: limits: 1 memory: \"32Gi\" requests: 2 cpu: \"8\" memory: \"32Gi\"",
"oc edit clusterlogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: logStore: type: \"elasticsearch\" elasticsearch: redundancyPolicy: \"SingleRedundancy\" 1",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"gp2\" size: \"200G\"",
"spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}",
"oc project openshift-logging",
"oc get pods -l component=elasticsearch-",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"false\"}}}}}'",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"oc exec -c elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_flush/synced\" -XPOST",
"{\"_shards\":{\"total\":4,\"successful\":4,\"failed\":0},\".security\":{\"total\":2,\"successful\":2,\"failed\":0},\".kibana_1\":{\"total\":2,\"successful\":2,\"failed\":0}}",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"primaries\" } }'",
"{\"acknowledged\":true,\"persistent\":{\"cluster\":{\"routing\":{\"allocation\":{\"enable\":\"primaries\"}}}},\"transient\":",
"oc rollout resume deployment/<deployment-name>",
"oc rollout resume deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 resumed",
"oc get pods -l component=elasticsearch-",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6k 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-2-f799564cb-l9mj7 2/2 Running 0 22h elasticsearch-cdm-5ceex6ts-3-585968dc68-k7kjr 2/2 Running 0 22h",
"oc rollout pause deployment/<deployment-name>",
"oc rollout pause deployment/elasticsearch-cdm-0-1",
"deployment.extensions/elasticsearch-cdm-0-1 paused",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=_cluster/health?pretty=true",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"yellow\", 1 \"timed_out\" : false, \"number_of_nodes\" : 3, \"number_of_data_nodes\" : 3, \"active_primary_shards\" : 8, \"active_shards\" : 16, \"relocating_shards\" : 0, \"initializing_shards\" : 0, \"unassigned_shards\" : 1, \"delayed_unassigned_shards\" : 0, \"number_of_pending_tasks\" : 0, \"number_of_in_flight_fetch\" : 0, \"task_max_waiting_in_queue_millis\" : 0, \"active_shards_percent_as_number\" : 100.0 }",
"oc exec <any_es_pod_in_the_cluster> -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"oc exec elasticsearch-cdm-5ceex6ts-1-dcd6c4c7c-jpw6 -c elasticsearch -- es_util --query=\"_cluster/settings\" -XPUT -d '{ \"persistent\": { \"cluster.routing.allocation.enable\" : \"all\" } }'",
"{ \"acknowledged\" : true, \"persistent\" : { }, \"transient\" : { \"cluster\" : { \"routing\" : { \"allocation\" : { \"enable\" : \"all\" } } } } }",
"oc -n openshift-logging patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"nodeSelector\":{\"logging-infra-collector\": \"true\"}}}}}'",
"oc get service elasticsearch -o jsonpath={.spec.clusterIP} -n openshift-logging",
"172.30.183.229",
"oc get service elasticsearch -n openshift-logging",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.183.229 <none> 9200/TCP 22h",
"oc exec elasticsearch-cdm-oplnhinv-1-5746475887-fj2f8 -n openshift-logging -- curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://172.30.183.229:9200/_cat/health\"",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 29 100 29 0 0 108 0 --:--:-- --:--:-- --:--:-- 108",
"oc project openshift-logging",
"oc extract secret/elasticsearch --to=. --keys=admin-ca",
"admin-ca",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: elasticsearch namespace: openshift-logging spec: host: to: kind: Service name: elasticsearch tls: termination: reencrypt destinationCACertificate: | 1",
"cat ./admin-ca | sed -e \"s/^/ /\" >> <file-name>.yaml",
"oc create -f <file-name>.yaml",
"route.route.openshift.io/elasticsearch created",
"token=USD(oc whoami -t)",
"routeES=`oc get route elasticsearch -o jsonpath={.spec.host}`",
"curl -tlsv1.2 --insecure -H \"Authorization: Bearer USD{token}\" \"https://USD{routeES}\"",
"{ \"name\" : \"elasticsearch-cdm-i40ktba0-1\", \"cluster_name\" : \"elasticsearch\", \"cluster_uuid\" : \"0eY-tJzcR3KOdpgeMJo-MQ\", \"version\" : { \"number\" : \"6.8.1\", \"build_flavor\" : \"oss\", \"build_type\" : \"zip\", \"build_hash\" : \"Unknown\", \"build_date\" : \"Unknown\", \"build_snapshot\" : true, \"lucene_version\" : \"7.7.0\", \"minimum_wire_compatibility_version\" : \"5.6.0\", \"minimum_index_compatibility_version\" : \"5.0.0\" }, \"<tagline>\" : \"<for search>\" }",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"oc edit ClusterLogging instance",
"oc edit ClusterLogging instance apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" . spec: visualization: type: \"kibana\" kibana: replicas: 1 1",
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 resources: 1 limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: storageClassName: \"gp2\" size: \"200G\" redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: resources: 2 limits: memory: 1Gi requests: cpu: 500m memory: 1Gi proxy: resources: 3 limits: memory: 100Mi requests: cpu: 100m memory: 100Mi replicas: 2 collection: logs: type: \"fluentd\" fluentd: resources: 4 limits: memory: 736Mi requests: cpu: 200m memory: 736Mi",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: openshift-logging spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 tolerations: 1 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 16Gi requests: cpu: 200m memory: 16Gi storage: {} redundancyPolicy: \"ZeroRedundancy\" visualization: type: \"kibana\" kibana: tolerations: 2 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi replicas: 1 collection: logs: type: \"fluentd\" fluentd: tolerations: 3 - key: \"logging\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 6000 resources: limits: memory: 2Gi requests: cpu: 100m memory: 1Gi",
"tolerations: - effect: \"NoExecute\" key: \"node.kubernetes.io/disk-pressure\" operator: \"Exists\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 elasticsearch=node:NoExecute",
"logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 1 tolerations: - key: \"elasticsearch\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 kibana=node:NoExecute",
"visualization: type: \"kibana\" kibana: tolerations: - key: \"kibana\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"tolerations: - key: \"node-role.kubernetes.io/master\" operator: \"Exists\" effect: \"NoExecute\"",
"oc adm taint nodes <node-name> <key>=<value>:<effect>",
"oc adm taint nodes node1 collector=node:NoExecute",
"collection: logs: type: \"fluentd\" fluentd: tolerations: - key: \"collector\" 1 operator: \"Exists\" 2 effect: \"NoExecute\" 3 tolerationSeconds: 6000 4",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s",
"variant: openshift version: 4.9.0 metadata: name: 40-worker-custom-journald labels: machineconfiguration.openshift.io/role: \"worker\" storage: files: - path: /etc/systemd/journald.conf mode: 0644 1 overwrite: true contents: inline: | Compress=yes 2 ForwardToConsole=no 3 ForwardToSyslog=no MaxRetentionSec=1month 4 RateLimitBurst=10000 5 RateLimitIntervalSec=30s Storage=persistent 6 SyncIntervalSec=1s 7 SystemMaxUse=8G 8 SystemKeepFree=20% 9 SystemMaxFileSize=10M 10",
"butane 40-worker-custom-journald.bu -o 40-worker-custom-journald.yaml",
"oc apply -f 40-worker-custom-journald.yaml",
"oc describe machineconfigpool/worker",
"Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Conditions: Message: Reason: All nodes are updating to rendered-worker-913514517bcea7c93bd446f4830bc64e",
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc logs -f <pod_name> -c <container_name>",
"oc logs ruby-58cd97df55-mww7r",
"oc logs -f ruby-57f7f4855b-znl92 -c ruby",
"oc logs <object_type>/<resource_name> 1",
"oc logs deployment/ruby",
"oc auth can-i get pods/log -n <project>",
"yes",
"oc auth can-i get pods/log -n <project>",
"yes",
"{ \"_index\": \"infra-000001\", \"_type\": \"_doc\", \"_id\": \"YmJmYTBlNDkZTRmLTliMGQtMjE3NmFiOGUyOWM3\", \"_version\": 1, \"_score\": null, \"_source\": { \"docker\": { \"container_id\": \"f85fa55bbef7bb783f041066be1e7c267a6b88c4603dfce213e32c1\" }, \"kubernetes\": { \"container_name\": \"registry-server\", \"namespace_name\": \"openshift-marketplace\", \"pod_name\": \"redhat-marketplace-n64gc\", \"container_image\": \"registry.redhat.io/redhat/redhat-marketplace-index:v4.7\", \"container_image_id\": \"registry.redhat.io/redhat/redhat-marketplace-index@sha256:65fc0c45aabb95809e376feb065771ecda9e5e59cc8b3024c4545c168f\", \"pod_id\": \"8f594ea2-c866-4b5c-a1c8-a50756704b2a\", \"host\": \"ip-10-0-182-28.us-east-2.compute.internal\", \"master_url\": \"https://kubernetes.default.svc\", \"namespace_id\": \"3abab127-7669-4eb3-b9ef-44c04ad68d38\", \"namespace_labels\": { \"openshift_io/cluster-monitoring\": \"true\" }, \"flat_labels\": [ \"catalogsource_operators_coreos_com/update=redhat-marketplace\" ] }, \"message\": \"time=\\\"2020-09-23T20:47:03Z\\\" level=info msg=\\\"serving registry\\\" database=/database/index.db port=50051\", \"level\": \"unknown\", \"hostname\": \"ip-10-0-182-28.internal\", \"pipeline_metadata\": { \"collector\": { \"ipaddr4\": \"10.0.182.28\", \"inputname\": \"fluent-plugin-systemd\", \"name\": \"fluentd\", \"received_at\": \"2020-09-23T20:47:15.007583+00:00\", \"version\": \"1.7.4 1.6.0\" } }, \"@timestamp\": \"2020-09-23T20:47:03.422465+00:00\", \"viaq_msg_id\": \"YmJmYTBlNDktMDMGQtMjE3NmFiOGUyOWM3\", \"openshift\": { \"labels\": { \"logging\": \"infra\" } } }, \"fields\": { \"@timestamp\": [ \"2020-09-23T20:47:03.422Z\" ], \"pipeline_metadata.collector.received_at\": [ \"2020-09-23T20:47:15.007Z\" ] }, \"sort\": [ 1600894023422 ] }",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-secure 3 type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: elasticsearch - name: elasticsearch-insecure 4 type: \"elasticsearch\" url: http://elasticsearch.insecure.com:9200 - name: kafka-app 5 type: \"kafka\" url: tls://kafka.secure.com:9093/app-topic inputs: 6 - name: my-app-logs application: namespaces: - my-project pipelines: - name: audit-logs 7 inputRefs: - audit outputRefs: - elasticsearch-secure - default parse: json 8 labels: secure: \"true\" 9 datacenter: \"east\" - name: infrastructure-logs 10 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: datacenter: \"west\" - name: my-app 11 inputRefs: - my-app-logs outputRefs: - default - inputRefs: 12 - application outputRefs: - kafka-app labels: datacenter: \"south\"",
"oc create secret generic -n openshift-logging <my-secret> --from-file=tls.key=<your_key_file> --from-file=tls.crt=<your_crt_file> --from-file=ca-bundle.crt=<your_bundle_file> --from-literal=username=<your_username> --from-literal=password=<your_password>",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: elasticsearch-insecure 3 type: \"elasticsearch\" 4 url: http://elasticsearch.insecure.com:9200 5 - name: elasticsearch-secure type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 6 secret: name: es-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - elasticsearch-secure 10 - default 11 parse: json 12 labels: myLabel: \"myValue\" 13 - name: infrastructure-audit-logs 14 inputRefs: - infrastructure outputRefs: - elasticsearch-insecure labels: logs: \"audit-infra\"",
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Secret metadata: name: openshift-test-secret data: username: dGVzdHVzZXJuYW1lCg== password: dGVzdHBhc3N3b3JkCg==",
"oc create secret -n openshift-logging openshift-test-secret.yaml",
"kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: elasticsearch type: \"elasticsearch\" url: https://elasticsearch.secure.com:9200 secret: name: openshift-test-secret",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' pipelines: - name: forward-to-fluentd-secure 7 inputRefs: 8 - application - audit outputRefs: - fluentd-server-secure 9 - default 10 parse: json 11 labels: clusterId: \"C1234\" 12 - name: forward-to-fluentd-insecure 13 inputRefs: - infrastructure outputRefs: - fluentd-server-insecure labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"input { tcp { codec => fluent { nanosecond_precision => true } port => 24114 } } filter { } output { stdout { codec => rubydebug } }",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: rsyslog-east 3 type: syslog 4 syslog: 5 facility: local0 rfc: RFC3164 payloadKey: message severity: informational url: 'tls://rsyslogserver.east.example.com:514' 6 secret: 7 name: syslog-secret - name: rsyslog-west type: syslog syslog: appName: myapp facility: user msgID: mymsg procID: myproc rfc: RFC5424 severity: debug url: 'udp://rsyslogserver.west.example.com:514' pipelines: - name: syslog-east 8 inputRefs: 9 - audit - application outputRefs: 10 - rsyslog-east - default 11 parse: json 12 labels: secure: \"true\" 13 syslog: \"east\" - name: syslog-west 14 inputRefs: - infrastructure outputRefs: - rsyslog-west - default labels: syslog: \"west\"",
"oc create -f <file-name>.yaml",
"spec: outputs: - name: syslogout syslog: addLogSource: true facility: user payloadKey: message rfc: RFC3164 severity: debug tag: mytag type: syslog url: tls://syslog-receiver.openshift-logging.svc:24224 pipelines: - inputRefs: - application name: test-app outputRefs: - syslogout",
"<15>1 2020-11-15T17:06:14+00:00 fluentd-9hkb4 mytag - - - {\"msgcontent\"=>\"Message Contents\", \"timestamp\"=>\"2020-11-15 17:06:09\", \"tag_key\"=>\"rec_tag\", \"index\"=>56}",
"<15>1 2020-11-16T10:49:37+00:00 crc-j55b9-master-0 mytag - - - namespace_name=clo-test-6327,pod_name=log-generator-ff9746c49-qxm7l,container_name=log-generator,message={\"msgcontent\":\"My life is my message\", \"timestamp\":\"2020-11-16 10:49:36\", \"tag_key\":\"rec_tag\", \"index\":76}",
"apiVersion: v1 kind: Secret metadata: name: cw-secret namespace: openshift-logging data: aws_access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK aws_secret_access_key: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo=",
"oc apply -f cw-secret.yaml",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: cw 3 type: cloudwatch 4 cloudwatch: groupBy: logType 5 groupPrefix: <group prefix> 6 region: us-east-2 7 secret: name: cw-secret 8 pipelines: - name: infra-logs 9 inputRefs: 10 - infrastructure - audit - application outputRefs: - cw 11",
"oc create -f <file-name>.yaml",
"oc get Infrastructure/cluster -ojson | jq .status.infrastructureName \"mycluster-7977k\"",
"oc run busybox --image=busybox -- sh -c 'while true; do echo \"My life is my message\"; sleep 3; done' oc logs -f busybox My life is my message My life is my message My life is my message",
"oc get ns/app -ojson | jq .metadata.uid \"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging spec: outputs: - name: cw type: cloudwatch cloudwatch: groupBy: logType region: us-east-2 secret: name: cw-secret pipelines: - name: all-logs inputRefs: - infrastructure - audit - application outputRefs: - cw",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.application\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.application | jq .logStreams[].logStreamName \"kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.audit | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.k8s-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.linux-audit.log\" \"ip-10-0-131-228.us-east-2.compute.internal.openshift-audit.log\"",
"aws --output json logs describe-log-streams --log-group-name mycluster-7977k.infrastructure | jq .logStreams[].logStreamName \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-69f9fd9b58-zqzw5_openshift-oauth-apiserver_oauth-apiserver-453c5c4ee026fe20a6139ba6b1cdd1bed25989c905bf5ac5ca211b7cbb5c3d7b.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-ce51532df7d4e4d5f21c4f4be05f6575b93196336be0027067fd7d93d70f66a4.log\" \"ip-10-0-131-228.us-east-2.compute.internal.kubernetes.var.log.containers.apiserver-797774f7c5-lftrx_openshift-apiserver_openshift-apiserver-check-endpoints-82a9096b5931b5c3b1d6dc4b66113252da4a6472c9fff48623baee761911a9ef.log\"",
"aws logs get-log-events --log-group-name mycluster-7977k.application --log-stream-name kubernetes.var.log.containers.busybox_app_busybox-da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76.log { \"events\": [ { \"timestamp\": 1629422704178, \"message\": \"{\\\"docker\\\":{\\\"container_id\\\":\\\"da085893053e20beddd6747acdbaf98e77c37718f85a7f6a4facf09ca195ad76\\\"},\\\"kubernetes\\\":{\\\"container_name\\\":\\\"busybox\\\",\\\"namespace_name\\\":\\\"app\\\",\\\"pod_name\\\":\\\"busybox\\\",\\\"container_image\\\":\\\"docker.io/library/busybox:latest\\\",\\\"container_image_id\\\":\\\"docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60\\\",\\\"pod_id\\\":\\\"870be234-90a3-4258-b73f-4f4d6e2777c7\\\",\\\"host\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"labels\\\":{\\\"run\\\":\\\"busybox\\\"},\\\"master_url\\\":\\\"https://kubernetes.default.svc\\\",\\\"namespace_id\\\":\\\"794e1e1a-b9f5-4958-a190-e76a9b53d7bf\\\",\\\"namespace_labels\\\":{\\\"kubernetes_io/metadata_name\\\":\\\"app\\\"}},\\\"message\\\":\\\"My life is my message\\\",\\\"level\\\":\\\"unknown\\\",\\\"hostname\\\":\\\"ip-10-0-216-3.us-east-2.compute.internal\\\",\\\"pipeline_metadata\\\":{\\\"collector\\\":{\\\"ipaddr4\\\":\\\"10.0.216.3\\\",\\\"inputname\\\":\\\"fluent-plugin-systemd\\\",\\\"name\\\":\\\"fluentd\\\",\\\"received_at\\\":\\\"2021-08-20T01:25:08.085760+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-20T01:25:04.178986+00:00\\\",\\\"viaq_index_name\\\":\\\"app-write\\\",\\\"viaq_msg_id\\\":\\\"NWRjZmUyMWQtZjgzNC00MjI4LTk3MjMtNTk3NmY3ZjU4NDk1\\\",\\\"log_type\\\":\\\"application\\\",\\\"time\\\":\\\"2021-08-20T01:25:04+00:00\\\"}\", \"ingestionTime\": 1629422744016 },",
"cloudwatch: groupBy: logType groupPrefix: demo-group-prefix region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"demo-group-prefix.application\" \"demo-group-prefix.audit\" \"demo-group-prefix.infrastructure\"",
"cloudwatch: groupBy: namespaceName region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.app\" \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"cloudwatch: groupBy: namespaceUUID region: us-east-2",
"aws --output json logs describe-log-groups | jq .logGroups[].logGroupName \"mycluster-7977k.794e1e1a-b9f5-4958-a190-e76a9b53d7bf\" // uid of the \"app\" namespace \"mycluster-7977k.audit\" \"mycluster-7977k.infrastructure\"",
"apiVersion: \"logging.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: loki-insecure 3 type: \"loki\" 4 url: http://loki.insecure.com:3100 5 - name: loki-secure type: \"loki\" url: https://loki.secure.com:3100 6 secret: name: loki-secret 7 pipelines: - name: application-logs 8 inputRefs: 9 - application - audit outputRefs: - loki-secure 10 loki: tenantKey: kubernetes.namespace_name 11 labelKeys: kubernetes.labels.foo 12",
"oc create -f <file-name>.yaml",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded (limit: 8388608 bytes/sec) while attempting to ingest '2140' lines totaling '3285284' bytes 429 Too Many Requests Ingestion rate limit exceeded' or '500 Internal Server Error rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5277702 vs. 4194304)'",
",\\nentry with timestamp 2021-08-18 05:58:55.061936 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\\\"flush_thread_0\\\", log_type=\\\"audit\\\"},\\nentry with timestamp 2021-08-18 06:01:18.290229 +0000 UTC ignored, reason: 'entry out of order' for stream: {fluentd_thread=\"flush_thread_0\", log_type=\"audit\"}",
"auth_enabled: false server: http_listen_port: 3100 grpc_listen_port: 9096 grpc_server_max_recv_msg_size: 8388608 ingester: wal: enabled: true dir: /tmp/wal lifecycler: address: 127.0.0.1 ring: kvstore: store: inmemory replication_factor: 1 final_sleep: 0s chunk_idle_period: 1h # Any chunk not receiving new logs in this time will be flushed chunk_target_size: 8388608 max_chunk_age: 1h # All chunks will be flushed when they hit this age, default is 1h chunk_retain_period: 30s # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m) max_transfer_retries: 0 # Chunk transfers disabled schema_config: configs: - from: 2020-10-24 store: boltdb-shipper object_store: filesystem schema: v11 index: prefix: index_ period: 24h storage_config: boltdb_shipper: active_index_directory: /tmp/loki/boltdb-shipper-active cache_location: /tmp/loki/boltdb-shipper-cache cache_ttl: 24h # Can be increased for faster performance over longer query periods, uses more disk space shared_store: filesystem filesystem: directory: /tmp/loki/chunks compactor: working_directory: /tmp/loki/boltdb-shipper-compactor shared_store: filesystem limits_config: reject_old_samples: true reject_old_samples_max_age: 12h ingestion_rate_mb: 8 ingestion_burst_size_mb: 16 chunk_store_config: max_look_back_period: 0s table_manager: retention_deletes_enabled: false retention_period: 0s ruler: storage: type: local local: directory: /tmp/loki/rules rule_path: /tmp/loki/rules-temp alertmanager_url: http://localhost:9093 ring: kvstore: store: inmemory enable_api: true",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: outputs: - name: fluentd-server-secure 3 type: fluentdForward 4 url: 'tls://fluentdserver.security.example.com:24224' 5 secret: 6 name: fluentd-secret - name: fluentd-server-insecure type: fluentdForward url: 'tcp://fluentdserver.home.example.com:24224' inputs: 7 - name: my-app-logs application: namespaces: - my-project pipelines: - name: forward-to-fluentd-insecure 8 inputRefs: 9 - my-app-logs outputRefs: 10 - fluentd-server-insecure parse: json 11 labels: project: \"my-project\" 12 - name: forward-to-fluentd-secure 13 inputRefs: - application - audit - infrastructure outputRefs: - fluentd-server-secure - default labels: clusterId: \"C1234\"",
"oc create -f <file-name>.yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance 1 namespace: openshift-logging 2 spec: pipelines: - inputRefs: [ myAppLogData ] 3 outputRefs: [ default ] 4 parse: json 5 inputs: 6 - name: myAppLogData application: selector: matchLabels: 7 environment: production app: nginx namespaces: 8 - app1 - app2 outputs: 9 - default",
"- inputRefs: [ myAppLogData, myOtherAppLogData ]",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=collector",
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"{\"message\":\"{\\\"level\\\":\\\"info\\\",\\\"name\\\":\\\"fred\\\",\\\"home\\\":\\\"bedrock\\\"\", \"more fields...\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: <application> outputRefs: default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=collector",
"kind: Template apiVersion: template.openshift.io/v1 metadata: name: eventrouter-template annotations: description: \"A pod forwarding kubernetes events to OpenShift Logging stack.\" tags: \"events,EFK,logging,cluster-logging\" objects: - kind: ServiceAccount 1 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} - kind: ClusterRole 2 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader rules: - apiGroups: [\"\"] resources: [\"events\"] verbs: [\"get\", \"watch\", \"list\"] - kind: ClusterRoleBinding 3 apiVersion: rbac.authorization.k8s.io/v1 metadata: name: event-reader-binding subjects: - kind: ServiceAccount name: eventrouter namespace: USD{NAMESPACE} roleRef: kind: ClusterRole name: event-reader - kind: ConfigMap 4 apiVersion: v1 metadata: name: eventrouter namespace: USD{NAMESPACE} data: config.json: |- { \"sink\": \"stdout\" } - kind: Deployment 5 apiVersion: apps/v1 metadata: name: eventrouter namespace: USD{NAMESPACE} labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" spec: selector: matchLabels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" replicas: 1 template: metadata: labels: component: \"eventrouter\" logging-infra: \"eventrouter\" provider: \"openshift\" name: eventrouter spec: serviceAccount: eventrouter containers: - name: kube-eventrouter image: USD{IMAGE} imagePullPolicy: IfNotPresent resources: requests: cpu: USD{CPU} memory: USD{MEMORY} volumeMounts: - name: config-volume mountPath: /etc/eventrouter volumes: - name: config-volume configMap: name: eventrouter parameters: - name: IMAGE 6 displayName: Image value: \"registry.redhat.io/openshift-logging/eventrouter-rhel8:v0.4\" - name: CPU 7 displayName: CPU value: \"100m\" - name: MEMORY 8 displayName: Memory value: \"128Mi\" - name: NAMESPACE displayName: Namespace value: \"openshift-logging\" 9",
"oc process -f <templatefile> | oc apply -n openshift-logging -f -",
"oc process -f eventrouter.yaml | oc apply -n openshift-logging -f -",
"serviceaccount/eventrouter created clusterrole.authorization.openshift.io/event-reader created clusterrolebinding.authorization.openshift.io/event-reader-binding created configmap/eventrouter created deployment.apps/eventrouter created",
"oc get pods --selector component=eventrouter -o name -n openshift-logging",
"pod/cluster-logging-eventrouter-d649f97c8-qvv8r",
"oc logs <cluster_logging_eventrouter_pod> -n openshift-logging",
"oc logs cluster-logging-eventrouter-d649f97c8-qvv8r -n openshift-logging",
"{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"namespace\":\"openshift-service-catalog-removed\",\"selfLink\":\"/api/v1/namespaces/openshift-service-catalog-removed/events/openshift-service-catalog-controller-manager-remover.1632d931e88fcd8f\",\"uid\":\"787d7b26-3d2f-4017-b0b0-420db4ae62c0\",\"resourceVersion\":\"21399\",\"creationTimestamp\":\"2020-09-08T15:40:26Z\"},\"involvedObject\":{\"kind\":\"Job\",\"namespace\":\"openshift-service-catalog-removed\",\"name\":\"openshift-service-catalog-controller-manager-remover\",\"uid\":\"fac9f479-4ad5-4a57-8adc-cb25d3d9cf8f\",\"apiVersion\":\"batch/v1\",\"resourceVersion\":\"21280\"},\"reason\":\"Completed\",\"message\":\"Job completed\",\"source\":{\"component\":\"job-controller\"},\"firstTimestamp\":\"2020-09-08T15:40:26Z\",\"lastTimestamp\":\"2020-09-08T15:40:26Z\",\"count\":1,\"type\":\"Normal\"}}",
"oc get pod -n openshift-logging --selector component=elasticsearch",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m",
"oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health",
"{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }",
"oc project openshift-logging",
"oc get cronjob",
"NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s",
"oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices",
"Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0",
"oc get ds collector -o json | grep collector",
"\"containerName\": \"collector\"",
"oc get kibana kibana -o json",
"[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]",
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging . status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v",
"-n openshift-logging get pods -l component=elasticsearch",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v",
"logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty",
"-n openshift-logging get po -o wide",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"-n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/logging/cluster-logging-deploying |
13.3. Installing in Text Mode | 13.3. Installing in Text Mode Text mode installation offers an interactive, non-graphical interface for installing Red Hat Enterprise Linux. This can be useful on systems with no graphical capabilities; however, you should always consider the available alternatives before starting a text-based installation. Text mode is limited in the amount of choices you can make during the installation. Important Red Hat recommends that you install Red Hat Enterprise Linux using the graphical interface. If you are installing Red Hat Enterprise Linux on a system that lacks a graphical display, consider performing the installation over a VNC connection - see Chapter 25, Using VNC . The text mode installation program will prompt you to confirm the use of text mode if it detects that a VNC-based installation is possible. If your system has a graphical display, but graphical installation fails, try booting with the inst.xdriver=vesa option - see Chapter 23, Boot Options . Alternatively, consider a Kickstart installation. See Chapter 27, Kickstart Installations for more information. Figure 13.1. Text Mode Installation Installation in text mode follows a pattern similar to the graphical installation: There is no single fixed progression; you can configure many settings in any order you want using the main status screen. Screens which have already been configured, either automatically or by you, are marked as [x] , and screens which require your attention before the installation can begin are marked with [!] . Available commands are displayed below the list of available options. Note When related background tasks are being run, certain menu items can be temporarily unavailable or display the Processing... label. To refresh to the current status of text menu items, use the r option at the text mode prompt. At the bottom of the screen in text mode, a green bar is displayed showing five menu options. These options represent different screens in the tmux terminal multiplexer; by default you start in screen 1, and you can use keyboard shortcuts to switch to other screens which contain logs and an interactive command prompt. For information about available screens and shortcuts to switch to them, see Section 13.2.1, "Accessing Consoles" . Limits of interactive text mode installation include: The installer will always use the English language and the US English keyboard layout. You can configure your language and keyboard settings, but these settings will only apply to the installed system, not to the installation. You cannot configure any advanced storage methods (LVM, software RAID, FCoE, zFCP and iSCSI). It is not possible to configure custom partitioning; you must use one of the automatic partitioning settings. You also cannot configure where the boot loader will be installed. You cannot select any package add-ons to be installed; they must be added after the installation finishes using the Yum package manager. To start a text mode installation, boot the installation with the inst.text boot option used either at the boot command line in the boot menu, or in your PXE server configuration. See Chapter 12, Booting the Installation on IBM Power Systems for information about booting and using boot options. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-text-mode-ppc |
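As an illustration of the inst.text option described above, the following is a minimal sketch of a pxelinux menu entry that boots the installer in text mode. The kernel and initrd file names follow the usual installer layout, and the inst.repo server address and path are placeholders, not values taken from this guide.

    # Hypothetical PXE menu entry: boot the RHEL 7 installer in text mode
    label rhel7-text
      menu label Install Red Hat Enterprise Linux 7 (text mode)
      kernel vmlinuz
      append initrd=initrd.img inst.repo=http://192.0.2.1/rhel7-install/ inst.text

The same inst.text option can instead be appended interactively at the boot menu command line, as described above.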
Chapter 23. Securing endpoints in Red Hat OpenStack Platform | Chapter 23. Securing endpoints in Red Hat OpenStack Platform The process of engaging with an OpenStack cloud begins by querying an API endpoint. While there are different challenges for public and private endpoints, these are high-value assets that can pose a significant risk if compromised. This chapter recommends security enhancements for both public and private-facing API endpoints. 23.1. Internal API communications OpenStack provides public-facing, internal admin, and private API endpoints. By default, OpenStack components use the publicly defined endpoints. The recommendation is to configure these components to use the API endpoint within the proper security domain. The internal admin endpoint allows further elevated access to keystone, so it might be desirable to isolate this endpoint further. Services select their respective API endpoints based on the OpenStack service catalog. These services might not obey the listed public or internal API endpoint values. This can lead to internal management traffic being routed to external API endpoints. 23.2. Configure internal URLs in the Identity service catalog The Identity service catalog should be aware of your internal URLs. While this feature is not used by default, it may be available through configuration. In addition, it should be forward-compatible with expected changes once this behavior becomes the default. Consider isolating the configured endpoints at the network level, given that they have different levels of access. The Admin endpoint is intended for access by cloud administrators, as it provides elevated access to keystone operations not available on the internal or public endpoints. The internal endpoints are intended for uses internal to the cloud (for example, by OpenStack services), and usually would not be accessible outside of the deployment network. The public endpoints should be TLS-enabled, and the only API endpoints accessible outside of the deployment for cloud users to operate on. Registration of an internal URL for an endpoint is automated by director. For more information, see https://github.com/openstack/tripleo-heat-templates/blob/a7857d6dfcc875eb2bc611dd9334104c18fe8ac6/network/endpoints/build_endpoint_map.py . 23.3. Configure applications for internal URLs You can force some services to use specific API endpoints. As a result, it is recommended that any OpenStack service that contacts the API of another service be explicitly configured to access the proper internal API endpoint. Each project might present an inconsistent way of defining target API endpoints. Future releases of OpenStack seek to resolve these inconsistencies through consistent use of the Identity service catalog. 23.4. Paste and middleware Most API endpoints and other HTTP services in OpenStack use the Python Paste Deploy library. From a security perspective, this library enables manipulation of the request filter pipeline through the application's configuration. Each element in this chain is referred to as middleware. Changing the order of filters in the pipeline or adding additional middleware might have an unpredictable security impact. Commonly, implementers add middleware to extend OpenStack's base functionality. Give careful consideration to the potential exposure introduced by the addition of non-standard software components to the HTTP request pipeline.
23.5. Secure metadef APIs In Red Hat OpenStack Platform (RHOSP), cloud administrators can define key value pairs and tag metadata with metadata definition (metadef) APIs. There is no limit on the number of metadef namespaces, objects, properties, resources, or tags that cloud administrators can create. Image service policies control metadef APIs. By default, only cloud administrators can create, update, or delete (CUD) metadef APIs. This limitation prevents metadef APIs from exposing information to unauthorized users and mitigates the risk of a malicious user filling the Image service (glance) database with unlimited resources, which can create a Denial of Service (DoS) style attack. However, cloud administrators can override the default policy. 23.6. Enabling metadef API access for cloud users Cloud administrators with users who depend on write access to metadata definition (metadef) APIs can make those APIs accessible to all users by overriding the default admin-only policy. In this type of configuration, however, there is the potential to unintentionally leak sensitive resource names, such as customer names and internal projects. Administrators must audit their systems to identify previously created resources that might be vulnerable even if only read-access is enabled for all users. Procedure As a cloud administrator, log in to the undercloud and create a file for policy overrides. For example: Configure the policy override file to allow metadef API read-write access to all users: Note You must configure all metadef policies to use rule:metadef_default . For information about policies and policy syntax, see the Policies chapter. Include the new policy file in the deployment command with the -e option when you deploy the overcloud: 23.7. Changing the SSL/TLS cipher and rules for HAProxy If you enabled SSL/TLS in the overcloud, consider hardening the SSL/TLS ciphers and rules that are used with the HAProxy configuration. By hardening the SSL/TLS ciphers, you help avoid SSL/TLS vulnerabilities, such as the POODLE vulnerability. Create a heat template environment file called tls-ciphers.yaml: In the environment file, use the ExtraConfig hook to apply values to the tripleo::haproxy::ssl_options hieradata. Include the tls-ciphers.yaml environment file with the overcloud deploy command when deploying the overcloud: 23.8. Network policy API endpoints will typically span multiple security zones, so you must pay particular attention to the separation of the API processes. For example, at the network design level, you can consider restricting access to specified systems only. See the guidance on security zones for more information. With careful modeling, you can use network ACLs and IDS technologies to enforce explicit point-to-point communication between network services. This type of explicit enforcement works well for OpenStack's message queue service, which is a critical cross-domain service. To enforce policies, you can configure services, host-based firewalls (such as iptables), local policy (SELinux), and optionally global network policy. 23.9. Mandatory access controls You should isolate API endpoint processes from each other and other processes on a machine. The configuration for those processes should be restricted to those processes by Discretionary Access Controls (DAC) and Mandatory Access Controls (MAC). The goal of these enhanced access controls is to aid in the containment of API endpoint security breaches.
23.10. API endpoint rate-limiting Rate Limiting is a means to control the frequency of events received by a network-based application. When robust rate limiting is not present, an application can be susceptible to various denial of service attacks. This is especially true for APIs, which by their nature are designed to accept a high frequency of similar request types and operations. It is recommended that all endpoints (especially public ones) be given an extra layer of protection, for example, by using physical network design, a rate-limiting proxy, or a web application firewall; a conceptual proxy-level sketch follows the examples below. Operators must carefully plan for and consider the individual performance needs of users and services within their OpenStack cloud when configuring and implementing any rate-limiting functionality. Note For Red Hat OpenStack Platform deployments, all services are placed behind load balancing proxies. | [
"cat open-up-glance-api-metadef.yaml",
"GlanceApiPolicies: { glance-metadef_default: { key: 'metadef_default', value: '' }, glance-get_metadef_namespace: { key: 'get_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_namespaces: { key: 'get_metadef_namespaces', value: 'rule:metadef_default' }, glance-modify_metadef_namespace: { key: 'modify_metadef_namespace', value: 'rule:metadef_default' }, glance-add_metadef_namespace: { key: 'add_metadef_namespace', value: 'rule:metadef_default' }, glance-delete_metadef_namespace: { key: 'delete_metadef_namespace', value: 'rule:metadef_default' }, glance-get_metadef_object: { key: 'get_metadef_object', value: 'rule:metadef_default' }, glance-get_metadef_objects: { key: 'get_metadef_objects', value: 'rule:metadef_default' }, glance-modify_metadef_object: { key: 'modify_metadef_object', value: 'rule:metadef_default' }, glance-add_metadef_object: { key: 'add_metadef_object', value: 'rule:metadef_default' }, glance-delete_metadef_object: { key: 'delete_metadef_object', value: 'rule:metadef_default' }, glance-list_metadef_resource_types: { key: 'list_metadef_resource_types', value: 'rule:metadef_default' }, glance-get_metadef_resource_type: { key: 'get_metadef_resource_type', value: 'rule:metadef_default' }, glance-add_metadef_resource_type_association: { key: 'add_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-remove_metadef_resource_type_association: { key: 'remove_metadef_resource_type_association', value: 'rule:metadef_default' }, glance-get_metadef_property: { key: 'get_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_properties: { key: 'get_metadef_properties', value: 'rule:metadef_default' }, glance-modify_metadef_property: { key: 'modify_metadef_property', value: 'rule:metadef_default' }, glance-add_metadef_property: { key: 'add_metadef_property', value: 'rule:metadef_default' }, glance-remove_metadef_property: { key: 'remove_metadef_property', value: 'rule:metadef_default' }, glance-get_metadef_tag: { key: 'get_metadef_tag', value: 'rule:metadef_default' }, glance-get_metadef_tags: { key: 'get_metadef_tags', value: 'rule:metadef_default' }, glance-modify_metadef_tag: { key: 'modify_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tag: { key: 'add_metadef_tag', value: 'rule:metadef_default' }, glance-add_metadef_tags: { key: 'add_metadef_tags', value: 'rule:metadef_default' }, glance-delete_metadef_tag: { key: 'delete_metadef_tag', value: 'rule:metadef_default' }, glance-delete_metadef_tags: { key: 'delete_metadef_tags', value: 'rule:metadef_default' } }",
"openstack overcloud deploy -e open-up-glance-api-metadef.yaml",
"touch ~/templates/tls-ciphers.yaml",
"parameter_defaults: ExtraConfig: # TLSv1.3 configuration tripleo::haproxy::ssl_options:: 'ssl-min-ver TLSv1.3'",
"openstack overcloud deploy --templates -e /home/stack/templates/tls-ciphers.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_securing-endpoints-in-rhosp_security_and_hardening |
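To illustrate the rate-limiting recommendation in section 23.10, the following is a conceptual HAProxy frontend sketch only. In Red Hat OpenStack Platform the HAProxy configuration is managed by director, so the frontend name, bind address, backend name, and thresholds shown here are placeholder values for a standalone proxy, not a supported director configuration.

    # Hypothetical example: track the request rate per source IP and return
    # HTTP 429 to clients that exceed 100 requests in a 10-second window.
    frontend public_api
        bind 203.0.113.10:13000
        stick-table type ip size 100k expire 30s store http_req_rate(10s)
        http-request track-sc0 src
        http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
        default_backend example_api_backend

A dedicated rate-limiting proxy or web application firewall placed in front of the public endpoints achieves the same effect without modifying the director-managed load balancer.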
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 1-0.2 2015-02-25 Laura Bailey Rebuild with sort_order Revision 1-0.1.400 2013-10-31 Rudiger Landmann Rebuild with publican 4.0.0 Revision 1-0.1 Tue Dec 6 2011 Martin Prpic Release of the Red Hat Enterprise Linux 6.2 Release Notes | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/appe-6.2_release_notes-revision_history |
function::caller | function::caller Name function::caller - Return name and address of calling function Synopsis Arguments None Description This function returns the address and name of the calling function. This is equivalent to calling: sprintf("%s 0x%x", symname(caller_addr), caller_addr) | [
"caller:string()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-caller |
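As a usage sketch that is not part of the original reference entry, the following SystemTap script prints the caller of a probed kernel function; vfs_read is chosen only as an example probe point.

    # Hypothetical example: report which function called vfs_read, then exit.
    probe kernel.function("vfs_read") {
        printf("vfs_read called from %s\n", caller())
        exit()
    }

Running the script with stap prints the caller's symbol name and address combined into one string, as described above.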
Chapter 87. Decision engine event listeners and debug logging | Chapter 87. Decision engine event listeners and debug logging The decision engine generates events when performing activities such as fact insertions and rule executions. If you register event listeners, the decision engine calls every listener when an activity is performed. Event listeners have methods that correspond to different types of activities. The decision engine passes an event object to each method; this object contains information about the specific activity. Your code can implement custom event listeners and you can also add and remove registered event listeners. In this way, your code can be notified of decision engine activity, and you can separate logging and auditing work from the core of your application. The decision engine supports the following event listeners with the following methods: Agenda event listener public interface AgendaEventListener extends EventListener { void matchCreated(MatchCreatedEvent event); void matchCancelled(MatchCancelledEvent event); void beforeMatchFired(BeforeMatchFiredEvent event); void afterMatchFired(AfterMatchFiredEvent event); void agendaGroupPopped(AgendaGroupPoppedEvent event); void agendaGroupPushed(AgendaGroupPushedEvent event); void beforeRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void afterRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void beforeRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); void afterRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); } Rule runtime event listener public interface RuleRuntimeEventListener extends EventListener { void objectInserted(ObjectInsertedEvent event); void objectUpdated(ObjectUpdatedEvent event); void objectDeleted(ObjectDeletedEvent event); } For the definitions of event classes, see the GitHub repository . Red Hat Process Automation Manager includes default implementations of these listeners: DefaultAgendaEventListener and DefaultRuleRuntimeEventListener . You can extend each of these implementations to monitor specific events. For example, the following code extends DefaultAgendaEventListener to monitor the AfterMatchFiredEvent event and attaches this listener to a KIE session. The code prints pattern matches when rules are executed (fired): Example code to monitor and print AfterMatchFiredEvent events in the agenda ksession.addEventListener( new DefaultAgendaEventListener() { public void afterMatchFired(AfterMatchFiredEvent event) { super.afterMatchFired( event ); System.out.println( event ); } }); Red Hat Process Automation Manager also includes the following decision engine agenda and rule runtime event listeners for debug logging: DebugAgendaEventListener DebugRuleRuntimeEventListener These event listeners implement the same supported event-listener methods and include a debug print statement by default. You can add additional monitoring code for a specific supported event. For example, the following code uses the DebugRuleRuntimeEventListener event listener to monitor and print all working memory (rule runtime) events: Example code to monitor and print all working memory events ksession.addEventListener( new DebugRuleRuntimeEventListener() ); 87.1. Practices for development of event listeners The decision engine calls event listeners during rule processing. The calls block the execution of the decision engine. Therefore, the event listener can affect the performance of the decision engine. 
To ensure minimal disruption, follow these guidelines: Keep any action that a listener performs as short as possible. A listener class must not have a state. The decision engine can destroy and re-create a listener class at any time. Do not use logic that relies on the order of execution of different event listeners. Do not interact with entities outside the decision engine from within a listener. For example, do not make REST calls to notify other systems of events. An exception is the output of logging information; however, a logging listener must be as simple as possible. You can use a listener to modify the state of the decision engine, for example, to change the values of variables. A minimal custom listener that follows these guidelines is sketched at the end of this chapter. | [
"public interface AgendaEventListener extends EventListener { void matchCreated(MatchCreatedEvent event); void matchCancelled(MatchCancelledEvent event); void beforeMatchFired(BeforeMatchFiredEvent event); void afterMatchFired(AfterMatchFiredEvent event); void agendaGroupPopped(AgendaGroupPoppedEvent event); void agendaGroupPushed(AgendaGroupPushedEvent event); void beforeRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void afterRuleFlowGroupActivated(RuleFlowGroupActivatedEvent event); void beforeRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); void afterRuleFlowGroupDeactivated(RuleFlowGroupDeactivatedEvent event); }",
"public interface RuleRuntimeEventListener extends EventListener { void objectInserted(ObjectInsertedEvent event); void objectUpdated(ObjectUpdatedEvent event); void objectDeleted(ObjectDeletedEvent event); }",
"ksession.addEventListener( new DefaultAgendaEventListener() { public void afterMatchFired(AfterMatchFiredEvent event) { super.afterMatchFired( event ); System.out.println( event ); } });",
"ksession.addEventListener( new DebugRuleRuntimeEventListener() );"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/engine-event-listeners-con_decision-engine |
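The chapter above implements a custom agenda event listener, but the rule runtime listener is shown only through its interface and the debug implementation. As a rough illustration of the section 87.1 guidelines (stateless, short actions, logging only), the following sketch extends DefaultRuleRuntimeEventListener to log fact insertions and deletions. The class name InsertionLoggingListener is invented for this example, and the org.kie.api.event.rule package and event accessor names are assumed from the public KIE API rather than taken from this document.

import org.kie.api.event.rule.DefaultRuleRuntimeEventListener;
import org.kie.api.event.rule.ObjectDeletedEvent;
import org.kie.api.event.rule.ObjectInsertedEvent;

// Stateless listener: no fields, so the decision engine can destroy and
// re-create it at any time without losing information.
public class InsertionLoggingListener extends DefaultRuleRuntimeEventListener {

    @Override
    public void objectInserted(ObjectInsertedEvent event) {
        // Keep the action short: format one line and print it.
        System.out.println( "Inserted fact: " + event.getObject() );
    }

    @Override
    public void objectDeleted(ObjectDeletedEvent event) {
        System.out.println( "Deleted fact: " + event.getOldObject() );
    }
}

You would register it in the same way as the listeners shown earlier in the chapter, for example ksession.addEventListener( new InsertionLoggingListener() );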
Chapter 9. Certified SAP applications on RHEL 8 SAP Max DB 7.9.10.02 and later (See SAP Note 1444241) SAP ASE 16 (See SAP Note 2489781) SAP HANA 2.0 SPS04 and later (See SAP Note 2235581) SAP BI 4.3 and later (See SAP Note 1338845) SAP NetWeaver (See SAP Note 2772999) In general, SAP documents support for its products on specific versions of Red Hat Enterprise Linux in the SAP Product Availability Matrix. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/8.x_release_notes/certified_sap_applications_8.x_release_notes
Chapter 15. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] | Chapter 15. OverlappingRangeIPReservation [whereabouts.cni.cncf.io/v1alpha1] Description OverlappingRangeIPReservation is the Schema for the OverlappingRangeIPReservations API Type object Required spec 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation 15.1.1. .spec Description OverlappingRangeIPReservationSpec defines the desired state of OverlappingRangeIPReservation Type object Required podref Property Type Description containerid string ifname string podref string 15.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations GET : list objects of kind OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations DELETE : delete collection of OverlappingRangeIPReservation GET : list objects of kind OverlappingRangeIPReservation POST : create an OverlappingRangeIPReservation /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} DELETE : delete an OverlappingRangeIPReservation GET : read the specified OverlappingRangeIPReservation PATCH : partially update the specified OverlappingRangeIPReservation PUT : replace the specified OverlappingRangeIPReservation 15.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/overlappingrangeipreservations Table 15.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 15.2. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty 15.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations Table 15.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 15.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OverlappingRangeIPReservation Table 15.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. Table 15.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OverlappingRangeIPReservation Table 15.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.8. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservationList schema 401 - Unauthorized Empty HTTP method POST Description create an OverlappingRangeIPReservation Table 15.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.10. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 15.11. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 202 - Accepted OverlappingRangeIPReservation schema 401 - Unauthorized Empty 15.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/overlappingrangeipreservations/{name} Table 15.12. Global path parameters Parameter Type Description name string name of the OverlappingRangeIPReservation namespace string object name and auth scope, such as for teams and projects Table 15.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OverlappingRangeIPReservation Table 15.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.15. Body parameters Parameter Type Description body DeleteOptions schema Table 15.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OverlappingRangeIPReservation Table 15.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.18. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OverlappingRangeIPReservation Table 15.19. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 15.20. Body parameters Parameter Type Description body Patch schema Table 15.21. HTTP responses HTTP code Reponse body 200 - OK OverlappingRangeIPReservation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OverlappingRangeIPReservation Table 15.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.23. Body parameters Parameter Type Description body OverlappingRangeIPReservation schema Table 15.24. HTTP responses HTTP code Response body 200 - OK OverlappingRangeIPReservation schema 201 - Created OverlappingRangeIPReservation schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_apis/overlappingrangeipreservation-whereabouts-cni-cncf-io-v1alpha1
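The schema in section 15.1 lists only field names and types, so a concrete manifest may help when reading the endpoint tables above. The following hand-written sketch uses the apiVersion, kind, and spec fields from the schema; the object name, namespace, and all field values are invented for illustration and are not taken from this document.

apiVersion: whereabouts.cni.cncf.io/v1alpha1
kind: OverlappingRangeIPReservation
metadata:
  name: 192.168.2.10            # illustrative name only
  namespace: openshift-multus   # illustrative namespace
spec:
  podref: default/example-pod   # required by the schema above
  ifname: net1                  # optional, illustrative value
  containerid: abc123           # optional, illustrative value

Assuming the standard oc client is available, the list endpoint shown in section 15.2 corresponds to a command such as oc get overlappingrangeipreservations --all-namespaces .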
Chapter 1. Customizing nodes | Chapter 1. Customizing nodes OpenShift Container Platform supports both cluster-wide and per-machine configuration via Ignition, which allows arbitrary partitioning and file content changes to the operating system. In general, if a configuration file is documented in Red Hat Enterprise Linux (RHEL), then modifying it via Ignition is supported. There are two ways to deploy machine config changes: Creating machine configs that are included in manifest files to start up a cluster during openshift-install . Creating machine configs that are passed to running OpenShift Container Platform nodes via the Machine Config Operator. Additionally, modifying the reference config, such as the Ignition config that is passed to coreos-installer when installing bare-metal nodes allows per-machine configuration. These changes are currently not visible to the Machine Config Operator. The following sections describe features that you might want to configure on your nodes in this way. 1.1. Creating machine configs with Butane Machine configs are used to configure control plane and worker machines by instructing machines how to create users and file systems, set up the network, install systemd units, and more. Because modifying machine configs can be difficult, you can use Butane configs to create machine configs for you, thereby making node configuration much easier. 1.1.1. About Butane Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. The format of the Butane config file that Butane accepts is defined in the OpenShift Butane config spec . 1.1.2. Installing Butane You can install the Butane tool ( butane ) to create OpenShift Container Platform machine configs from a command-line interface. You can install butane on Linux, Windows, or macOS by downloading the corresponding binary file. Tip Butane releases are backwards-compatible with older releases and with the Fedora CoreOS Config Transpiler (FCCT). Procedure Navigate to the Butane image download page at https://mirror.openshift.com/pub/openshift-v4/clients/butane/ . Get the butane binary: For the newest version of Butane, save the latest butane image to your current directory: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane Optional: For a specific type of architecture you are installing Butane on, such as aarch64 or ppc64le, indicate the appropriate URL. For example: USD curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane Make the downloaded binary file executable: USD chmod +x butane Move the butane binary file to a directory on your PATH . To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification steps You can now use the Butane tool by running the butane command: USD butane <butane_file> 1.1.3. Creating a MachineConfig object by using Butane You can use Butane to produce a MachineConfig object so that you can configure worker or control plane nodes at installation time or via the Machine Config Operator. Prerequisites You have installed the butane utility. Procedure Create a Butane config file. 
The following example creates a file named 99-worker-custom.bu that configures the system console to show kernel debug messages and specifies custom settings for the chrony time service: variant: openshift version: 4.17.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony Note The 99-worker-custom.bu file is set to create a machine config for worker nodes. To deploy on control plane nodes, change the role from worker to master . To do both, you could repeat the whole procedure using different file names for the two types of deployments. Create a MachineConfig object by giving Butane the file that you created in the step: USD butane 99-worker-custom.bu -o ./99-worker-custom.yaml A MachineConfig object YAML file is created for you to finish configuring your machines. Save the Butane config in case you need to update the MachineConfig object in the future. If the cluster is not running yet, generate manifest files and add the MachineConfig object YAML file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-worker-custom.yaml Additional resources Adding kernel modules to nodes Encrypting and mirroring disks during installation 1.2. Adding day-1 kernel arguments Although it is often preferable to modify kernel arguments as a day-2 activity, you might want to add kernel arguments to all master or worker nodes during initial cluster installation. Here are some reasons you might want to add kernel arguments during cluster installation so they take effect before the systems first boot up: You need to do some low-level network configuration before the systems start. You want to disable a feature, such as SELinux, so it has no impact on the systems when they first come up. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. For a listing of arguments you can pass to a RHEL 8 kernel at boot time, see Kernel.org kernel parameters . It is best to only add kernel arguments with this procedure if they are needed to complete the initial OpenShift Container Platform installation. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. In the openshift directory, create a file (for example, 99-openshift-machineconfig-master-kargs.yaml ) to define a MachineConfig object to add the kernel settings. This example adds a loglevel=7 kernel argument to control plane nodes: USD cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF You can change master to worker to add kernel arguments to worker nodes instead. 
Create a separate YAML file to add to both master and worker nodes. You can now continue on to create the cluster. 1.3. Adding kernel modules to nodes For most common hardware, the Linux kernel includes the device driver modules needed to use that hardware when the computer starts up. For some hardware, however, modules are not available in Linux. Therefore, you must find a way to provide those modules to each host computer. This procedure describes how to do that for nodes in an OpenShift Container Platform cluster. When a kernel module is first deployed by following these instructions, the module is made available for the current kernel. If a new kernel is installed, the kmods-via-containers software will rebuild and deploy the module so a compatible version of that module is available with the new kernel. The way that this feature is able to keep the module up to date on each node is by: Adding a systemd service to each node that starts at boot time to detect if a new kernel has been installed and If a new kernel is detected, the service rebuilds the module and installs it to the kernel For information on the software needed for this procedure, see the kmods-via-containers github site. A few important issues to keep in mind: This procedure is Technology Preview. Software tools and examples are not yet available in official RPM form and can only be obtained for now from unofficial github.com sites noted in the procedure. Third-party kernel modules you might add through these procedures are not supported by Red Hat. In this procedure, the software needed to build your kernel modules is deployed in a RHEL 8 container. Keep in mind that modules are rebuilt automatically on each node when that node gets a new kernel. For that reason, each node needs access to a yum repository that contains the kernel and related packages needed to rebuild the module. That content is best provided with a valid RHEL subscription. 1.3.1. Building and testing the kernel module container Before deploying kernel modules to your OpenShift Container Platform cluster, you can test the process on a separate RHEL system. Gather the kernel module's source code, the KVC framework, and the kmod-via-containers software. Then build and test the module. To do that on a RHEL 8 system, do the following: Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software that is required to build the software and container: # yum install podman make git -y Clone the kmod-via-containers repository: Create a folder for the repository: USD mkdir kmods; cd kmods Clone the repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Install a KVC framework instance on your RHEL 8 build host to test the module. This adds a kmods-via-container systemd service and loads it: Change to the kmod-via-containers directory: USD cd kmods-via-containers/ Install the KVC framework instance: USD sudo make install Reload the systemd manager configuration: USD sudo systemctl daemon-reload Get the kernel module source code. The source code might be used to build a third-party module that you do not have control over, but is supplied by others. You will need content similar to the content shown in the kvc-simple-kmod example that can be cloned to your system as follows: USD cd .. 
; git clone https://github.com/kmods-via-containers/kvc-simple-kmod Edit the configuration file, simple-kmod.conf file, in this example, and change the name of the Dockerfile to Dockerfile.rhel : Change to the kvc-simple-kmod directory: USD cd kvc-simple-kmod Rename the Dockerfile: USD cat simple-kmod.conf Example Dockerfile KMOD_CONTAINER_BUILD_CONTEXT="https://github.com/kmods-via-containers/kvc-simple-kmod.git" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES="simple-kmod simple-procfs-kmod" Create an instance of [email protected] for your kernel module, simple-kmod in this example: USD sudo make install Enable the [email protected] instance: USD sudo kmods-via-containers build simple-kmod USD(uname -r) Enable and start the systemd service: USD sudo systemctl enable [email protected] --now Review the service status: USD sudo systemctl status [email protected] Example output β [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago... To confirm that the kernel modules are loaded, use the lsmod command to list the modules: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 Optional. Use other methods to check that the simple-kmod example is working: Look for a "Hello world" message in the kernel ring buffer with dmesg : USD dmesg | grep 'Hello world' Example output [ 6420.761332] Hello world from simple_kmod. Check the value of simple-procfs-kmod in /proc : USD sudo cat /proc/simple-procfs-kmod Example output simple-procfs-kmod number = 0 Run the spkut command to get more information from the module: USD sudo spkut 44 Example output KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container... + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44 Going forward, when the system boots this service will check if a new kernel is running. If there is a new kernel, the service builds a new version of the kernel module and then loads it. If the module is already built, it will just load it. 1.3.2. Provisioning a kernel module to OpenShift Container Platform Depending on whether or not you must have the kernel module in place when OpenShift Container Platform cluster first boots, you can set up the kernel modules to be deployed in one of two ways: Provision kernel modules at cluster install time (day-1) : You can create the content as a MachineConfig object and provide it to openshift-install by including it with a set of manifest files. Provision kernel modules via Machine Config Operator (day-2) : If you can wait until the cluster is up and running to add your kernel module, you can deploy the kernel module software via the Machine Config Operator (MCO). In either case, each node needs to be able to get the kernel packages and related software packages at the time that a new kernel is detected. There are a few ways you can set up each node to be able to obtain that content. Provide RHEL entitlements to each node. Get RHEL entitlements from an existing RHEL host, from the /etc/pki/entitlement directory and copy them to the same location as the other files you provide when you build your Ignition config. Inside the Dockerfile, add pointers to a yum repository containing the kernel and other packages. 
This must include new kernel packages as they are needed to match newly installed kernels. 1.3.2.1. Provision kernel modules via a MachineConfig object By packaging kernel module software with a MachineConfig object, you can deliver that software to worker or control plane nodes at installation time or via the Machine Config Operator. Procedure Register a RHEL 8 system: # subscription-manager register Attach a subscription to the RHEL 8 system: # subscription-manager attach --auto Install software needed to build the software: # yum install podman make git -y Create a directory to host the kernel module and tooling: USD mkdir kmods; cd kmods Get the kmods-via-containers software: Clone the kmods-via-containers repository: USD git clone https://github.com/kmods-via-containers/kmods-via-containers Clone the kvc-simple-kmod repository: USD git clone https://github.com/kmods-via-containers/kvc-simple-kmod Get your module software. In this example, kvc-simple-kmod is used. Create a fakeroot directory and populate it with files that you want to deliver via Ignition, using the repositories cloned earlier: Create the directory: USD FAKEROOT=USD(mktemp -d) Change to the kmod-via-containers directory: USD cd kmods-via-containers Install the KVC framework instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Change to the kvc-simple-kmod directory: USD cd ../kvc-simple-kmod Create the instance: USD make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/ Clone the fakeroot directory, replacing any symbolic links with copies of their targets, by running the following command: USD cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree Create a Butane config file, 99-simple-kmod.bu , that embeds the kernel module tree and enables the systemd service. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.17.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true 1 To deploy on control plane nodes, change worker to master . To deploy on both control plane and worker nodes, perform the remainder of these instructions once for each node type. Use Butane to generate a machine config YAML file, 99-simple-kmod.yaml , containing the files and configuration to be delivered: USD butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml If the cluster is not up yet, generate manifest files and add this file to the openshift directory. If the cluster is already running, apply the file as follows: USD oc create -f 99-simple-kmod.yaml Your nodes will start the [email protected] service and the kernel modules will be loaded. To confirm that the kernel modules are loaded, you can log in to a node (using oc debug node/<openshift-node> , then chroot /host ). To list the modules, use the lsmod command: USD lsmod | grep simple_ Example output simple_procfs_kmod 16384 0 simple_kmod 16384 0 1.4. Encrypting and mirroring disks during installation During an OpenShift Container Platform installation, you can enable boot disk encryption and mirroring on the cluster nodes. 1.4.1. About disk encryption You can enable encryption for the boot disks on the control plane and compute nodes at installation time. OpenShift Container Platform supports the Trusted Platform Module (TPM) v2 and Tang encryption modes. TPM v2 This is the preferred mode. 
TPM v2 stores passphrases in a secure cryptoprocessor on the server. You can use this mode to prevent decryption of the boot disk data on a cluster node if the disk is removed from the server. Tang Tang and Clevis are server and client components that enable network-bound disk encryption (NBDE). You can bind the boot disk data on your cluster nodes to one or more Tang servers. This prevents decryption of the data unless the nodes are on a secure network where the Tang servers are accessible. Clevis is an automated decryption framework used to implement decryption on the client side. Important The use of the Tang encryption mode to encrypt your disks is only supported for bare metal and vSphere installations on user-provisioned infrastructure. In earlier versions of Red Hat Enterprise Linux CoreOS (RHCOS), disk encryption was configured by specifying /etc/clevis.json in the Ignition config. That file is not supported in clusters created with OpenShift Container Platform 4.7 or later. Configure disk encryption by using the following procedure. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. This feature: Is available for installer-provisioned infrastructure, user-provisioned infrastructure, and Assisted Installer deployments For Assisted installer deployments: Each cluster can only have a single encryption method, Tang or TPM Encryption can be enabled on some or all nodes There is no Tang threshold; all servers must be valid and operational Encryption applies to the installation disks only, not to the workload disks Is supported on Red Hat Enterprise Linux CoreOS (RHCOS) systems only Sets up disk encryption during the manifest installation phase, encrypting all data written to disk, from first boot forward Requires no user intervention for providing passphrases Uses AES-256-XTS encryption, or AES-256-CBC if FIPS mode is enabled 1.4.1.1. Configuring an encryption threshold In OpenShift Container Platform, you can specify a requirement for more than one Tang server. You can also configure the TPM v2 and Tang encryption modes simultaneously. This enables boot disk data decryption only if the TPM secure cryptoprocessor is present and the Tang servers are accessible over a secure network. You can use the threshold attribute in your Butane configuration to define the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. The threshold is met when the stated value is reached through any combination of the declared conditions. In the case of offline provisioning, the offline server is accessed using an included advertisement, and only uses that supplied advertisement if the number of online servers do not meet the set threshold. 
For example, the threshold value of 2 in the following configuration can be reached by accessing two Tang servers, with the offline server available as a backup, or by accessing the TPM secure cryptoprocessor and one of the Tang servers: Example Butane configuration for disk encryption variant: openshift version: 4.17.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: "{\"payload\": \"...\", \"protected\": \"...\", \"signature\": \"...\"}" 4 threshold: 2 5 openshift: fips: true 1 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 2 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 3 Include this section if you want to use one or more Tang servers. 4 Optional: Include this field for offline provisioning. Ignition will provision the Tang server binding rather than fetching the advertisement from the server at runtime. This lets the server be unavailable at provisioning time. 5 Specify the minimum number of TPM v2 and Tang encryption conditions required for decryption to occur. Important The default threshold value is 1 . If you include multiple encryption conditions in your configuration but do not specify a threshold, decryption can occur if any of the conditions are met. Note If you require TPM v2 and Tang for decryption, the value of the threshold attribute must equal the total number of stated Tang servers plus one. If the threshold value is lower, it is possible to reach the threshold value by using a single encryption mode. For example, if you set tpm2 to true and specify two Tang servers, a threshold of 2 can be met by accessing the two Tang servers, even if the TPM secure cryptoprocessor is not available. 1.4.2. About disk mirroring During OpenShift Container Platform installation on control plane and worker nodes, you can enable mirroring of the boot and other disks to two or more redundant storage devices. A node continues to function after storage device failure provided one device remains available. Mirroring does not support replacement of a failed disk. Reprovision the node to restore the mirror to a pristine, non-degraded state. Note For user-provisioned infrastructure deployments, mirroring is available only on RHCOS systems. Support for mirroring is available on x86_64 nodes booted with BIOS or UEFI and on ppc64le nodes. 1.4.3. Configuring disk encryption and mirroring You can enable and configure encryption and mirroring during an OpenShift Container Platform installation. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to offer convenient, short-hand syntax for writing and validating machine configs. For more information, see "Creating machine configs with Butane". You have access to a Red Hat Enterprise Linux (RHEL) 8 machine that can be used to generate a thumbprint of the Tang exchange key. 
Procedure If you want to use TPM v2 to encrypt your cluster, check to see if TPM v2 encryption needs to be enabled in the host firmware for each node. This is required on most Dell systems. Check the manual for your specific system. If you want to use Tang to encrypt your cluster, follow these preparatory steps: Set up a Tang server or access an existing one. See Network-bound disk encryption for instructions. Install the clevis package on a RHEL 8 machine, if it is not already installed: USD sudo yum install clevis On the RHEL 8 machine, run the following command to generate a thumbprint of the exchange key. Replace http://tang1.example.com:7500 with the URL of your Tang server: USD clevis-encrypt-tang '{"url":"http://tang1.example.com:7500"}' < /dev/null > /dev/null 1 1 In this example, tangd.socket is listening on port 7500 on the Tang server. Note The clevis-encrypt-tang command generates a thumbprint of the exchange key. No data passes to the encryption command during this step; /dev/null exists here as an input instead of plain text. The encrypted output is also sent to /dev/null , because it is not required for this procedure. Example output The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1 1 The thumbprint of the exchange key. When the Do you wish to trust these keys? [ynYN] prompt displays, type Y . Optional: For offline Tang provisioning: Obtain the advertisement from the server using the curl command. Replace http://tang2.example.com:7500 with the URL of your Tang server: USD curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws Expected output {"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"} Provide the advertisement file to Clevis for encryption: USD clevis-encrypt-tang '{"url":"http://tang2.example.com:7500","adv":"adv.jws"}' < /dev/null > /dev/null If the nodes are configured with static IP addressing, run coreos-installer iso customize --dest-karg-append or use the coreos-installer --append-karg option when installing RHCOS nodes to set the IP address of the installed system. Append the ip= and other arguments needed for your network. Important Some methods for configuring static IPs do not affect the initramfs after the first boot and will not work with Tang encryption. These include the coreos-installer --copy-network option, the coreos-installer iso customize --network-keyfile option, and the coreos-installer pxe customize --network-keyfile option, as well as adding ip= arguments to the kernel command line of the live ISO or PXE image during installation. Incorrect static IP configuration causes the second boot of the node to fail. On your installation node, change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 Replace <installation_directory> with the path to the directory that you want to store the installation files in. Create a Butane config that configures disk encryption, mirroring, or both. For example, to configure storage for compute nodes, create a USDHOME/clusterconfig/worker-storage.bu file. 
Butane config example for a boot device variant: openshift version: 4.17.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: "{"payload": "eyJrZXlzIjogW3siYWxnIjogIkV", "protected": "eyJhbGciOiJFUzUxMiIsImN0eSI", "signature": "ADLgk7fZdE3Yt4FyYsm0pHiau7Q"}" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13 1 2 For control plane configurations, replace worker with master in both of these locations. 3 Set this field to the instruction set architecture of the cluster nodes. Some examples include, x86_64 , aarch64 , or ppc64le . 4 Include this section if you want to encrypt the root file system. For more details, see "About disk encryption". 5 Include this field if you want to use a Trusted Platform Module (TPM) to encrypt the root file system. 6 Include this section if you want to use one or more Tang servers. 7 Specify the URL of a Tang server. In this example, tangd.socket is listening on port 7500 on the Tang server. 8 Specify the exchange key thumbprint, which was generated in a preceding step. 9 Optional: Specify the advertisement for your offline Tang server in valid JSON format. 10 Specify the minimum number of TPM v2 and Tang encryption conditions that must be met for decryption to occur. The default value is 1 . For more information about this topic, see "Configuring an encryption threshold". 11 Include this section if you want to mirror the boot disk. For more details, see "About disk mirroring". 12 List all disk devices that should be included in the boot disk mirror, including the disk that RHCOS will be installed onto. 13 Include this directive to enable FIPS mode on your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . If you are configuring nodes to use both disk encryption and mirroring, both features must be configured in the same Butane configuration file. If you are configuring disk encryption on a node with FIPS mode enabled, you must include the fips directive in the same Butane configuration file, even if FIPS mode is also enabled in a separate manifest. Create a control plane or compute node manifest from the corresponding Butane configuration file and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml Repeat this step for each node type that requires disk encryption or mirroring. Save the Butane configuration file in case you need to update the manifests in the future. Continue with the remainder of the OpenShift Container Platform installation. Tip You can monitor the console log on the RHCOS nodes during installation for error messages relating to disk encryption or mirroring. Important If you configure additional data partitions, they will not be encrypted unless encryption is explicitly requested. 
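Repeating the manifest step for another node type follows the same pattern; for example, assuming a hypothetical USDHOME/clusterconfig/master-storage.bu in which the role label is set to master, the control plane manifest would be generated with: USD butane USDHOME/clusterconfig/master-storage.bu -o <installation_directory>/openshift/99-master-storage.yaml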
Verification After installing OpenShift Container Platform, you can verify if boot disk encryption or mirroring is enabled on the cluster nodes. From the installation host, access a cluster node by using a debug pod: Start a debug pod for the node, for example: USD oc debug node/compute-1 Set /host as the root directory within the debug shell. The debug pod mounts the root file system of the node in /host within the pod. By changing the root directory to /host , you can run binaries contained in the executable paths on the node: # chroot /host Note OpenShift Container Platform cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. If you configured boot disk encryption, verify if it is enabled: From the debug shell, review the status of the root mapping on the node: # cryptsetup status root Example output /dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write 1 The encryption format. When the TPM v2 or Tang encryption modes are enabled, the RHCOS boot disks are encrypted using the LUKS2 format. 2 The encryption algorithm used to encrypt the LUKS2 volume. The aes-cbc-essiv:sha256 cipher is used if FIPS mode is enabled. 3 The device that contains the encrypted LUKS2 volume. If mirroring is enabled, the value will represent a software mirror device, for example /dev/md126 . List the Clevis plugins that are bound to the encrypted device: # clevis luks list -d /dev/sda4 1 1 Specify the device that is listed in the device field in the output of the preceding step. Example output 1: sss '{"t":1,"pins":{"tang":[{"url":"http://tang.example.com:7500"}]}}' 1 1 In the example output, the Tang plugin is used by the Shamir's Secret Sharing (SSS) Clevis plugin for the /dev/sda4 device. If you configured mirroring, verify if it is enabled: From the debug shell, list the software RAID devices on the node: # cat /proc/mdstat Example output Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none> 1 The /dev/md126 software RAID mirror device uses the /dev/sda3 and /dev/sdb3 disk devices on the cluster node. 2 The /dev/md127 software RAID mirror device uses the /dev/sda4 and /dev/sdb4 disk devices on the cluster node. Review the details of each of the software RAID devices listed in the output of the preceding command. 
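One additional check that can complement the verification above (an illustrative command, not part of the original procedure) is to view the block device stack from the same debug shell: # lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT On a node with both features enabled, the output shows crypt and raid1 entries layered over the underlying partitions, which makes a missing mirror member or an unencrypted root device easy to spot at a glance.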
The following example lists the details of the /dev/md126 device: # mdadm --detail /dev/md126 Example output /dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8 1 Specifies the RAID level of the device. raid1 indicates RAID 1 disk mirroring. 2 Specifies the state of the RAID device. 3 4 States the number of underlying disk devices that are active and working. 5 States the number of underlying disk devices that are in a failed state. 6 The name of the software RAID device. 7 8 Provides information about the underlying disk devices used by the software RAID device. List the file systems mounted on the software RAID devices: # mount | grep /dev/md Example output /dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel) In the example output, the /boot file system is mounted on the /dev/md126 software RAID device and the root file system is mounted on /dev/md127 . Repeat the verification steps for each OpenShift Container Platform node type. Additional resources For more information about the TPM v2 and Tang encryption modes, see Configuring automated unlocking of encrypted volumes using policy-based decryption . 1.4.4. Configuring a RAID-enabled data volume You can enable software RAID partitioning to provide an external data volume. OpenShift Container Platform supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and RAID 10 for data protection and fault tolerance. See "About disk mirroring" for more details. 
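As a sketch of how one of the other supported levels would be declared in the Butane raid stanza used in the next procedure (the partition labels are hypothetical and only the level differs from the RAID 1 examples that follow): raid: - name: md-data level: raid0 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 Keep in mind that RAID 0 only stripes for capacity and throughput and provides no fault tolerance.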
Note OpenShift Container Platform 4.17 does not support software RAIDs on the installation drive. Prerequisites You have downloaded the OpenShift Container Platform installation program on your installation node. You have installed Butane on your installation node. Note Butane is a command-line utility that OpenShift Container Platform uses to provide convenient, short-hand syntax for writing machine configs, as well as for performing additional validation of machine configs. For more information, see the Creating machine configs with Butane section. Procedure Create a Butane config that configures a data volume by using software RAID. To configure a data volume with RAID 1 on the same disks that are used for a mirrored boot disk, create a USDHOME/clusterconfig/raid1-storage.bu file, for example: RAID 1 on mirrored boot disk variant: openshift version: 4.17.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true 1 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. To configure a data volume with RAID 1 on secondary disks, create a USDHOME/clusterconfig/raid1-alt-storage.bu file, for example: RAID 1 on secondary disks variant: openshift version: 4.17.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true Create a RAID manifest from the Butane config you created in the step and save it to the <installation_directory>/openshift directory. For example, to create a manifest for the compute nodes, run the following command: USD butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1 1 Replace <butane_config> and <manifest_name> with the file names from the step. For example, raid1-alt-storage.bu and raid1-alt-storage.yaml for secondary disks. Save the Butane config in case you need to update the manifest in the future. Continue with the remainder of the OpenShift Container Platform installation. 1.4.5. Configuring an Intel(R) Virtual RAID on CPU (VROC) data volume Intel(R) VROC is a type of hybrid RAID, where some of the maintenance is offloaded to the hardware, but appears as software RAID to the operating system. The following procedure configures an Intel(R) VROC-enabled RAID1. 
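Before starting the procedure below, one way to confirm that the platform exposes the IMSM capability that VROC relies on (an illustrative check, not an official step) is: USD mdadm --detail-platform which reports the Intel(R) Matrix Storage Manager platform details, including the supported RAID levels and the ports it manages, when VMD is enabled in the firmware.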
Prerequisites You have a system with Intel(R) Volume Management Device (VMD) enabled. Procedure Create the Intel(R) Matrix Storage Manager (IMSM) RAID container by running the following command: USD mdadm -CR /dev/md/imsm0 -e \ imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1 1 The RAID device names. In this example, there are two devices listed. If you provide more than two device names, you must adjust the -n flag. For example, listing three devices would use the flag -n3 . Create the RAID1 storage inside the container: Create a dummy RAID0 volume in front of the real RAID1 volume by running the following command: USD mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean Create the real RAID1 array by running the following command: USD mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0 Stop both RAID0 and RAID1 member arrays and delete the dummy RAID0 array with the following commands: USD mdadm -S /dev/md/dummy \ mdadm -S /dev/md/coreos \ mdadm --kill-subarray=0 /dev/md/imsm0 Restart the RAID1 arrays by running the following command: USD mdadm -A /dev/md/coreos /dev/md/imsm0 Install RHCOS on the RAID1 device: Get the UUID of the IMSM container by running the following command: USD mdadm --detail --export /dev/md/imsm0 Install RHCOS and include the rd.md.uuid kernel argument by running the following command: USD coreos-installer install /dev/md/coreos \ --append-karg rd.md.uuid=<md_UUID> 1 ... 1 The UUID of the IMSM container. Include any additional coreos-installer arguments you need to install RHCOS. 1.5. Configuring chrony time service You can set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . Alternately, you can specify any of the following NTP servers: 1.rhel.pool.ntp.org , 2.rhel.pool.ntp.org , or 3.rhel.pool.ntp.org . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. 
If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 1.6. Additional resources For information on Butane, see Creating machine configs with Butane . For information on FIPS support, see Support for FIPS cryptography . | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"β [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.17.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.17.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF - url: http://tang3.example.com:7500 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 advertisement: \"{\\\"payload\\\": \\\"...\\\", \\\"protected\\\": \\\"...\\\", \\\"signature\\\": \\\"...\\\"}\" 4 threshold: 2 5 openshift: fips: true",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang1.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"curl -f http://tang2.example.com:7500/adv > adv.jws && cat adv.jws",
"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}",
"clevis-encrypt-tang '{\"url\":\"http://tang2.example.com:7500\",\"adv\":\"adv.jws\"}' < /dev/null > /dev/null",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.17.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang1.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF advertisement: \"{\"payload\": \"eyJrZXlzIjogW3siYWxnIjogIkV\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI\", \"signature\": \"ADLgk7fZdE3Yt4FyYsm0pHiau7Q\"}\" 9 threshold: 1 10 mirror: 11 devices: 12 - /dev/sda - /dev/sdb openshift: fips: true 13",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.17.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/disk/by-id/scsi-3600508b400105e210000900000490000 - /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 storage: disks: - device: /dev/disk/by-id/scsi-3600508b400105e210000900000490000 partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/disk/by-id/scsi-SSEAGATE_ST373453LW_3HW1RHM6 partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.17.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"mdadm -CR /dev/md/imsm0 -e imsm -n2 /dev/nvme0n1 /dev/nvme1n1 1",
"mdadm -CR /dev/md/dummy -l0 -n2 /dev/md/imsm0 -z10M --assume-clean",
"mdadm -CR /dev/md/coreos -l1 -n2 /dev/md/imsm0",
"mdadm -S /dev/md/dummy mdadm -S /dev/md/coreos mdadm --kill-subarray=0 /dev/md/imsm0",
"mdadm -A /dev/md/coreos /dev/md/imsm0",
"mdadm --detail --export /dev/md/imsm0",
"coreos-installer install /dev/md/coreos --append-karg rd.md.uuid=<md_UUID> 1",
"variant: openshift version: 4.17.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_configuration/installing-customizing |
Chapter 4. Advisories related to this release | Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2023:1875 RHSA-2023:1877 RHSA-2023:1878 RHSA-2023:1880 RHSA-2023:1882 RHSA-2023:1883 RHSA-2023:1889 RHSA-2023:1892 RHSA-2023:1895 RHSA-2023:1899 Revised on 2024-05-09 16:47:23 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.19/rn-openjdk11019-advisory_openjdk |
Embedding in a RHEL for Edge image | Embedding in a RHEL for Edge image Red Hat build of MicroShift 4.18 Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/embedding_in_a_rhel_for_edge_image/index |
2.2. Setting a Password for the Piranha Configuration Tool | 2.2. Setting a Password for the Piranha Configuration Tool Before using the Piranha Configuration Tool for the first time on the primary LVS router, you must restrict access to it by creating a password. To do this, log in as root and issue the following command: /usr/sbin/piranha-passwd After entering this command, create the administrative password when prompted. Warning For a password to be more secure, it should not contain proper nouns, commonly used acronyms, or words in a dictionary from any language. Do not leave the password unencrypted anywhere on the system. If the password is changed during an active Piranha Configuration Tool session, the administrator is prompted to provide the new password. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-piranha-password-vsa |
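One way to come up with a password that satisfies the warning above before running /usr/sbin/piranha-passwd (an illustration only; any random-string generator works): openssl rand -base64 12 This prints a 16-character random string that will not appear in any dictionary; supply it at the prompt and keep it in a password manager rather than in a plain file on the system.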
Chapter 13. Scanning pods for vulnerabilities | Chapter 13. Scanning pods for vulnerabilities Using the Red Hat Quay Container Security Operator, you can access vulnerability scan results from the OpenShift Container Platform web console for container images used in active pods on the cluster. The Red Hat Quay Container Security Operator: Watches containers associated with pods on all or specified namespaces Queries the container registry where the containers came from for vulnerability information, provided an image's registry is running image scanning (such as Quay.io or a Red Hat Quay registry with Clair scanning) Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API Using the instructions here, the Red Hat Quay Container Security Operator is installed in the openshift-operators namespace, so it is available to all namespaces on your OpenShift cluster. 13.1. Running the Red Hat Quay Container Security Operator You can start the Red Hat Quay Container Security Operator from the OpenShift Container Platform web console by selecting and installing that Operator from the Operator Hub, as described here. Prerequisites Have administrator privileges to the OpenShift Container Platform cluster Have containers that come from a Red Hat Quay or Quay.io registry running on your cluster Procedure Navigate to Operators OperatorHub and select Security . Select the Container Security Operator, then select Install to go to the Create Operator Subscription page. Check the settings. All namespaces and automatic approval strategy are selected, by default. Select Install . The Container Security Operator appears after a few moments on the Installed Operators screen. Optionally, you can add custom certificates to the Red Hat Quay Container Security Operator. In this example, create a certificate named quay.crt in the current directory. Then run the following command to add the cert to the Red Hat Quay Container Security Operator: USD oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators If you added a custom certificate, restart the Operator pod for the new certs to take effect. Open the OpenShift Dashboard ( Home Overview ). A link to Quay Image Security appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a Quay Image Security breakdown , as shown in the following figure: You can do one of two things at this point to follow up on any detected vulnerabilities: Select the link to the vulnerability. You are taken to the container registry that the container came from, where you can see information about the vulnerability. The following figure shows an example of detected vulnerabilities from a Quay.io registry: Select the namespaces link to go to the ImageManifestVuln screen, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in the quay-enterprise namespace: At this point, you know what images are vulnerable, what you need to do to fix those vulnerabilities, and every namespace that the image was run in. So you can: Alert anyone running the image that they need to correct the vulnerability Stop the images from running by deleting the deployment or other object that started the pod that the image is in Note that if you do delete the pod, it may take several minutes for the vulnerability to reset on the dashboard. 13.2. 
Querying image vulnerabilities from the CLI Using the oc command, you can display information about vulnerabilities detected by the Red Hat Quay Container Security Operator. Prerequisites Be running the Red Hat Quay Container Security Operator on your OpenShift Container Platform instance Procedure To query for detected container image vulnerabilities, type: USD oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s To display details for a particular vulnerability, provide the vulnerability name and its namespace to the oc describe command. This example shows an active container whose image includes an RPM package with a vulnerability: USD oc describe vuln --namespace mynamespace sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries... | [
"oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators",
"oc get vuln --all-namespaces",
"NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s",
"oc describe vuln --namespace mynamespace sha256.ac50e3752",
"Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/security_and_compliance/pod-vulnerability-scan |
Chapter 3. Installing a cluster quickly on Alibaba Cloud | Chapter 3. Installing a cluster quickly on Alibaba Cloud In OpenShift Container Platform version 4.13, you can install a cluster on Alibaba Cloud that uses the default configuration options. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have created the required Alibaba Cloud resources . If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
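A quick way to confirm at any point that the agent is holding the identity you intend to use (an illustrative check, not a step in this procedure): USD ssh-add -l This lists the fingerprints of the loaded keys; the key whose public half you pass to the installation program should appear here before you run ./openshift-install gather or SSH to a node as the core user.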
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.5. 
Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.6. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 3.7. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. 
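Because the installation program consumes install-config.yaml, the backup mentioned above can be as simple as copying the file outside the installation directory before generating manifests (the destination path here is arbitrary): USD cp <installation_directory>/install-config.yaml ~/install-config-alibaba-backup.yaml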
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set the USDRELEASE_IMAGE variable by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --cloud=alibabacloud \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1 1 credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. Note This command can take a few moments to run. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.13 on Alibaba Cloud 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4 1 The Machine API Operator CR is required. 2 The Image Registry Operator CR is required. 3 The Ingress Operator CR is required. 4 The Storage Operator CR is an optional component and might be disabled in your cluster. Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ --region=<alibaba_region> \ --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \ --output-dir=<path_to_ccoctl_output_dir> where: <name> is the name used to tag any cloud resources that are created for tracking. <alibaba_region> is the Alibaba Cloud region in which cloud resources will be created. <path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the files for the component CredentialsRequest objects. <path_to_ccoctl_output_dir> is the directory where the generated component credentials secrets will be placed. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
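As a concrete sketch of the create-ram-users invocation described above (every value is hypothetical; substitute your own tag name, region, and directories): USD ccoctl alibabacloud create-ram-users --name demo-cluster --region=cn-hangzhou --credentials-requests-dir=./credrequests --output-dir=./ccoctl-output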
Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output: openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 3.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 3.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service 3.13. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=alibabacloud --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cluster-image-registry-operator_01-registry-credentials-request-alibaba.yaml 2 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 3 0000_50_cluster-storage-operator_03_credentials_request_alibaba.yaml 4",
"ccoctl alibabacloud create-ram-users --name <name> --region=<alibaba_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --output-dir=<path_to_ccoctl_output_dir>",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_alibaba/installing-alibaba-default |
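The certificate recovery note above assumes familiarity with approving certificate signing requests. As a minimal sketch, assuming the oc client from this chapter is installed and the kubeconfig is exported, pending node-bootstrapper CSRs can be listed and approved as follows; the CSR name shown is illustrative only:

# List all CSRs and look for any in Pending state
oc get csr

# Approve one pending CSR by name (the name here is a placeholder)
oc adm certificate approve csr-8b2mt

# Or approve every CSR that has no status yet in a single pass
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve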
3.4. Exclusive Activation of a Volume Group in a Cluster | 3.4. Exclusive Activation of a Volume Group in a Cluster The following procedure configures the LVM volume group in a way that will ensure that only the cluster is capable of activating the volume group, and that the volume group will not be activated outside of the cluster on startup. If the volume group is activated by a system outside of the cluster, there is a risk of corrupting the volume group's metadata. This procedure modifies the volume_list entry in the /etc/lvm/lvm.conf configuration file. Volume groups listed in the volume_list entry are allowed to automatically activate on the local node outside of the cluster manager's control. Volume groups related to the node's local root and home directories should be included in this list. All volume groups managed by the cluster manager must be excluded from the volume_list entry. Note that this procedure does not require the use of clvmd . Perform the following procedure on each node in the cluster. Execute the following command to ensure that locking_type is set to 1 and that use_lvmetad is set to 0 in the /etc/lvm/lvm.conf file. This command also disables and stops any lvmetad processes immediately. Determine which volume groups are currently configured on your local storage with the following command. This will output a list of the currently-configured volume groups. If you have space allocated in separate volume groups for root and for your home directory on this node, you will see those volumes in the output, as in this example. Add the volume groups other than my_vg (the volume group you have just defined for the cluster) as entries to volume_list in the /etc/lvm/lvm.conf configuration file. For example, if you have space allocated in separate volume groups for root and for your home directory, you would uncomment the volume_list line of the lvm.conf file and add these volume groups as entries to volume_list as follows: Note If no local volume groups are present on a node to be activated outside of the cluster manager, you must still initialize the volume_list entry as volume_list = [] . Rebuild the initramfs boot image to guarantee that the boot image will not try to activate a volume group controlled by the cluster. Update the initramfs device with the following command. This command may take up to a minute to complete. Reboot the node. Note If you have installed a new Linux kernel since booting the node on which you created the boot image, the new initrd image will be for the kernel that was running when you created it and not for the new kernel that is running when you reboot the node. You can ensure that the correct initrd device is in use by running the uname -r command before and after the reboot to determine the kernel release that is running. If the releases are not the same, update the initrd file after rebooting with the new kernel and then reboot the node. When the node has rebooted, check whether the cluster services have started up again on that node by executing the pcs cluster status command on that node. If this yields the message Error: cluster is not currently running on this node then enter the following command. Alternately, you can wait until you have rebooted each node in the cluster and start cluster services on all of the nodes in the cluster with the following command. | [
"lvmconf --enable-halvm --services --startstopservices",
"vgs --noheadings -o vg_name my_vg rhel_home rhel_root",
"volume_list = [ \"rhel_root\", \"rhel_home\" ]",
"dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)",
"pcs cluster start",
"pcs cluster start --all"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-exclusiveactivenfs-HAAA |
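As an optional sanity check after the procedure above, the following sketch confirms that the volume_list entry is in place and that the rebuilt initramfs image matches the running kernel. It assumes the lsinitrd tool from the dracut package is available and is not part of the documented procedure:

# Confirm the volume_list entry in the active LVM configuration
grep '^ *volume_list' /etc/lvm/lvm.conf

# Confirm the boot image for the running kernel exists and is newer than lvm.conf
uname -r
ls -l /boot/initramfs-$(uname -r).img /etc/lvm/lvm.conf

# Inspect the copy of lvm.conf embedded in the boot image (assumes dracut/lsinitrd)
lsinitrd -f /etc/lvm/lvm.conf /boot/initramfs-$(uname -r).img | grep volume_list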
3.2. Additional Information | 3.2. Additional Information For more information on eCryptfs and its mount options, refer to man ecryptfs (provided by the ecryptfs-utils package). The following Kernel document (provided by the kernel-doc package) also provides additional information on eCryptfs: /usr/share/doc/kernel-doc- version /Documentation/filesystems/ecryptfs.txt | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/efsaddinfo |
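As a brief illustration of the mount options that man ecryptfs documents, the following hedged sketch mounts a directory as an eCryptfs layer over itself; the paths are placeholders and the option names should be verified against the man page on your system:

# Interactive mount; eCryptfs prompts for a passphrase and the remaining options
mount -t ecryptfs /home/user/Private /home/user/Private

# The same mount with options passed explicitly (option names per man ecryptfs)
mount -t ecryptfs /home/user/Private /home/user/Private -o ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_enable_filename_crypto=n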
Chapter 3. CloudCredential [operator.openshift.io/v1] | Chapter 3. CloudCredential [operator.openshift.io/v1] Description CloudCredential provides a means to configure an operator to manage CredentialsRequests. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CloudCredentialSpec is the specification of the desired behavior of the cloud-credential-operator. status object CloudCredentialStatus defines the observed status of the cloud-credential-operator. 3.1.1. .spec Description CloudCredentialSpec is the specification of the desired behavior of the cloud-credential-operator. Type object Property Type Description credentialsMode string CredentialsMode allows informing CCO that it should not attempt to dynamically determine the root cloud credentials capabilities, and it should just run in the specified mode. It also allows putting the operator into "manual" mode if desired. Leaving the field in default mode runs CCO so that the cluster's cloud credentials will be dynamically probed for capabilities (on supported clouds/platforms). Supported modes: AWS/Azure/GCP: "" (Default), "Mint", "Passthrough", "Manual" Others: Do not set value as other platforms only support running in "Passthrough" logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 3.1.2. 
.status Description CloudCredentialStatus defines the observed status of the cloud-credential-operator. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 3.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 3.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 3.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 3.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 3.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/cloudcredentials DELETE : delete collection of CloudCredential GET : list objects of kind CloudCredential POST : create a CloudCredential /apis/operator.openshift.io/v1/cloudcredentials/{name} DELETE : delete a CloudCredential GET : read the specified CloudCredential PATCH : partially update the specified CloudCredential PUT : replace the specified CloudCredential /apis/operator.openshift.io/v1/cloudcredentials/{name}/status GET : read status of the specified CloudCredential PATCH : partially update status of the specified CloudCredential PUT : replace status of the specified CloudCredential 3.2.1. /apis/operator.openshift.io/v1/cloudcredentials HTTP method DELETE Description delete collection of CloudCredential Table 3.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CloudCredential Table 3.2. HTTP responses HTTP code Reponse body 200 - OK CloudCredentialList schema 401 - Unauthorized Empty HTTP method POST Description create a CloudCredential Table 3.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.4. Body parameters Parameter Type Description body CloudCredential schema Table 3.5. HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 201 - Created CloudCredential schema 202 - Accepted CloudCredential schema 401 - Unauthorized Empty 3.2.2. /apis/operator.openshift.io/v1/cloudcredentials/{name} Table 3.6. Global path parameters Parameter Type Description name string name of the CloudCredential HTTP method DELETE Description delete a CloudCredential Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CloudCredential Table 3.9. HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CloudCredential Table 3.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.11. 
HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CloudCredential Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. Body parameters Parameter Type Description body CloudCredential schema Table 3.14. HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 201 - Created CloudCredential schema 401 - Unauthorized Empty 3.2.3. /apis/operator.openshift.io/v1/cloudcredentials/{name}/status Table 3.15. Global path parameters Parameter Type Description name string name of the CloudCredential HTTP method GET Description read status of the specified CloudCredential Table 3.16. HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CloudCredential Table 3.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.18. 
HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CloudCredential Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body CloudCredential schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK CloudCredential schema 201 - Created CloudCredential schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/cloudcredential-operator-openshift-io-v1 |
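The schema and endpoints above describe a single cluster-scoped resource named cluster. As an illustrative sketch that is not part of the API reference itself, the resource can be inspected and its credentialsMode field set with the oc client:

# Read the current cloud-credential-operator configuration
oc get cloudcredential cluster -o yaml

# Switch the operator to manual mode (spec.credentialsMode is described above)
oc patch cloudcredential cluster --type=merge -p '{"spec":{"credentialsMode":"Manual"}}'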
5.192. mingw32-matahari | 5.192. mingw32-matahari 5.192.1. RHBA-2012:0984 - mingw32-matahari bug fix update An updated mingw32-matahari package that fixes one bug is now available for Red Hat Enterprise Linux 6. This package includes Matahari Qpid Management Framework (QMF) Agents for Windows guests. QMF Agent can be used to control and manage various pieces of functionality for an ovirt node, using the Advanced Message Queuing Protocol (AMQP) protocol. Bug Fix BZ# 806948 Previously, Matahari depended on libqpidclient and libqpidcommon. As a result, Qpid's APIs using libqpidclient and libqpidcommon did not have stable ABI and rebuilding Qpid negatively affected mingw32-matahari. With this update, the dependencies have been removed, thus fixing this bug. All mingw32-matahari users are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mingw32-matahari |
Chapter 1. Initial Troubleshooting | Chapter 1. Initial Troubleshooting As a storage administrator, you can do the initial troubleshooting of a Red Hat Ceph Storage cluster before contacting Red Hat support. This chapter includes the following information: Identifying problems . Diagnosing the health of a storage cluster . Understanding Ceph Health . Muting health alerts of a Ceph cluster . Understanding Ceph logs . Generating an `sos report` . Prerequisites A running Red Hat Ceph Storage cluster. 1.1. Identifying problems To determine possible causes of the error with the Red Hat Ceph Storage cluster, answer the questions in the Procedure section. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Certain problems can arise when using unsupported configurations. Ensure that your configuration is supported. Do you know what Ceph component causes the problem? No. Follow Diagnosing the health of a Ceph storage cluster procedure in the Red Hat Ceph Storage Troubleshooting Guide . Ceph Monitors. See Troubleshooting Ceph Monitors section in the Red Hat Ceph Storage Troubleshooting Guide . Ceph OSDs. See Troubleshooting Ceph OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . Ceph placement groups. See Troubleshooting Ceph placement groups section in the Red Hat Ceph Storage Troubleshooting Guide . Multi-site Ceph Object Gateway. See Troubleshooting a multi-site Ceph Object Gateway section in the Red Hat Ceph Storage Troubleshooting Guide . Additional Resources See the Red Hat Ceph Storage: Supported configurations article for details. 1.2. Diagnosing the health of a storage cluster This procedure lists basic steps to diagnose the health of a Red Hat Ceph Storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Log into the Cephadm shell: Example Check the overall status of the storage cluster: Example If the command returns HEALTH_WARN or HEALTH_ERR see Understanding Ceph health for details. Monitor the logs of the storage cluster: Example To capture the logs of the cluster to a file, run the following commands: Example The logs are located by default in the /var/log/ceph/ CLUSTER_FSID / directory. Check the Ceph logs for any error messages listed in Understanding Ceph logs . If the logs do not include a sufficient amount of information, increase the debugging level and try to reproduce the action that failed. See Configuring logging for details. 1.3. Understanding Ceph health The ceph health command returns information about the status of the Red Hat Ceph Storage cluster: HEALTH_OK indicates that the cluster is healthy. HEALTH_WARN indicates a warning. In some cases, the Ceph status returns to HEALTH_OK automatically. For example when Red Hat Ceph Storage cluster finishes the rebalancing process. However, consider further troubleshooting if a cluster is in the HEALTH_WARN state for longer time. HEALTH_ERR indicates a more serious problem that requires your immediate attention. Use the ceph health detail and ceph -s commands to get a more detailed output. Note A health warning is displayed if there is no mgr daemon running. In case the last mgr daemon of a Red Hat Ceph Storage cluster was removed, you can manually deploy a mgr daemon, on a random host of the Red Hat Storage cluster. See the Manually deploying a mgr daemon in the Red Hat Ceph Storage 8 Administration Guide . Additional Resources See the Ceph Monitor error messages table in the Red Hat Ceph Storage Troubleshooting Guide . 
See the Ceph OSD error messages table in the Red Hat Ceph Storage Troubleshooting Guide . See the Placement group error messages table in the Red Hat Ceph Storage Troubleshooting Guide . 1.4. Muting health alerts of a Ceph cluster In certain scenarios, users might want to temporarily mute some warnings, because they are already aware of the warning and cannot act on it right away. You can mute health checks so that they do not affect the overall reported status of the Ceph cluster. Alerts are specified using the health check codes. One example is, when an OSD is brought down for maintenance, OSD_DOWN warnings are expected. You can choose to mute the warning until the maintenance is over because those warnings put the cluster in HEALTH_WARN instead of HEALTH_OK for the entire duration of maintenance. Most health mutes also disappear if the extent of an alert gets worse. For example, if there is one OSD down, and the alert is muted, the mute disappears if one or more additional OSDs go down. This is true for any health alert that involves a count indicating how much or how many of something is triggering the warning or error. Prerequisites A running Red Hat Ceph Storage cluster. Root-level of access to the nodes. A health warning message. Procedure Log into the Cephadm shell: Example Check the health of the Red Hat Ceph Storage cluster by running the ceph health detail command: Example You can see that the storage cluster is in HEALTH_WARN status as one of the OSDs is down. Mute the alert: Syntax Example Optional: A health check mute can have a time to live (TTL) associated with it, such that the mute automatically expires after the specified period of time has elapsed. Specify the TTL as an optional duration argument in the command: Syntax DURATION can be specified in s , sec , m , min , h , or hour . Example In this example, the alert OSD_DOWN is muted for 10 minutes. Verify if the Red Hat Ceph Storage cluster status has changed to HEALTH_OK : Example In this example, you can see that the alert OSD_DOWN and OSD_FLAG is muted and the mute is active for nine minutes. Optional: You can retain the mute even after the alert is cleared by making it sticky . Syntax Example You can remove the mute by running the following command: Syntax Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for details. 1.5. Understanding Ceph logs Ceph stores its logs in the /var/log/ceph/ CLUSTER_FSID / directory after the logging to files is enabled. The CLUSTER_NAME .log is the main storage cluster log file that includes global events. By default, the log file name is ceph.log . Only the Ceph Monitor nodes include the main storage cluster log. Each Ceph OSD and Monitor has its own log file, named CLUSTER_NAME -osd. NUMBER .log and CLUSTER_NAME -mon. HOSTNAME .log . When you increase debugging level for Ceph subsystems, Ceph generates new log files for those subsystems as well. Additional Resources For details about logging, see Configuring logging in the Red Hat Ceph Storage Troubleshooting Guide . See the Common Ceph Monitor error messages in the Ceph logs table in the Red Hat Ceph Storage Troubleshooting Guide . See the Common Ceph OSD error messages in the Ceph logs table in the Red Hat Ceph Storage Troubleshooting Guide . See the Ceph daemon logs to enable logging to files. 1.6. 
Generating an sos report You can run the sos report command to collect the configuration details, system information, and diagnostic information of a Red Hat Ceph Storage cluster on a Red Hat Enterprise Linux node. The Red Hat Support team uses this information for further troubleshooting of the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. Procedure Install the sos package: Example Run the sos report command to get the system information of the storage cluster: Example The report is saved in the /var/tmp directory. Run the following command for specific Ceph daemon information: Example Additional Resources See the What is an sosreport and how to create one in Red Hat Enterprise Linux? KnowledgeBase article for more information. | [
"cephadm shell",
"ceph health detail",
"ceph -W cephadm",
"ceph config set global log_to_file true ceph config set global mon_cluster_log_to_file true",
"cephadm shell",
"ceph health detail HEALTH_WARN 1 osds down; 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set [WRN] OSD_DOWN: 1 osds down osd.1 (root=default,host=host01) is down [WRN] OSD_FLAGS: 1 OSDs or CRUSH {nodes, device-classes} have {NOUP,NODOWN,NOIN,NOOUT} flags set osd.1 has flags noup",
"ceph health mute HEALTH_MESSAGE",
"ceph health mute OSD_DOWN",
"ceph health mute HEALTH_MESSAGE DURATION",
"ceph health mute OSD_DOWN 10m",
"ceph -s cluster: id: 81a4597a-b711-11eb-8cb8-001a4a000740 health: HEALTH_OK (muted: OSD_DOWN(9m) OSD_FLAGS(9m)) services: mon: 3 daemons, quorum host01,host02,host03 (age 33h) mgr: host01.pzhfuh(active, since 33h), standbys: host02.wsnngf, host03.xwzphg osd: 11 osds: 10 up (since 4m), 11 in (since 5d) data: pools: 1 pools, 1 pgs objects: 13 objects, 0 B usage: 85 MiB used, 165 GiB / 165 GiB avail pgs: 1 active+clean",
"ceph health mute HEALTH_MESSAGE DURATION --sticky",
"ceph health mute OSD_DOWN 1h --sticky",
"ceph health unmute HEALTH_MESSAGE",
"ceph health unmute OSD_DOWN",
"dnf install sos",
"sos report -a --all-logs",
"sos report --all-logs -e ceph_mgr,ceph_common,ceph_mon,ceph_osd,ceph_ansible,ceph_mds,ceph_rgw"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/initial-troubleshooting |
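The mute and unmute commands in this chapter lend themselves to a small maintenance wrapper. The following is a minimal sketch that uses only the commands documented above; the alert code and time to live are examples, and the cephadm shell -- form of invocation is assumed to be available:

#!/bin/bash
# Mute an expected alert for the duration of planned maintenance, then restore it.
ALERT="OSD_DOWN"   # health check code expected while an OSD host is serviced
TTL="30m"          # how long the mute should live if the script is interrupted

cephadm shell -- ceph health detail
cephadm shell -- ceph health mute "$ALERT" "$TTL"

# ... perform the planned maintenance here ...

# Remove the mute early if maintenance finishes ahead of the TTL
cephadm shell -- ceph health unmute "$ALERT"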
10.2. About PAM Configuration Files | 10.2. About PAM Configuration Files Each PAM-aware application or service has a file in the /etc/pam.d/ directory. Each file in this directory has the same name as the service to which it controls access. For example, the login program defines its service name as login and installs the /etc/pam.d/login PAM configuration file. Warning It is highly recommended to configure PAMs using the authconfig tool instead of manually editing the PAM configuration files. 10.2.1. PAM Configuration File Format Each PAM configuration file contains a group of directives that define the module (the authentication configuration area) and any controls or arguments with it. The directives all have a simple syntax that identifies the module purpose (interface) and the configuration settings for the module. In a PAM configuration file, the module interface is the first field defined. For example: A PAM interface is essentially the type of authentication action which that specific module can perform. Four types of PAM module interface are available, each corresponding to a different aspect of the authentication and authorization process: auth - This module interface authenticates users. For example, it requests and verifies the validity of a password. Modules with this interface can also set credentials, such as group memberships. account - This module interface verifies that access is allowed. For example, it checks if a user account has expired or if a user is allowed to log in at a particular time of day. password - This module interface is used for changing user passwords. session - This module interface configures and manages user sessions. Modules with this interface can also perform additional tasks that are needed to allow access, like mounting a user's home directory and making the user's mailbox available. An individual module can provide any or all module interfaces. For instance, pam_unix.so provides all four module interfaces. The module name, such as pam_unix.so , provides PAM with the name of the library containing the specified module interface. The directory name is omitted because the application is linked to the appropriate version of libpam , which can locate the correct version of the module. All PAM modules generate a success or failure result when called. Control flags tell PAM what to do with the result. Modules can be listed ( stacked ) in a particular order, and the control flags determine how important the success or failure of a particular module is to the overall goal of authenticating the user to the service. There are several simple flags [2] , which use only a keyword to set the configuration: required - The module result must be successful for authentication to continue. If the test fails at this point, the user is not notified until the results of all module tests that reference that interface are complete. requisite - The module result must be successful for authentication to continue. However, if a test fails at this point, the user is notified immediately with a message reflecting the first failed required or requisite module test. sufficient - The module result is ignored if it fails. However, if the result of a module flagged sufficient is successful and no modules flagged required have failed, then no other results are required and the user is authenticated to the service. optional - The module result is ignored. 
A module flagged as optional only becomes necessary for successful authentication when no other modules reference the interface. include - Unlike the other controls, this does not relate to how the module result is handled. This flag pulls in all lines in the configuration file which match the given parameter and appends them as an argument to the module. Module interface directives can be stacked , or placed upon one another, so that multiple modules are used together for one purpose. Note If a module's control flag uses the sufficient or requisite value, then the order in which the modules are listed is important to the authentication process. Using stacking, the administrator can require specific conditions to exist before the user is allowed to authenticate. For example, the setup utility normally uses several stacked modules, as seen in its PAM configuration file: auth sufficient pam_rootok.so - This line uses the pam_rootok.so module to check whether the current user is root, by verifying that their UID is 0. If this test succeeds, no other modules are consulted and the command is executed. If this test fails, the next module is consulted. auth include system-auth - This line includes the content of the /etc/pam.d/system-auth file and processes this content for authentication. account required pam_permit.so - This line uses the pam_permit.so module to allow the root user or anyone logged in at the console to reboot the system. session required pam_permit.so - This line is related to the session setup. Using pam_permit.so , it ensures that the setup utility does not fail. PAM uses arguments to pass information to a pluggable module during authentication for some modules. For example, the pam_pwquality.so module checks how strong a password is and can take several arguments. In the following example, enforce_for_root specifies that even the password of the root user must successfully pass the strength check, and retry defines that a user will receive three opportunities to enter a strong password. Invalid arguments are generally ignored and do not otherwise affect the success or failure of the PAM module. Some modules, however, may fail on invalid arguments. Most modules report errors to the journald service. For information on how to use journald and the related journalctl tool, see the System Administrator's Guide . Note The journald service was introduced in Red Hat Enterprise Linux 7.1. In previous versions of Red Hat Enterprise Linux, most modules reported errors to the /var/log/secure file. 10.2.2. Annotated PAM Configuration Example Example 10.1, "Simple PAM Configuration" is a sample PAM application configuration file: Example 10.1. Simple PAM Configuration The first line is a comment, indicated by the hash mark ( # ) at the beginning of the line. Lines two through four stack three modules for login authentication. auth required pam_securetty.so - This module ensures that if the user is trying to log in as root, the TTY on which the user is logging in is listed in the /etc/securetty file, if that file exists. If the TTY is not listed in the file, any attempt to log in as root fails with a Login incorrect message. auth required pam_unix.so nullok - This module prompts the user for a password and then checks the password using the information stored in /etc/passwd and, if it exists, /etc/shadow . The argument nullok instructs the pam_unix.so module to allow a blank password. auth required pam_nologin.so - This is the final authentication step. It checks whether the /etc/nologin file exists.
If it exists and the user is not root, authentication fails. Note In this example, all three auth modules are checked, even if the first auth module fails. This prevents the user from knowing at what stage their authentication failed. Such knowledge in the hands of an attacker could allow them to more easily deduce how to crack the system. account required pam_unix.so - This module performs any necessary account verification. For example, if shadow passwords have been enabled, the account interface of the pam_unix.so module checks to see if the account has expired or if the user has not changed the password within the allowed grace period. password required pam_pwquality.so retry=3 - If a password has expired, the password component of the pam_pwquality.so module prompts for a new password. It then tests the newly created password to see whether it can easily be determined by a dictionary-based password cracking program. The argument retry=3 specifies that if the test fails the first time, the user has two more chances to create a strong password. password required pam_unix.so shadow nullok use_authtok - This line specifies that if the program changes the user's password, it does so by using the password interface of the pam_unix.so module. The argument shadow instructs the module to create shadow passwords when updating a user's password. The argument nullok instructs the module to allow the user to change their password from a blank password, otherwise a null password is treated as an account lock. The final argument on this line, use_authtok , provides a good example of the importance of order when stacking PAM modules. This argument instructs the module not to prompt the user for a new password. Instead, it accepts any password that was recorded by a previous password module. In this way, all new passwords must pass the pam_pwquality.so test for secure passwords before being accepted. session required pam_unix.so - The final line instructs the session interface of the pam_unix.so module to manage the session. This module logs the user name and the service type to /var/log/secure at the beginning and end of each session. This module can be supplemented by stacking it with other session modules for additional functionality. [2] There are many complex control flags that can be set. These are set in attribute=value pairs; a complete list of attributes is available in the pam.d manpage.
"module_interface control_flag module_name module_arguments",
"auth required pam_unix.so",
"cat /etc/pam.d/setup auth sufficient pam_rootok.so auth include system-auth account required pam_permit.so session required pam_permit.so",
"password requisite pam_pwquality.so enforce_for_root retry=3",
"#%PAM-1.0 auth required pam_securetty.so auth required pam_unix.so nullok auth required pam_nologin.so account required pam_unix.so password required pam_pwquality.so retry=3 password required pam_unix.so shadow nullok use_authtok session required pam_unix.so"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/PAM_Configuration_Files |
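To tie the module_interface control_flag module_name module_arguments syntax described above to a concrete file, the following sketch writes a PAM configuration for a hypothetical service named myservice; the service name is invented for illustration, and authconfig remains the recommended way to change system-wide PAM settings:

# Create a PAM service file for a hypothetical application called "myservice"
cat > /etc/pam.d/myservice << 'EOF'
#%PAM-1.0
auth     required   pam_securetty.so
auth     include    system-auth
account  include    system-auth
password include    system-auth
session  required   pam_unix.so
EOF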
Appendix B. Understanding the luks_tang_inventory.yml file | Appendix B. Understanding the luks_tang_inventory.yml file B.1. Configuration parameters for disk encryption hc_nodes (required) A list of hyperconverged hosts that uses the back-end FQDN of the host, and the configuration details of those hosts. Configuration that is specific to a host is defined under that host's back-end FQDN. Configuration that is common to all hosts is defined in the vars: section. blacklist_mpath_devices (optional) By default, Red Hat Virtualization Host enables multipath configuration, which provides unique multipath names and worldwide identifiers for all disks, even when disks do not have underlying multipath configuration. Include this section if you do not have multipath configuration so that the multipath device names are not used for listed devices. Disks that are not listed here are assumed to have multipath configuration available, and require the path format /dev/mapper/<WWID> instead of /dev/sdx when defined in subsequent sections of the inventory file. On a server with four devices (sda, sdb, sdc and sdd), the following configuration blacklists only two devices. The path format /dev/mapper/<WWID> is expected for devices not in this list. gluster_infra_luks_devices (required) A list of devices to encrypt and the encryption passphrase to use for each device. devicename The name of the device in the format /dev/sdx . passphrase The password to use for this device when configuring encryption. After disk encryption with Network-Bound Disk Encryption (NBDE) is configured, a new random key is generated, providing greater security. rootpassphrase (required) The password that you used when you selected Encrypt my data during operating system installation on this host. rootdevice (required) The root device that was encrypted when you selected Encrypt my data during operating system installation on this host. networkinterface (required) The network interface this host uses to reach the NBDE key server. ip_version (required) Whether to use IPv4 or IPv6 networking. Valid values are IPv4 and IPv6 . There is no default value. Mixed networks are not supported. ip_config_method (required) Whether to use DHCP or static networking. Valid values are dhcp and static . There is no default value. The other valid value for this option is static , which requires the following additional parameters and is defined individually for each host: gluster_infra_tangservers The address of your NBDE key server or servers, including http:// . If your servers use a port other than the default (80), specify a port by appending :_port_ to the end of the URL. B.2. Example luks_tang_inventory.yml Dynamically allocated IP addresses Static IP addresses | [
"hc_nodes: hosts: host1backend.example.com: [configuration specific to this host] host2backend.example.com: host3backend.example.com: host4backend.example.com: host5backend.example.com: host6backend.example.com: vars: [configuration common to all hosts]",
"hc_nodes: hosts: host1backend.example.com: blacklist_mpath_devices: - sdb - sdc",
"hc_nodes: hosts: host1backend.example.com: gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: Str0ngPa55#",
"hc_nodes: hosts: host1backend.example.com: rootpassphrase: h1-Str0ngPa55#",
"hc_nodes: hosts: host1backend.example.com: rootdevice: /dev/sda2",
"hc_nodes: hosts: host1backend.example.com: networkinterface: ens3s0f0",
"hc_nodes: vars: ip_version: IPv4",
"hc_nodes: vars: ip_config_method: dhcp",
"hc_nodes: hosts: host1backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.101 host_ip_prefix: 24 host_net_gateway: 192.168.1.100 host2backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.102 host_ip_prefix: 24 host_net_gateway: 192.168.1.100 host3backend.example.com : ip_config_method: static host_ip_addr: 192.168.1.102 host_ip_prefix: 24 host_net_gateway: 192.168.1.100",
"hc_nodes: vars: gluster_infra_tangservers: - url: http:// key-server1.example.com - url: http:// key-server2.example.com : 80",
"hc_nodes: hosts: host1-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host1-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host2-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host2-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host3-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host3-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 vars: ip_version: IPv4 ip_config_method: dhcp gluster_infra_tangservers: - url: http:// key-server1.example.com :80 - url: http:// key-server2.example.com :80",
"hc_nodes: hosts: host1-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host1-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway host2-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host2-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway host3-backend.example.com : blacklist_mpath_devices: - sda - sdb - sdc gluster_infra_luks_devices: - devicename: /dev/sdb passphrase: dev-sdb-encrypt-passphrase - devicename: /dev/sdc passphrase: dev-sdc-encrypt-passphrase rootpassphrase: host3-root-passphrase rootdevice: /dev/sda2 networkinterface: eth0 host_ip_addr: host1-static-ip host_ip_prefix: network-prefix host_net_gateway: default-network-gateway vars: ip_version: IPv4 ip_config_method: static gluster_infra_tangservers: - url: http:// key-server1.example.com :80 - url: http:// key-server2.example.com :80"
] | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/understanding-the-luks_tang_inventory-yml-file |
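After the inventory above is completed, it is normally encrypted and then passed to the disk-encryption playbook with ansible-playbook. The playbook path below is a placeholder, not taken from this appendix; substitute the playbook shipped with your deployment:

# Protect the passphrases stored in the inventory file (assumes ansible-vault is available)
ansible-vault encrypt luks_tang_inventory.yml

# Run the disk-encryption playbook against the completed inventory (playbook path is a placeholder)
ansible-playbook -i luks_tang_inventory.yml <path_to_luks_tang_playbook>.yml --ask-vault-pass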
Chapter 85. Openshift Builds | Chapter 85. Openshift Builds Since Camel 2.17 Only producer is supported The Openshift Builds component is one of the Kubernetes Components which provides a producer to execute Openshift builds operations. 85.1. Dependencies When using openshift-builds with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 85.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 85.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 85.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 85.3. Component Options The Openshift Builds component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 85.4. 
Endpoint Options The Openshift Builds endpoint is configured using URI syntax: with the following path and query parameters: 85.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 85.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 85.5. Message Headers The Openshift Builds component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesBuildsLabels (producer) Constant: KUBERNETES_BUILDS_LABELS The Openshift build labels. Map CamelKubernetesBuildName (producer) Constant: KUBERNETES_BUILD_NAME The Openshift build name. String 85.6. Supported producer operation listBuilds listBuildsByLabels getBuild 85.7. Openshift Builds Producer Examples listBuilds: this operation list the Builds on an Openshift cluster. from("direct:list"). toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds"). to("mock:result"); This operation returns a List of Builds from your Openshift cluster. listBuildsByLabels: this operation list the builds by labels on an Openshift cluster. 
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels); } }); toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels"). to("mock:result"); This operation returns a List of Builds from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 85.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"openshift-builds:masterUrl",
"from(\"direct:list\"). toF(\"openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuilds\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILDS_LABELS, labels); } }); toF(\"openshift-builds:///?kubernetesClient=#kubernetesClient&operation=listBuildsByLabels\"). to(\"mock:result\");"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-openshift-builds-component-starter |
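getBuild: the reference above lists getBuild as a supported producer operation but does not include an example for it. The following route is an illustrative sketch rather than an excerpt from the reference: it mirrors the style of the listBuilds and listBuildsByLabels examples (and, like them, belongs inside a RouteBuilder), it uses the documented KUBERNETES_NAMESPACE_NAME and KUBERNETES_BUILD_NAME header constants from the Message Headers table, and the namespace and build name values ("default", "my-build") are placeholders to replace with your own.

from("direct:get").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Illustrative values; both headers are documented in the Message Headers table above.
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_BUILD_NAME, "my-build");
    }
}).
toF("openshift-builds:///?kubernetesClient=#kubernetesClient&operation=getBuild").
to("mock:result");

If the lookup succeeds, the exchange body should contain the matching Build from the cluster.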
About | About OpenShift Container Platform 4.18 Introduction to OpenShift Container Platform Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/about/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_automated_rules_on_cryostat/making-open-source-more-inclusive |
6.4. State Transfer | 6.4. State Transfer State transfer is a basic data grid or clustered cache functionality. Without state transfer, data would be lost as nodes are added to or removed from the cluster. State transfer adjusts the cache's internal state in response to a change in a cache membership. The change can be when a node joins or leaves, when two or more cluster partitions merge, or a combination of joins, leaves, and merges. State transfer occurs automatically in Red Hat JBoss Data Grid whenever a node joins or leaves the cluster. In Red Hat JBoss Data Grid's replication mode, a new node joining the cache receives the entire cache state from the existing nodes. In distribution mode, the new node receives only a part of the state from the existing nodes, and the existing nodes remove some of their state in order to keep numOwners copies of each key in the cache (as determined through consistent hashing). In invalidation mode the initial state transfer is similar to replication mode, the only difference being that the nodes are not guaranteed to have the same state. When a node leaves, a replicated mode or invalidation mode cache does not perform any state transfer. A distributed cache needs to make additional copies of the keys that were stored on the leaving nodes, again to keep numOwners copies of each key. A State Transfer transfers both in-memory and persistent state by default, but both can be disabled in the configuration. When State Transfer is disabled a ClusterLoader must be configured, otherwise a node will become the owner or backup owner of a key without the data being loaded into its cache. In addition, if State Transfer is disabled in distributed mode then a key will occasionally have less than numOwners owners. 6.4.1. Non-Blocking State Transfer Non-Blocking State Transfer in Red Hat JBoss Data Grid minimizes the time in which a cluster or node is unable to respond due to a state transfer in progress. Non-blocking state transfer is a core architectural improvement with the following goals: Minimize the interval(s) where the entire cluster cannot respond to requests because of a state transfer in progress. Minimize the interval(s) where an existing member stops responding to requests because of a state transfer in progress. Allow state transfer to occur with a drop in the performance of the cluster. However, the drop in the performance during the state transfer does not throw any exception, and allows processes to continue. Allows a GET operation to successfully retrieve a key from another node without returning a null value during a progressive state transfer. For simplicity, the total order-based commit protocol uses a blocking version of the currently implemented state transfer mechanism. The main differences between the regular state transfer and the total order state transfer are: The blocking protocol queues the transaction delivery during the state transfer. State transfer control messages (such as CacheTopologyControlCommand) are sent according to the total order information. The total order-based commit protocol works with the assumption that all the transactions are delivered in the same order and they see the same data set. So, no transactions are validated during the state transfer because all the nodes must have the most recent key or values in memory. 
Using the state transfer and blocking protocol in this manner allows the state transfer and transaction delivery on all the nodes to be synchronized. However, transactions that are already involved in a state transfer (sent before the state transfer began and delivered after it concludes) must be resent. When resent, these transactions are treated as new joiners and assigned a new total order value. 6.4.2. Suppress State Transfer via JMX State transfer can be suppressed using JMX in order to bring down and relaunch a cluster for maintenance. This operation permits a more efficient cluster shutdown and startup, and removes the risk of Out Of Memory errors when bringing down a grid. When a new node joins the cluster and rebalancing is suspended, the getCache() call will time out after stateTransfer.timeout expires unless rebalancing is re-enabled or stateTransfer.awaitInitialTransfer is set to false . Disabling state transfer and rebalancing can be used for partial cluster shutdown or restart; however, there is the possibility that data may be lost in a partial cluster shutdown due to state transfer being disabled. 6.4.3. The rebalancingEnabled Attribute Suppressing rebalancing can only be triggered via the rebalancingEnabled JMX attribute, and requires no specific configuration. The rebalancingEnabled attribute can be modified for the entire cluster from the LocalTopologyManager JMX Mbean on any node. This attribute is true by default, and is configurable programmatically (an illustrative JMX sketch follows the configuration setting at the end of this section). Servers such as Hot Rod attempt to start all caches declared in the configuration during startup. If rebalancing is disabled, the cache will fail to start. Therefore, it is mandatory to use the following setting in a server environment: | [
"<await-initial-transfer=\"false\"/>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-state_transfer |
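To make the rebalancingEnabled guidance above more concrete, the following Java class is a minimal sketch of flipping the attribute over standard JMX. It is an illustration, not an excerpt from the guide: the JMX service URL, host, port, and the exact ObjectName of the LocalTopologyManager MBean are assumptions that depend on your cache manager name and JMX setup, so verify them against your own deployment (for example with a JMX console) before use.

import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SuspendRebalancing {
    public static void main(String[] args) throws Exception {
        // Hypothetical host and port; point this at the JMX endpoint of any node in the cluster.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://node1:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Assumed ObjectName pattern; adjust the domain and cache manager name to match your server.
            ObjectName topologyManager = new ObjectName(
                    "org.infinispan:type=CacheManager,name=\"DefaultCacheManager\",component=LocalTopologyManager");
            // Setting the documented rebalancingEnabled attribute to false suppresses rebalancing cluster-wide.
            connection.setAttribute(topologyManager, new Attribute("rebalancingEnabled", false));
        } finally {
            connector.close();
        }
    }
}

Re-enable rebalancing the same way by setting the attribute back to true once maintenance is complete.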
7.7. Searching the Audit Log Files | 7.7. Searching the Audit Log Files The ausearch utility allows you to search Audit log files for specific events. By default, ausearch searches the /var/log/audit/audit.log file. You can specify a different file by passing the -if file_name option to ausearch. Supplying multiple options in one ausearch command is equivalent to using the AND operator. Example 7.6. Using ausearch to search Audit log files To search the /var/log/audit/audit.log file for failed login attempts, use the following command: To search for all account, group, and role changes, use the following command: To search for all logged actions performed by a certain user, using the user's login ID ( auid ), use the following command: To search for all failed system calls from yesterday up until now, use the following command: For a full listing of all ausearch options, see the ausearch (8) man page. | [
"~]# ausearch --message USER_LOGIN --success no --interpret",
"~]# ausearch -m ADD_USER -m DEL_USER -m ADD_GROUP -m USER_CHAUTHTOK -m DEL_GROUP -m CHGRP_ID -m ROLE_ASSIGN -m ROLE_REMOVE -i",
"~]# ausearch -ua 500 -i",
"~]# ausearch --start yesterday --end now -m SYSCALL -sv no -i"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-searching_the_audit_log_files |
Chapter 86. ExternalConfigurationVolumeSource schema reference | Chapter 86. ExternalConfigurationVolumeSource schema reference Used in: ExternalConfiguration Property Property type Description configMap ConfigMapVolumeSource Reference to a key in a ConfigMap. Exactly one Secret or ConfigMap has to be specified. name string Name of the volume which will be added to the Kafka Connect pods. secret SecretVolumeSource Reference to a key in a Secret. Exactly one Secret or ConfigMap has to be specified. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-ExternalConfigurationVolumeSource-reference |
Chapter 4. FlowSchema [flowcontrol.apiserver.k8s.io/v1] | Chapter 4. FlowSchema [flowcontrol.apiserver.k8s.io/v1] Description FlowSchema defines the schema of a group of flows. Note that a flow is made up of a set of inbound API requests with similar attributes and is identified by a pair of strings: the name of the FlowSchema and a "flow distinguisher". Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object FlowSchemaSpec describes how the FlowSchema's specification looks like. status object FlowSchemaStatus represents the current state of a FlowSchema. 4.1.1. .spec Description FlowSchemaSpec describes how the FlowSchema's specification looks like. Type object Required priorityLevelConfiguration Property Type Description distinguisherMethod object FlowDistinguisherMethod specifies the method of a flow distinguisher. matchingPrecedence integer matchingPrecedence is used to choose among the FlowSchemas that match a given request. The chosen FlowSchema is among those with the numerically lowest (which we take to be logically highest) MatchingPrecedence. Each MatchingPrecedence value must be ranged in [1,10000]. Note that if the precedence is not specified, it will be set to 1000 as default. priorityLevelConfiguration object PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. rules array rules describes which requests will match this flow schema. This FlowSchema matches a request if and only if at least one member of rules matches the request. if it is an empty slice, there will be no requests matching the FlowSchema. rules[] object PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. 4.1.2. .spec.distinguisherMethod Description FlowDistinguisherMethod specifies the method of a flow distinguisher. Type object Required type Property Type Description type string type is the type of flow distinguisher method The supported types are "ByUser" and "ByNamespace". Required. 4.1.3. .spec.priorityLevelConfiguration Description PriorityLevelConfigurationReference contains information that points to the "request-priority" being used. Type object Required name Property Type Description name string name is the name of the priority level configuration being referenced Required. 4.1.4. .spec.rules Description rules describes which requests will match this flow schema. 
This FlowSchema matches a request if and only if at least one member of rules matches the request. if it is an empty slice, there will be no requests matching the FlowSchema. Type array 4.1.5. .spec.rules[] Description PolicyRulesWithSubjects prescribes a test that applies to a request to an apiserver. The test considers the subject making the request, the verb being requested, and the resource to be acted upon. This PolicyRulesWithSubjects matches a request if and only if both (a) at least one member of subjects matches the request and (b) at least one member of resourceRules or nonResourceRules matches the request. Type object Required subjects Property Type Description nonResourceRules array nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. nonResourceRules[] object NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. resourceRules array resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. resourceRules[] object ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and least one member of namespaces matches the request's namespace. subjects array subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. subjects[] object Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. 4.1.6. .spec.rules[].nonResourceRules Description nonResourceRules is a list of NonResourcePolicyRules that identify matching requests according to their verb and the target non-resource URL. Type array 4.1.7. .spec.rules[].nonResourceRules[] Description NonResourcePolicyRule is a predicate that matches non-resource requests according to their verb and the target non-resource URL. A NonResourcePolicyRule matches a request if and only if both (a) at least one member of verbs matches the request and (b) at least one member of nonResourceURLs matches the request. Type object Required verbs nonResourceURLs Property Type Description nonResourceURLs array (string) nonResourceURLs is a set of url prefixes that a user should have access to and may not be empty. For example: - "/healthz" is legal - "/hea*" is illegal - "/hea" is legal but matches nothing - "/hea/ " also matches nothing - "/healthz/ " matches all per-component health checks. "*" matches all non-resource urls. if it is present, it must be the only entry. Required. 
verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs. If it is present, it must be the only entry. Required. 4.1.8. .spec.rules[].resourceRules Description resourceRules is a slice of ResourcePolicyRules that identify matching requests according to their verb and the target resource. At least one of resourceRules and nonResourceRules has to be non-empty. Type array 4.1.9. .spec.rules[].resourceRules[] Description ResourcePolicyRule is a predicate that matches some resource requests, testing the request's verb and the target resource. A ResourcePolicyRule matches a resource request if and only if: (a) at least one member of verbs matches the request, (b) at least one member of apiGroups matches the request, (c) at least one member of resources matches the request, and (d) either (d1) the request does not specify a namespace (i.e., Namespace=="" ) and clusterScope is true or (d2) the request specifies a namespace and least one member of namespaces matches the request's namespace. Type object Required verbs apiGroups resources Property Type Description apiGroups array (string) apiGroups is a list of matching API groups and may not be empty. "*" matches all API groups and, if present, must be the only entry. Required. clusterScope boolean clusterScope indicates whether to match requests that do not specify a namespace (which happens either because the resource is not namespaced or the request targets all namespaces). If this field is omitted or false then the namespaces field must contain a non-empty list. namespaces array (string) namespaces is a list of target namespaces that restricts matches. A request that specifies a target namespace matches only if either (a) this list contains that target namespace or (b) this list contains " ". Note that " " matches any specified namespace but does not match a request that does not specify a namespace (see the clusterScope field for that). This list may be empty, but only if clusterScope is true. resources array (string) resources is a list of matching resources (i.e., lowercase and plural) with, if desired, subresource. For example, [ "services", "nodes/status" ]. This list may not be empty. "*" matches all resources and, if present, must be the only entry. Required. verbs array (string) verbs is a list of matching verbs and may not be empty. "*" matches all verbs and, if present, must be the only entry. Required. 4.1.10. .spec.rules[].subjects Description subjects is the list of normal user, serviceaccount, or group that this rule cares about. There must be at least one member in this slice. A slice that includes both the system:authenticated and system:unauthenticated user groups matches every request. Required. Type array 4.1.11. .spec.rules[].subjects[] Description Subject matches the originator of a request, as identified by the request authentication system. There are three ways of matching an originator; by user, group, or service account. Type object Required kind Property Type Description group object GroupSubject holds detailed information for group-kind subject. kind string kind indicates which one of the other fields is non-empty. Required serviceAccount object ServiceAccountSubject holds detailed information for service-account-kind subject. user object UserSubject holds detailed information for user-kind subject. 4.1.12. .spec.rules[].subjects[].group Description GroupSubject holds detailed information for group-kind subject. 
Type object Required name Property Type Description name string name is the user group that matches, or "*" to match all user groups. See https://github.com/kubernetes/apiserver/blob/master/pkg/authentication/user/user.go for some well-known group names. Required. 4.1.13. .spec.rules[].subjects[].serviceAccount Description ServiceAccountSubject holds detailed information for service-account-kind subject. Type object Required namespace name Property Type Description name string name is the name of matching ServiceAccount objects, or "*" to match regardless of name. Required. namespace string namespace is the namespace of matching ServiceAccount objects. Required. 4.1.14. .spec.rules[].subjects[].user Description UserSubject holds detailed information for user-kind subject. Type object Required name Property Type Description name string name is the username that matches, or "*" to match all usernames. Required. 4.1.15. .status Description FlowSchemaStatus represents the current state of a FlowSchema. Type object Property Type Description conditions array conditions is a list of the current states of FlowSchema. conditions[] object FlowSchemaCondition describes conditions for a FlowSchema. 4.1.16. .status.conditions Description conditions is a list of the current states of FlowSchema. Type array 4.1.17. .status.conditions[] Description FlowSchemaCondition describes conditions for a FlowSchema. Type object Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. Required. type string type is the type of the condition. Required. 4.2. API endpoints The following API endpoints are available: /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas DELETE : delete collection of FlowSchema GET : list or watch objects of kind FlowSchema POST : create a FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas GET : watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name} DELETE : delete a FlowSchema GET : read the specified FlowSchema PATCH : partially update the specified FlowSchema PUT : replace the specified FlowSchema /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas/{name} GET : watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name}/status GET : read status of the specified FlowSchema PATCH : partially update status of the specified FlowSchema PUT : replace status of the specified FlowSchema 4.2.1. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas HTTP method DELETE Description delete collection of FlowSchema Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. 
HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind FlowSchema Table 4.3. HTTP responses HTTP code Response body 200 - OK FlowSchemaList schema 401 - Unauthorized Empty HTTP method POST Description create a FlowSchema Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body FlowSchema schema Table 4.6. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 202 - Accepted FlowSchema schema 401 - Unauthorized Empty 4.2.2. /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas HTTP method GET Description watch individual changes to a list of FlowSchema. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name} Table 4.8. Global path parameters Parameter Type Description name string name of the FlowSchema HTTP method DELETE Description delete a FlowSchema Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FlowSchema Table 4.11. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FlowSchema Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FlowSchema Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body FlowSchema schema Table 4.16. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty 4.2.4. /apis/flowcontrol.apiserver.k8s.io/v1/watch/flowschemas/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the FlowSchema HTTP method GET Description watch changes to an object of kind FlowSchema. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas/{name}/status Table 4.19. Global path parameters Parameter Type Description name string name of the FlowSchema HTTP method GET Description read status of the specified FlowSchema Table 4.20. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified FlowSchema Table 4.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted.
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.22. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified FlowSchema Table 4.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.24. Body parameters Parameter Type Description body FlowSchema schema Table 4.25. HTTP responses HTTP code Response body 200 - OK FlowSchema schema 201 - Created FlowSchema schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/schedule_and_quota_apis/flowschema-flowcontrol-apiserver-k8s-io-v1
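The following manifest is a minimal sketch of how the spec fields described in this chapter — rules, subjects, and resourceRules — fit together. The FlowSchema name, the priority level it references, and the service account are illustrative assumptions, not values taken from the API reference above.
apiVersion: flowcontrol.apiserver.k8s.io/v1
kind: FlowSchema
metadata:
  name: example-controller-reads        # hypothetical name
spec:
  priorityLevelConfiguration:
    name: global-default                 # assumes this priority level exists in the cluster
  matchingPrecedence: 1000
  rules:
  - subjects:                            # who the rule applies to (section 4.1.10)
    - kind: ServiceAccount
      serviceAccount:
        name: example-controller         # hypothetical service account
        namespace: example-namespace
    resourceRules:                       # which requests it matches (section 4.1.9)
    - verbs: ["list", "watch"]
      apiGroups: [""]
      resources: ["configmaps"]
      clusterScope: true
      namespaces: ["*"]
Such an object would be created with a POST to /apis/flowcontrol.apiserver.k8s.io/v1/flowschemas (section 4.2.1), and its resulting state is reported through the status conditions described in section 4.1.15.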
5.2. Storage Addressing Concepts | 5.2. Storage Addressing Concepts The configuration of disk platters, heads, and access arms makes it possible to position the head over any part of any surface of any platter in the mass storage device. However, this is not sufficient; to use this storage capacity, we must have some method of giving addresses to uniform-sized parts of the available storage. There is one final aspect to this process that is required. Consider all the tracks in the many cylinders present in a typical mass storage device. Because the tracks have varying diameters, their circumference also varies. Therefore, if storage was addressed only to the track level, each track would have different amounts of data -- track #0 (being near the center of the platter) might hold 10,827 bytes, while track #1,258 (near the outside edge of the platter) might hold 15,382 bytes. The solution is to divide each track into multiple sectors or blocks of consistently-sized (often 512 bytes) segments of storage. The result is that each track contains a set number [16] of sectors. A side effect of this is that every track contains unused space -- the space between the sectors. Despite the constant number of sectors in each track, the amount of unused space varies -- relatively little unused space in the inner tracks, and a great deal more unused space in the outer tracks. In either case, this unused space is wasted, as data cannot be stored on it. However, the advantage offsetting this wasted space is that effectively addressing the storage on a mass storage device is now possible. In fact, there are two methods of addressing -- geometry-based addressing and block-based addressing. 5.2.1. Geometry-Based Addressing The term geometry-based addressing refers to the fact that mass storage devices actually store data at a specific physical spot on the storage medium. In the case of the devices being described here, this refers to three specific items that define a specific point on the device's disk platters: Cylinder Head Sector The following sections describe how a hypothetical address can describe a specific physical location on the storage medium. 5.2.1.1. Cylinder As stated earlier, the cylinder denotes a specific position of the access arm (and therefore, the read/write heads). By specifying a particular cylinder, we are eliminating all other cylinders, reducing our search to only one track for each surface in the mass storage device. Table 5.1. Storage Addressing Cylinder Head Sector 1014 X X In Table 5.1, "Storage Addressing" , the first part of a geometry-based address has been filled in. Two more components to this address -- the head and sector -- remain undefined. 5.2.1.2. Head Although in the strictest sense we are selecting a particular disk platter, because each surface has a read/write head dedicated to it, it is easier to think in terms of interacting with a specific head. In fact, the device's underlying electronics actually select one head and -- deselecting the rest -- only interact with the selected head for the duration of the I/O operation. All other tracks that make up the current cylinder have now been eliminated. Table 5.2. Storage Addressing Cylinder Head Sector 1014 2 X In Table 5.2, "Storage Addressing" , the first two parts of a geometry-based address have been filled in. One final component to this address -- the sector -- remains undefined. 5.2.1.3. 
Sector By specifying a particular sector, we have completed the addressing, and have uniquely identified the desired block of data. Table 5.3. Storage Addressing Cylinder Head Sector 1014 2 12 In Table 5.3, "Storage Addressing" , the complete geometry-based address has been filled in. This address identifies the location of one specific block out of all the other blocks on this device. 5.2.1.4. Problems with Geometry-Based Addressing While geometry-based addressing is straightforward, there is an area of ambiguity that can cause problems. The ambiguity is in numbering the cylinders, heads, and sectors. It is true that each geometry-based address uniquely identifies one specific data block, but that only applies if the numbering scheme for the cylinders, heads, and sectors is not changed. If the numbering scheme changes (such as when the hardware/software interacting with the storage device changes), then the mapping between geometry-based addresses and their corresponding data blocks can change, making it impossible to access the desired data. Because of this potential for ambiguity, a different approach to addressing was developed. The next section describes it in more detail. [16] While early mass storage devices used the same number of sectors for every track, later devices divided the range of cylinders into different "zones," with each zone having a different number of sectors per track. The reason for this is to take advantage of the additional space between sectors in the outer cylinders, where there is more unused space between sectors. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-storage-data-addr
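As a rough illustration of the alternative approach the text alludes to, a geometry-based address can be collapsed into a single logical block number with the conventional mapping below. The disk geometry used here (16 heads per cylinder, 63 sectors per track) is an assumed example, not a value taken from the section above.
block number = (cylinder × heads per cylinder + head) × sectors per track + (sector − 1)
For the address used in Table 5.3 (cylinder 1014, head 2, sector 12), this gives (1014 × 16 + 2) × 63 + (12 − 1) = 16226 × 63 + 11 = 1,022,249. Because block-based addressing stores only this single number, it does not depend on how the cylinders, heads, and sectors happen to be numbered, which is precisely the ambiguity described in section 5.2.1.4.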
Chapter 7. EgressIP [k8s.ovn.org/v1] | Chapter 7. EgressIP [k8s.ovn.org/v1] Description EgressIP is a CRD allowing the user to define a fixed source IP for all egress traffic originating from any pods which match the EgressIP resource according to its spec definition. Type object Required spec 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressIP. status object Observed status of EgressIP. Read-only. 7.1.1. .spec Description Specification of the desired behavior of EgressIP. Type object Required egressIPs namespaceSelector Property Type Description egressIPs array (string) EgressIPs is the list of egress IP addresses requested. Can be IPv4 and/or IPv6. This field is mandatory. namespaceSelector object NamespaceSelector applies the egress IP only to the namespace(s) whose label matches this definition. This field is mandatory. podSelector object PodSelector applies the egress IP only to the pods whose label matches this definition. This field is optional, and in case it is not set: results in the egress IP being applied to all pods in the namespace(s) matched by the NamespaceSelector. In case it is set: is intersected with the NamespaceSelector, thus applying the egress IP to the pods (in the namespace(s) already matched by the NamespaceSelector) which match this pod selector. 7.1.2. .spec.namespaceSelector Description NamespaceSelector applies the egress IP only to the namespace(s) whose label matches this definition. This field is mandatory. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.3. .spec.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.4. .spec.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.5. .spec.podSelector Description PodSelector applies the egress IP only to the pods whose label matches this definition. This field is optional, and in case it is not set: results in the egress IP being applied to all pods in the namespace(s) matched by the NamespaceSelector. In case it is set: is intersected with the NamespaceSelector, thus applying the egress IP to the pods (in the namespace(s) already matched by the NamespaceSelector) which match this pod selector. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 7.1.6. .spec.podSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 7.1.7. .spec.podSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 7.1.8. .status Description Observed status of EgressIP. Read-only. Type object Required items Property Type Description items array The list of assigned egress IPs and their corresponding node assignment. items[] object The per node status, for those egress IPs who have been assigned. 7.1.9. .status.items Description The list of assigned egress IPs and their corresponding node assignment. Type array 7.1.10. .status.items[] Description The per node status, for those egress IPs who have been assigned. Type object Required egressIP node Property Type Description egressIP string Assigned egress IP node string Assigned node name 7.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressips DELETE : delete collection of EgressIP GET : list objects of kind EgressIP POST : create an EgressIP /apis/k8s.ovn.org/v1/egressips/{name} DELETE : delete an EgressIP GET : read the specified EgressIP PATCH : partially update the specified EgressIP PUT : replace the specified EgressIP 7.2.1. /apis/k8s.ovn.org/v1/egressips HTTP method DELETE Description delete collection of EgressIP Table 7.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressIP Table 7.2. HTTP responses HTTP code Response body 200 - OK EgressIPList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressIP Table 7.3.
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.4. Body parameters Parameter Type Description body EgressIP schema Table 7.5. HTTP responses HTTP code Response body 200 - OK EgressIP schema 201 - Created EgressIP schema 202 - Accepted EgressIP schema 401 - Unauthorized Empty 7.2.2. /apis/k8s.ovn.org/v1/egressips/{name} Table 7.6. Global path parameters Parameter Type Description name string name of the EgressIP HTTP method DELETE Description delete an EgressIP Table 7.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressIP Table 7.9. HTTP responses HTTP code Response body 200 - OK EgressIP schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressIP Table 7.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.11. HTTP responses HTTP code Response body 200 - OK EgressIP schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressIP Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body EgressIP schema Table 7.14. HTTP responses HTTP code Response body 200 - OK EgressIP schema 201 - Created EgressIP schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_apis/egressip-k8s-ovn-org-v1
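A minimal EgressIP manifest showing how the spec fields described above combine might look like the following. The resource name, IP address, and labels are hypothetical placeholders rather than values defined by the API reference.
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-example                 # hypothetical name
spec:
  egressIPs:                             # mandatory list of requested egress IPs
  - 192.168.126.10
  namespaceSelector:                     # mandatory namespace match (section 7.1.2)
    matchLabels:
      environment: production
  podSelector:                           # optional pod match (section 7.1.5)
    matchExpressions:
    - key: app
      operator: In
      values:
      - web
Because podSelector is set, the egress IP applies only to pods labeled app=web in namespaces labeled environment=production; once the IP is assigned, the node assignment appears in .status.items as described in section 7.1.9.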
function::proc_mem_data_pid | function::proc_mem_data_pid Name function::proc_mem_data_pid - Program data size (data + stack) in pages Synopsis Arguments pid The pid of process to examine Description Returns the given process data size (data + stack) in pages, or zero when the process doesn't exist or the number of pages couldn't be retrieved. | [
"function proc_mem_data_pid:long(pid:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-data-pid |
Part VI. Installing and configuring KIE Server on Oracle WebLogic Server | Part VI. Installing and configuring KIE Server on Oracle WebLogic Server Red Hat Decision Manager is a subset of Red Hat Process Automation Manager. Starting with this release, the distribution files for Red Hat Decision Manager are replaced with Red Hat Process Automation Manager files. There are no Decision Manager artifacts. The Red Hat Decision Manager subscription, support entitlements, and fees remain the same. Red Hat Decision Manager subscribers will continue to receive full support for the decision management and optimization capabilities of Red Hat Decision Manager. The business process management (BPM) capabilities of Red Hat Process Automation Manager are exclusive to Red Hat Process Automation Manager subscribers. They are available for use by Red Hat Decision Manager subscribers but with development support services only. Red Hat Decision Manager subscribers can upgrade to a full Red Hat Process Automation Manager subscription at any time to receive full support for BPM features. As a system administrator, you can configure your Oracle WebLogic Server for Red Hat KIE Server and install KIE Server on that Oracle server instance. Note Support for Red Hat Decision Manager on Oracle WebLogic Server is now in the maintenance phase. Red Hat will continue to support Red Hat Decision Manager on Oracle WebLogic Server with the following limitations: Red Hat will not release new certifications or software functionality. Red Hat will release only qualified security patches that have a critical impact and mission-critical bug fix patches. In the future, Red Hat might direct customers to migrate to new platforms and product components that are compatible with the Red Hat hybrid cloud strategy. Prerequisites An Oracle WebLogic Server instance version 12.2.1.3.0 or later is installed. For complete installation instructions, see the Oracle WebLogic Server product page . You have access to the Oracle WebLogic Server Administration Console, usually at http://<HOST>:7001/console . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/assembly-installing-kie-server-on-wls |
Chapter 1. Release notes for the Red Hat build of OpenTelemetry | Chapter 1. Release notes for the Red Hat build of OpenTelemetry 1.1. Red Hat build of OpenTelemetry overview Red Hat build of OpenTelemetry is based on the open source OpenTelemetry project , which aims to provide unified, standardized, and vendor-neutral telemetry data collection for cloud-native software. Red Hat build of OpenTelemetry product provides support for deploying and managing the OpenTelemetry Collector and simplifying the workload instrumentation. The OpenTelemetry Collector can receive, process, and forward telemetry data in multiple formats, making it the ideal component for telemetry processing and interoperability between telemetry systems. The Collector provides a unified solution for collecting and processing metrics, traces, and logs. The OpenTelemetry Collector has a number of features including the following: Data Collection and Processing Hub It acts as a central component that gathers telemetry data like metrics and traces from various sources. This data can be created from instrumented applications and infrastructure. Customizable telemetry data pipeline The OpenTelemetry Collector is designed to be customizable. It supports various processors, exporters, and receivers. Auto-instrumentation features Automatic instrumentation simplifies the process of adding observability to applications. Developers don't need to manually instrument their code for basic telemetry data. Here are some of the use cases for the OpenTelemetry Collector: Centralized data collection In a microservices architecture, the Collector can be deployed to aggregate data from multiple services. Data enrichment and processing Before forwarding data to analysis tools, the Collector can enrich, filter, and process this data. Multi-backend receiving and exporting The Collector can receive and send data to multiple monitoring and analysis platforms simultaneously. You can use the Red Hat build of OpenTelemetry in combination with the Red Hat OpenShift distributed tracing platform (Tempo) . Note Only supported features are documented. Undocumented features are currently unsupported. If you need assistance with a feature, contact Red Hat's support. 1.2. Release notes for Red Hat build of OpenTelemetry 3.5 The Red Hat build of OpenTelemetry 3.5 is provided through the Red Hat build of OpenTelemetry Operator 0.119.0 . Note The Red Hat build of OpenTelemetry 3.5 is based on the open source OpenTelemetry release 0.119.0. 1.2.1. New features and enhancements This update introduces the following enhancements: The following Technology Preview features reach General Availability: Host Metrics Receiver Kubelet Stats Receiver With this update, the OpenTelemetry Collector uses the OTLP HTTP Exporter to push logs to a LokiStack instance. With this update, the Operator automatically creates RBAC rules for the Kubernetes Events Receiver ( k8sevents ), Kubernetes Cluster Receiver ( k8scluster ), and Kubernetes Objects Receiver ( k8sobjects ) if the Operator has sufficient permissions. For more information, see "Creating the required RBAC resources automatically" in Configuring the Collector . 1.2.2. Deprecated functionality In the Red Hat build of OpenTelemetry 3.5, the Loki Exporter, which is a temporary Technology Preview feature, is deprecated. The Loki Exporter is planned to be removed in the Red Hat build of OpenTelemetry 3.6. 
If you currently use the Loki Exporter for the OpenShift Logging 6.1 or later, replace the Loki Exporter with the OTLP HTTP Exporter. Important The Loki Exporter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.2.3. Bug fixes This update introduces the following bug fix: Before this update, manually created routes for the Collector services were unintentionally removed when the Operator pod was restarted. With this update, restarting the Operator pod does not result in the removal of the manually created routes. 1.3. Release notes for Red Hat build of OpenTelemetry 3.4 The Red Hat build of OpenTelemetry 3.4 is provided through the Red Hat build of OpenTelemetry Operator 0.113.0 . The Red Hat build of OpenTelemetry 3.4 is based on the open source OpenTelemetry release 0.113.0. 1.3.1. Technology Preview features This update introduces the following Technology Preview features: OpenTelemetry Protocol (OTLP) JSON File Receiver Count Connector Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.3.2. New features and enhancements This update introduces the following enhancements: The following Technology Preview features reach General Availability: BearerTokenAuth Extension Kubernetes Attributes Processor Spanmetrics Connector You can use the instrumentation.opentelemetry.io/inject-sdk annotation with the Instrumentation custom resource to enable injection of the OpenTelemetry SDK environment variables into multi-container pods. 1.3.3. Removal notice In the Red Hat build of OpenTelemetry 3.4, the Logging Exporter has been removed from the Collector. As an alternative, you must use the Debug Exporter instead. Warning If you have the Logging Exporter configured, upgrading to the Red Hat build of OpenTelemetry 3.4 will cause crash loops. To avoid such issues, you must configure the Red Hat build of OpenTelemetry to use the Debug Exporter instead of the Logging Exporter before upgrading to the Red Hat build of OpenTelemetry 3.4. In the Red Hat build of OpenTelemetry 3.4, the Technology Preview Memory Ballast Extension has been removed. As an alternative, you can use the GOMEMLIMIT environment variable instead. 1.4. Release notes for Red Hat build of OpenTelemetry 3.3.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry 3.3.1 is based on the open source OpenTelemetry release 0.107.0. 1.4.1. 
Bug fixes This update introduces the following bug fix: Before this update, injection of the NGINX auto-instrumentation failed when copying the instrumentation libraries into the application container. With this update, the copy command is configured correctly, which fixes the issue. ( TRACING-4673 ) 1.5. Release notes for Red Hat build of OpenTelemetry 3.3 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. The Red Hat build of OpenTelemetry 3.3 is based on the open source OpenTelemetry release 0.107.0. 1.5.1. CVEs This release fixes the following CVEs: CVE-2024-6104 CVE-2024-42368 1.5.2. Technology Preview features This update introduces the following Technology Preview features: Group-by-Attributes Processor Transform Processor Routing Connector Prometheus Remote Write Exporter Exporting logs to the LokiStack log store Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.5.3. New features and enhancements This update introduces the following enhancements: Collector dashboard for the internal Collector metrics and analyzing Collector health and performance. ( TRACING-3768 ) Support for automatically reloading certificates in both the OpenTelemetry Collector and instrumentation. ( TRACING-4186 ) 1.5.4. Bug fixes This update introduces the following bug fixes: Before this update, the ServiceMonitor object was failing to scrape operator metrics due to missing permissions for accessing the metrics endpoint. With this update, this issue is fixed by creating the ServiceMonitor custom resource when operator monitoring is enabled. ( TRACING-4288 ) Before this update, the Collector service and the headless service were both monitoring the same endpoints, which caused duplication of metrics collection and ServiceMonitor objects. With this update, this issue is fixed by not creating the headless service. ( OBSDA-773 ) 1.6. Release notes for Red Hat build of OpenTelemetry 3.2.2 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.6.1. CVEs This release fixes the following CVEs: CVE-2023-2953 CVE-2024-28182 1.6.2. Bug fixes This update introduces the following bug fix: Before this update, secrets were perpetually generated on OpenShift Container Platform 4.16 because the operator tried to reconcile a new openshift.io/internal-registry-pull-secret-ref annotation for service accounts, causing a loop. With this update, the operator ignores this new annotation. ( TRACING-4435 ) 1.7. Release notes for Red Hat build of OpenTelemetry 3.2.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.7.1. CVEs This release fixes the following CVEs: CVE-2024-25062 Upstream CVE-2024-36129 1.7.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.2.1 is based on the open source OpenTelemetry release 0.102.1. 1.8. 
Release notes for Red Hat build of OpenTelemetry 3.2 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.8.1. Technology Preview features This update introduces the following Technology Preview features: Host Metrics Receiver OIDC Auth Extension Kubernetes Cluster Receiver Kubernetes Events Receiver Kubernetes Objects Receiver Load-Balancing Exporter Kubelet Stats Receiver Cumulative to Delta Processor Forward Connector Journald Receiver Filelog Receiver File Storage Extension Important Each of these features is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.8.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.2 is based on the open source OpenTelemetry release 0.100.0. 1.8.3. Deprecated functionality In Red Hat build of OpenTelemetry 3.2, use of empty values and null keywords in the OpenTelemetry Collector custom resource is deprecated and planned to be unsupported in a future release. Red Hat will provide bug fixes and support for this syntax during the current release lifecycle, but this syntax will become unsupported. As an alternative to empty values and null keywords, you can update the OpenTelemetry Collector custom resource to contain empty JSON objects as open-closed braces {} instead. 1.8.4. Bug fixes This update introduces the following bug fix: Before this update, the checkbox to enable Operator monitoring was not available in the web console when installing the Red Hat build of OpenTelemetry Operator. As a result, a ServiceMonitor resource was not created in the openshift-opentelemetry-operator namespace. With this update, the checkbox appears for the Red Hat build of OpenTelemetry Operator in the web console so that Operator monitoring can be enabled during installation. ( TRACING-3761 ) 1.9. Release notes for Red Hat build of OpenTelemetry 3.1.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.9.1. CVEs This release fixes CVE-2023-39326 . 1.10. Release notes for Red Hat build of OpenTelemetry 3.1 The Red Hat build of OpenTelemetry is provided through the Red Hat build of OpenTelemetry Operator. 1.10.1. Technology Preview features This update introduces the following Technology Preview feature: The target allocator is an optional component of the OpenTelemetry Operator that shards Prometheus receiver scrape targets across the deployed fleet of OpenTelemetry Collector instances. The target allocator provides integration with the Prometheus PodMonitor and ServiceMonitor custom resources. Important The target allocator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.10.2. New features and enhancements This update introduces the following enhancement: Red Hat build of OpenTelemetry 3.1 is based on the open source OpenTelemetry release 0.93.0. 1.11. Release notes for Red Hat build of OpenTelemetry 3.0 1.11.1. New features and enhancements This update introduces the following enhancements: Red Hat build of OpenTelemetry 3.0 is based on the open source OpenTelemetry release 0.89.0. The OpenShift distributed tracing data collection Operator is renamed as the Red Hat build of OpenTelemetry Operator . Support for the ARM architecture. Support for the Prometheus receiver for metrics collection. Support for the Kafka receiver and exporter for sending traces and metrics to Kafka. Support for cluster-wide proxy environments. The Red Hat build of OpenTelemetry Operator creates the Prometheus ServiceMonitor custom resource if the Prometheus exporter is enabled. The Operator enables the Instrumentation custom resource that allows injecting upstream OpenTelemetry auto-instrumentation libraries. 1.11.2. Removal notice In Red Hat build of OpenTelemetry 3.0, the Jaeger exporter has been removed. Bug fixes and support are provided only through the end of the 2.9 lifecycle. As an alternative to the Jaeger exporter for sending data to the Jaeger collector, you can use the OTLP exporter instead. 1.11.3. Bug fixes This update introduces the following bug fixes: Fixed support for disconnected environments when using the oc adm catalog mirror CLI command. 1.11.4. Known issues There is currently a known issue: Currently, the cluster monitoring of the Red Hat build of OpenTelemetry Operator is disabled due to a bug ( TRACING-3761 ). The bug is preventing the cluster monitoring from scraping metrics from the Red Hat build of OpenTelemetry Operator due to a missing label openshift.io/cluster-monitoring=true that is required for the cluster monitoring and service monitor object. 
Workaround You can enable the cluster monitoring as follows: Add the following label in the Operator namespace: oc label namespace openshift-opentelemetry-operator openshift.io/cluster-monitoring=true Create a service monitor, role, and role binding: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" rules: - apiGroups: - "" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring 1.12. Release notes for Red Hat build of OpenTelemetry 2.9.2 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9.2 is based on the open source OpenTelemetry release 0.81.0. 1.12.1. CVEs This release fixes CVE-2023-46234 . 1.12.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.13. Release notes for Red Hat build of OpenTelemetry 2.9.1 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9.1 is based on the open source OpenTelemetry release 0.81.0. 1.13.1. CVEs This release fixes CVE-2023-44487 . 1.13.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.14. Release notes for Red Hat build of OpenTelemetry 2.9 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.9 is based on the open source OpenTelemetry release 0.81.0. 1.14.1. New features and enhancements This release introduces the following enhancements for the Red Hat build of OpenTelemetry: Support OTLP metrics ingestion. The metrics can be forwarded and stored in the user-workload-monitoring via the Prometheus exporter. Support the Operator maturity Level IV, Deep Insights, which enables upgrading and monitoring of OpenTelemetry Collector instances and the Red Hat build of OpenTelemetry Operator. Report traces and metrics from remote clusters using OTLP or HTTP and HTTPS. Collect OpenShift Container Platform resource attributes via the resourcedetection processor. Support the managed and unmanaged states in the OpenTelemetryCollector custom resource. 1.14.2. Known issues There is currently a known issue: Currently, you must manually set Operator maturity to Level IV, Deep Insights. ( TRACING-3431 ) 1.15. Release notes for Red Hat build of OpenTelemetry 2.8 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.8 is based on the open source OpenTelemetry release 0.74.0. 1.15.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.16. Release notes for Red Hat build of OpenTelemetry 2.7 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.7 is based on the open source OpenTelemetry release 0.63.1. 1.16.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.17. Release notes for Red Hat build of OpenTelemetry 2.6 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.6 is based on the open source OpenTelemetry release 0.60. 1.17.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.18. Release notes for Red Hat build of OpenTelemetry 2.5 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.5 is based on the open source OpenTelemetry release 0.56. 1.18.1. New features and enhancements This update introduces the following enhancement: Support for collecting Kubernetes resource attributes to the Red Hat build of OpenTelemetry Operator. 1.18.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.19. Release notes for Red Hat build of OpenTelemetry 2.4 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.4 is based on the open source OpenTelemetry release 0.49. 1.19.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.20. Release notes for Red Hat build of OpenTelemetry 2.3 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.3.1 is based on the open source OpenTelemetry release 0.44.1. Red Hat build of OpenTelemetry 2.3.0 is based on the open source OpenTelemetry release 0.44.0. 1.20.1. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.21. Release notes for Red Hat build of OpenTelemetry 2.2 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. 
Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.2 is based on the open source OpenTelemetry release 0.42.0. 1.21.1. Technology Preview features The unsupported OpenTelemetry Collector components included in the 2.1 release are removed. 1.21.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.22. Release notes for Red Hat build of OpenTelemetry 2.1 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.1 is based on the open source OpenTelemetry release 0.41.1. 1.22.1. Technology Preview features This release introduces a breaking change to how to configure certificates in the OpenTelemetry custom resource file. With this update, the ca_file moves under tls in the custom resource, as shown in the following examples. CA file configuration for OpenTelemetry version 0.33 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" CA file configuration for OpenTelemetry version 0.41.1 spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" 1.22.2. Bug fixes This release addresses Common Vulnerabilities and Exposures (CVEs) and bug fixes. 1.23. Release notes for Red Hat build of OpenTelemetry 2.0 Important The Red Hat build of OpenTelemetry is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat build of OpenTelemetry 2.0 is based on the open source OpenTelemetry release 0.33.0. This release adds the Red Hat build of OpenTelemetry as a Technology Preview , which you install using the Red Hat build of OpenTelemetry Operator. Red Hat build of OpenTelemetry is based on the OpenTelemetry APIs and instrumentation. The Red Hat build of OpenTelemetry includes the OpenTelemetry Operator and Collector. You can use the Collector to receive traces in the OpenTelemetry or Jaeger protocol and send the trace data to the Red Hat build of OpenTelemetry. Other capabilities of the Collector are not supported at this time. 
The OpenTelemetry Collector allows developers to instrument their code with vendor agnostic APIs, avoiding vendor lock-in and enabling a growing ecosystem of observability tooling. 1.24. Getting support If you experience difficulty with a procedure described in this documentation, or with OpenShift Container Platform in general, visit the Red Hat Customer Portal . From the Customer Portal, you can: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in OpenShift Cluster Manager Hybrid Cloud Console . Insights provides details about issues and, if available, information on how to solve a problem. If you have a suggestion for improving this documentation or have found an error, submit a Jira issue for the most relevant documentation component. Please provide specific details, such as the section name and OpenShift Container Platform version. 1.25. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | [
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: opentelemetry-operator-controller-manager-metrics-service namespace: openshift-opentelemetry-operator spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token path: /metrics port: https scheme: https tlsConfig: insecureSkipVerify: true selector: matchLabels: app.kubernetes.io/name: opentelemetry-operator control-plane: controller-manager --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: otel-operator-prometheus namespace: openshift-opentelemetry-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: otel-operator-prometheus subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/red_hat_build_of_opentelemetry/otel_rn |
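The 2.1 release notes above describe a breaking change in how certificates are configured: ca_file moves under tls in the OpenTelemetry custom resource. The following shell sketch is not part of the release notes; it is one hedged way to audit existing OpenTelemetryCollector custom resources before upgrading, assuming the opentelemetrycollectors resource name used by the Operator and a cluster you can query with oc.
# List OpenTelemetryCollector instances in every namespace
oc get opentelemetrycollectors --all-namespaces
# Inspect one instance and show the lines around each ca_file entry
oc get opentelemetrycollector <name> -n <namespace> -o yaml | grep -n -B 2 'ca_file'
Any ca_file entry that is not preceded by a tls: key still uses the pre-2.1 layout and needs to be moved as shown in the CA file configuration examples above.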
Chapter 19. Tuning guidelines | Chapter 19. Tuning guidelines Review the following guidelines to tune AMQ Broker. 19.1. Tuning persistence Review the following information for tips on improving persistence performance. Persist messages to a file-based journal. Use a file-based journal for message persistence. AMQ Broker can also persist messages to a Java Database Connectivity (JDBC) database, but this has a performance cost when compared to using a file-based journal. Put the message journal on its own physical volume. One of the advantages of an append-only journal is that disk head movement is minimized. This advantage is lost if the disk is shared. When multiple processes, such as a transaction coordinator, databases, and other journals, read and write from the same disk, performance is impacted because the disk head must skip around between different files. If you are using paging or large messages, make sure that they are also put on separate volumes. Tune the journal-min-files parameter value. Set the journal-min-files parameter to the number of files that fits your average sustainable rate. If new files are created frequently in the journal data directory, meaning that much data is being persisted, you need to increase the minimum number of files that the journal maintains. This allows the journal to reuse existing data files rather than create new ones. Optimize the journal file size. Align the value of the journal-file-size parameter to the capacity of a cylinder on the disk. The default value of 10 MB should be enough on most systems. Use the asynchronous IO (AIO) journal type. For Linux operating systems, keep your journal type as AIO. AIO scales better than Java new I/O (NIO). Tune the journal-buffer-timeout parameter value. Increasing the value of the journal-buffer-timeout parameter results in increased throughput at the expense of latency. Tune the journal-max-io parameter value. If you are using AIO, you might be able to improve performance by increasing the journal-max-io parameter value. Do not change this value if you are using NIO. Tune the journal-pool-files parameter. Set the journal-pool-files parameter, which is the upper threshold of the journal file pool, to a number that is close to your maximum expected load. When required, the journal expands beyond the upper threshold, but shrinks to the threshold when possible. This allows reuse of files without consuming more disk space than required. If you see new files being created too often in the journal data directory, increase the journal-pool-files parameter. Increasing this parameter allows the journal to reuse more existing files instead of creating new files, which improves performance. Disable the journal-data-sync parameter if you do not require durability guarantees on journal writes. If you do not require guaranteed durability on journal writes in the event of a power failure, disable the journal-data-sync parameter and use a journal type of NIO or MAPPED for better performance. 19.2. Tuning Java Message Service (JMS) If you use the JMS API, review the following information for tips on how to improve performance. Disable the message ID. If you do not need message IDs, disable them by using the setDisableMessageID() method on the MessageProducer class. Setting the value to true eliminates the need to create a unique ID and decreases the size of the message. Disable the message timestamp. If you do not need message timestamps, disable them by using the setDisableMessageTimestamp() method on the MessageProducer class.
Setting the value to true eliminates the overhead of creating the timestamp and decreases the size of the message. Avoid using ObjectMessage . ObjectMessage is used to send a message that has a serialized object, meaning the body of the message, or payload, is sent over the wire as a stream of bytes. The Java serialized form of even small objects is quite large and takes up significant space on the wire. It is also slow when compared to custom marshalling techniques. Use ObjectMessage only if you cannot use one of the other message types, for example, if you do not know the type of the payload until runtime. Avoid AUTO_ACKNOWLEDGE . The choice of acknowledgment mode for a consumer impacts performance because of the additional overhead and traffic incurred by sending the acknowledgment message over the network. AUTO_ACKNOWLEDGE incurs this overhead because it requires that an acknowledgment is sent to the server for each message received on the client. If possible, use DUPS_OK_ACKNOWLEDGE , which acknowledges messages in a lazy manner, or CLIENT_ACKNOWLEDGE , meaning the client code will call a method to acknowledge the message. Or, batch up many acknowledgments with one acknowledge or commit in a transacted session. Avoid durable messages. By default, JMS messages are durable. If you do not need durable messages, set them to be non-durable. Durable messages incur additional overhead because they are persisted to storage. Use TRANSACTED_SESSION mode to send and receive messages in a single transaction. By batching messages in a single transaction, AMQ Broker requires only one network round trip on the commit, not on every send or receive. 19.3. Tuning transport settings Review the following information for tips on tuning transport settings. If your operating system supports TCP auto-tuning, as is the case with later versions of Linux, do not increase the TCP send and receive buffer sizes to try to improve performance. Setting the buffer sizes manually on a system that has auto-tuning can prevent auto-tuning from working and actually reduce broker performance. If your operating system does not support TCP auto-tuning and the broker is running on a fast machine and network, you might improve the broker performance by increasing the TCP send and receive buffer sizes. For more information, see Appendix A, Acceptor and Connector Configuration Parameters . If you expect many concurrent connections on your broker, or if clients are rapidly opening and closing connections, ensure that the user running the broker has permission to create enough file handles. The way you do this varies between operating systems. On Linux systems, you can increase the number of allowable open file handles in the /etc/security/limits.conf file. For example, add the lines: This example allows the serveruser user to open up to 20000 file handles. Set a value for the batchDelay netty TCP parameter and set the directDeliver netty TCP parameter to false to maximize throughput for very small messages. 19.4. Tuning the broker virtual machine Review the following information for tips on how to tune various virtual machine settings. Use the latest Java virtual machine for best performance. Allocate as much memory as possible to the server. AMQ Broker can run with low memory by using paging. However, you get improved performance if AMQ Broker can keep all queues in memory. The amount of memory you require depends on the size and number of your queues and the size and number of your messages.
Use the -Xms and -Xmx JVM arguments to set the available memory. Tune the heap size. During periods of high load, it is likely that AMQ Broker generates and destroys large numbers of objects, which can result in a build up of stale objects. This increases the risk of the broker running out of memory and causing a full garbage collection, which might introduce pauses and unintentional behaviour. To reduce this risk, ensure that the maximum heap size (-Xmx) for the JVM is set to at least five times the value of the global-max-size parameter. For example, if the broker is under high load and running with a global-max-size of 1 GB, set the maximum heap size to 5 GB. 19.5. Tuning other settings Review the following information for additional tips on improving performance. Use asynchronous send acknowledgements. If you need to send non-transactional, durable messages and do not need a guarantee that they have reached the server by the time the call to send() returns, do not set them to be sent blocking. Instead, use asynchronous send acknowledgements to get the send acknowledgements returned in a separate stream. However, in the case of a server crash, some messages might be lost. Use pre-acknowledge mode. With pre-acknowledge mode, messages are acknowledged before they are sent to the client. This reduces the amount of acknowledgment traffic on the wire. However, if that client crashes, messages are not redelivered if the client reconnects. Disable security. A small performance improvement results from disabling security by setting the security-enabled parameter to false . Disable persistence. You can turn off message persistence by setting the persistence-enabled parameter to false . Sync transactions lazily. Setting the journal-sync-transactional parameter to false provides better performance when persisting transactions, at the expense of some possibility of loss of transactions on failure. Sync non-transactional lazily. Setting the journal-sync-non-transactional parameter to false provides better performance when persisting non-transactions, at the expense of some possibility of loss of durable messages on failure. Send messages non-blocking. To avoid waiting for a network round trip for every message sent, set the block-on-durable-send and block-on-non-durable-send parameters to false if you are using Java Messaging Service (JMS) and Java Naming and Directory Interface (JNDI). Or, set them directly on the ServerLocator by calling the setBlockOnDurableSend() and setBlockOnNonDurableSend() methods. Optimize the consumer-window-size . If you have very fast consumers, you can increase the value of the consumer-window-size parameter to effectively disable consumer flow control. Use the core API instead of the JMS API. JMS operations must be translated into core operations before the server can handle them, resulting in lower performance than when you use the core API. When using the core API, try to use methods that take SimpleString as much as possible. SimpleString , unlike java.lang.String, does not require copying before it is written to the wire. Therefore, if you reuse SimpleString instances between calls, you can avoid some unnecessary copying. Note that the core API is not portable to other brokers. 19.6. Avoiding anti patterns Reuse connections, sessions, consumers, and producers where possible. The most common messaging anti-pattern is the creation of a new connection, session, and producer for every message sent or consumed. 
These objects take time to create and might involve several network round trips, which is a poor use of resources. Note Some popular libraries such as the Spring JMS Template use these anti-patterns. If you are using the Spring JMS Template, you might see poor performance. The Spring JMS Template can be used safely only on an application server which caches JMS sessions, for example, using Java Connector Architecture, and then only for sending messages. It cannot be used safely to consume messages synchronously, even on an application server. Avoid fat messages. Verbose formats such as XML take up significant space on the wire and performance suffers as a result. Avoid XML in message bodies if you can. Do not create temporary queues for each request. This common anti-pattern involves the temporary queue request-response pattern. With the temporary queue request-response pattern, a message is sent to a target, and a reply-to header is set with the address of a local temporary queue. When the recipient receives the message, they process it and then send back a response to the address specified in the reply-to header. A common mistake made with this pattern is to create a new temporary queue on each message sent, which drastically reduces performance. Instead, the temporary queue should be reused for many requests. Do not use message-driven beans unless it is necessary. Using message-driven beans to consume messages is slower than consuming messages by using a simple JMS message consumer. | [
"serveruser soft nofile 20000 serveruser hard nofile 20000"
] | https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/assembly-br-tuning_guidelines_configuring |
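The JMS tuning advice in the chapter above names several producer-side settings: disabling message IDs and timestamps, avoiding durable messages where possible, and batching work in a transacted session. The following Java sketch is illustrative only and is not taken from the AMQ Broker documentation; it assumes a JMS 2.0 client, a ConnectionFactory obtained elsewhere (for example from JNDI), and a hypothetical exampleQueue destination.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class TunedProducerExample {

    // Sends a batch of messages with the low-overhead producer settings described above.
    public static void sendBatch(ConnectionFactory factory) throws Exception {
        try (Connection connection = factory.createConnection()) {
            // Transacted session: all sends are confirmed by a single commit round trip.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(session.createQueue("exampleQueue"));

            producer.setDisableMessageID(true);            // skip generating a unique ID per message
            producer.setDisableMessageTimestamp(true);     // skip creating a timestamp per message
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // only if durability is not required

            for (int i = 0; i < 100; i++) {
                producer.send(session.createTextMessage("payload-" + i));
            }
            session.commit(); // one network round trip commits the whole batch
        }
    }
}
Whether non-persistent delivery and disabled timestamps are acceptable depends on your durability and auditing requirements, so treat each setting as a deliberate trade-off rather than a default.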
Chapter 7. Advisories related to this release | Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bug fixes, and CVE fixes included in this release. RHSA-2024:2693 RHSA-2024:2694 | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_4_release_notes/errata
Chapter 5. Configuring compliance policy deployment methods | Chapter 5. Configuring compliance policy deployment methods Use one of the following procedures to configure Satellite for the method that you have selected to deploy compliance policies. You will select one of these methods when you later create a compliance policy . Procedure for Ansible deployment Import the theforeman.foreman_scap_client Ansible role. For more information, see Managing configurations using Ansible integration . Assign the created policy and the theforeman.foreman_scap_client Ansible role to a host or host group. To trigger the deployment, run the Ansible role on the host or host group either manually, or set up a recurring job by using remote execution for regular policy updates. For more information, see Configuring and Setting Up Remote Jobs in Managing hosts . Procedure for Puppet deployment Ensure Puppet is enabled. Ensure the Puppet agent is installed on hosts. Import the Puppet environment that contains the foreman_scap_client Puppet module. For more information, see Managing configurations using Puppet integration . Assign the created policy and the foreman_scap_client Puppet class to a host or host group. Puppet triggers the deployment on its next regular run, or you can run Puppet manually. Puppet runs every 30 minutes by default. Procedure for manual deployment For the manual deployment method, no additional Satellite configuration is required. For information on manual deployment, see How to set up OpenSCAP Policies using Manual Deployment option in the Red Hat Knowledgebase . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_security_compliance/configuring-compliance-policy-deployment-methods_security-compliance
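For the Ansible deployment method above, the role can also be applied with a plain playbook when you are not driving it through Satellite remote execution. The playbook below is a minimal sketch, not taken from the Satellite documentation: the compliance_hosts inventory group is a hypothetical placeholder, and only the theforeman.foreman_scap_client role name comes from the procedure.
---
- name: Deploy the SCAP client configuration to compliance hosts
  hosts: compliance_hosts        # hypothetical inventory group, replace with your own
  become: true
  roles:
    - theforeman.foreman_scap_client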
B.70. polkit | B.70. polkit B.70.1. RHSA-2011:0455 - Important: polkit security update Updated polkit packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. PolicyKit is a toolkit for defining and handling authorizations. CVE-2011-1485 A race condition flaw was found in the PolicyKit pkexec utility and polkitd daemon. A local user could use this flaw to appear as a privileged user to pkexec, allowing them to execute arbitrary commands as root by running those commands with pkexec. Red Hat would like to thank Neel Mehta of Google for reporting this issue. All polkit users should upgrade to these updated packages, which contain backported patches to correct this issue. The system must be rebooted for this update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/polkit |
Chapter 3. AWS STS and ROSA with HCP explained | Chapter 3. AWS STS and ROSA with HCP explained Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) uses an AWS (Amazon Web Services) Security Token Service (STS) for AWS Identity Access Management (IAM) to obtain the necessary credentials to interact with resources in your AWS account. 3.1. AWS STS credential method As part of ROSA with HCP, Red Hat must be granted the necessary permissions to manage infrastructure resources in your AWS account. ROSA with HCP grants the cluster's automation software limited, short-term access to resources in your AWS account. The STS method uses predefined roles and policies to grant temporary, least-privilege permissions to IAM roles. The credentials typically expire an hour after being requested. Once expired, they are no longer recognized by AWS and no longer have account access from API requests made with them. For more information, see the AWS documentation . AWS IAM STS roles must be created for each ROSA with HCP cluster. The ROSA command line interface (CLI) ( rosa ) manages the STS roles and helps you attach the ROSA-specific, AWS-managed policies to each role. The CLI provides the commands and files to create the roles, attach the AWS-managed policies, and an option to allow the CLI to automatically create the roles and attach the policies. 3.2. AWS STS security Security features for AWS STS include: An explicit and limited set of policies that the user creates ahead of time. The user can review every requested permission needed by the platform. The service cannot do anything outside of those permissions. There is no need to rotate or revoke credentials. Whenever the service needs to perform an action, it obtains credentials that expire in one hour or less. Credential expiration reduces the risks of credentials leaking and being reused. ROSA with HCP grants cluster software components least-privilege permissions with short-term security credentials to specific and segregated IAM roles. The credentials are associated with IAM roles specific to each component and cluster that makes AWS API calls. This method aligns with principles of least-privilege and secure practices in cloud service resource management. 3.3. Components of ROSA with HCP AWS infrastructure - The infrastructure required for the cluster including the Amazon EC2 instances, Amazon EBS storage, and networking components. See AWS compute types to see the supported instance types for compute nodes and provisioned AWS infrastructure for more information on cloud resource configuration. AWS STS - A method for granting short-term, dynamic tokens to provide users the necessary permissions to temporarily interact with your AWS account resources. OpenID Connect (OIDC) - A mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from AWS IAM STS to make the required API calls. Roles and policies - The roles and policies used by ROSA with HCP can be divided into account-wide roles and policies and Operator roles and policies. The policies determine the allowed actions for each of the roles. See About IAM resources for more details about the individual roles and policies. See ROSA IAM role resource for more details about trust policies. 
The account-wide roles are: <prefix>-HCP-ROSA-Worker-Role <prefix>-HCP-ROSA-Support-Role <prefix>-HCP-ROSA-Installer-Role The account-wide AWS-managed policies are: ROSAInstallerPolicy ROSAWorkerInstancePolicy ROSASRESupportPolicy ROSAIngressOperatorPolicy ROSAAmazonEBSCSIDriverOperatorPolicy ROSACloudNetworkConfigOperatorPolicy ROSAControlPlaneOperatorPolicy ROSAImageRegistryOperatorPolicy ROSAKMSProviderPolicy ROSAKubeControllerPolicy ROSAManageSubscription ROSANodePoolManagementPolicy Note Certain policies are used by the cluster Operator roles, listed below. The Operator roles are created in a second step because they are dependent on an existing cluster name and cannot be created at the same time as the account-wide roles. The Operator roles are: <operator_role_prefix>-openshift-cluster-csi-drivers-ebs-cloud-credentials <operator_role_prefix>-openshift-cloud-network-config-controller-cloud-credentials <operator_role_prefix>-openshift-machine-api-aws-cloud-credentials <operator_role_prefix>-openshift-cloud-credential-operator-cloud-credentials <operator_role_prefix>-openshift-image-registry-installer-cloud-credentials <operator_role_prefix>-openshift-ingress-operator-cloud-credentials Trust policies are created for each account-wide role and each Operator role. 3.4. Deploying a ROSA with HCP cluster Deploying a ROSA with HCP cluster follows the following steps: You create the account-wide roles. You create the Operator roles. Red Hat uses AWS STS to send the required permissions to AWS that allow AWS to create and attach the corresponding AWS-managed Operator policies. You create the OIDC provider. You create the cluster. During the cluster creation process, the ROSA CLI creates the required JSON files for you and outputs the commands you need. If desired, the ROSA CLI can also run the commands for you. The ROSA CLI can automatically create the roles for you, or you can manually create them by using the --mode manual or --mode auto flags. For further details about deployment, see Creating a cluster with customizations . 3.5. ROSA with HCP workflow The user creates the required account-wide roles. During role creation, a trust policy, known as a cross-account trust policy, is created which allows a Red Hat-owned role to assume the roles. Trust policies are also created for the EC2 service, which allows workloads on EC2 instances to assume roles and obtain credentials. AWS assigns a corresponding permissions policy to each role. After the account-wide roles and policies are created, the user can create a cluster. Once cluster creation is initiated, the user creates the Operator roles so that cluster Operators can make AWS API calls. These roles are then assigned to the corresponding permission policies that were created earlier and a trust policy with an OIDC provider. The Operator roles differ from the account-wide roles in that they ultimately represent the pods that need access to AWS resources. Because a user cannot attach IAM roles to pods, they must create a trust policy with an OIDC provider so that the Operator, and therefore the pods, can access the roles they need. Once the user assigns the roles to the corresponding policy permissions, the final step is creating the OIDC provider. When a new role is needed, the workload currently using the Red Hat role will assume the role in the AWS account, obtain temporary credentials from AWS STS, and begin performing the actions using API calls within the user's AWS account as permitted by the assumed role's permissions policy. 
The credentials are temporary and have a maximum duration of one hour. Operators use the following process to obtain the requisite credentials to perform their tasks. Each Operator is assigned an Operator role, a permissions policy, and a trust policy with an OIDC provider. The Operator will assume the role by passing a JSON web token that contains the role and a token file ( web_identity_token_file ) to the OIDC provider, which then authenticates the signed key with a public key. The public key is created during cluster creation and stored in an S3 bucket. The Operator then confirms that the subject in the signed token file matches the role in the role trust policy which ensures that the OIDC provider can only obtain the allowed role. The OIDC provider then returns the temporary credentials to the Operator so that the Operator can make AWS API calls. For a visual representation, see the following diagram: | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/about/cloud-experts-rosa-hcp-sts-explained |
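Section 3.5 above describes Operators assuming IAM roles through a trust policy tied to the cluster's OIDC provider. The JSON below shows the generic shape of such a trust policy for illustration only; it is not copied from the ROSA documentation, every placeholder in angle brackets is an assumption, and the policies that the rosa CLI actually generates may differ in detail. JSON does not allow comments, so the explanation stays here: the Federated principal points at the cluster's OIDC provider, the action is sts:AssumeRoleWithWebIdentity, and the condition restricts the role to a specific service account subject.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<aws_account_id>:oidc-provider/<oidc_provider_url>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<oidc_provider_url>:sub": "system:serviceaccount:<operator_namespace>:<operator_service_account>"
        }
      }
    }
  ]
}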
Networking Guide | Networking Guide Red Hat Enterprise Linux 7 Configuring and managing networks, network interfaces, and network services in RHEL 7 Marc Muehlfeld Red Hat Customer Content Services [email protected] Ioanna Gkioka Red Hat Customer Content Services Mirek Jahoda Red Hat Customer Content Services Jana Heves Red Hat Customer Content Services Stephen Wadeley Red Hat Customer Content Services Christian Huffman Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/index |
Chapter 3. Working with container registries | Chapter 3. Working with container registries A container image registry is a repository or collection of repositories for storing container images and container-based application artifacts. The /etc/containers/registries.conf file is a system-wide configuration file containing the container image registries that can be used by the various container tools such as Podman, Buildah, and Skopeo. If the container image given to a container tool is not fully qualified, then the container tool references the registries.conf file. Within the registries.conf file, you can specify aliases for short names, granting administrators full control over where images are pulled from when not fully qualified. For example, the podman pull example.com/example_image command pulls a container image from the example.com registry to your local system as specified in the registries.conf file . 3.1. Container registries A container registry is a repository or collection of repositories for storing container images and container-based application artifacts. The registries that Red Hat provides are: registry.redhat.io (requires authentication) registry.access.redhat.com (requires no authentication) registry.connect.redhat.com (holds Red Hat Partner Connect program images) To get container images from a remote registry, such as Red Hat's own container registry, and add them to your local system, use the podman pull command: where <registry>[:<port>]/[<namespace>/]<name>:<tag> is the name of the container image. For example, the registry.redhat.io/ubi9/ubi container image is identified by: Registry server ( registry.redhat.io ) Namespace ( ubi9 ) Image name ( ubi ) If there are multiple versions of the same image, add a tag to explicitly specify the image name. By default, Podman uses the :latest tag, for example ubi9/ubi:latest . Some registries also use <namespace> to distinguish between images with the same <name> owned by different users or organizations. For example: Namespace Examples ( <namespace> / <name> ) organization redhat/kubernetes , google/kubernetes login (user name) alice/application , bob/application role devel/database , test/database , prod/database For details on the transition to registry.redhat.io, see Red Hat Container Registry Authentication . Before you can pull containers from registry.redhat.io, you need to authenticate using your RHEL Subscription credentials. 3.2. Configuring container registries You can display the container registries using the podman info --format command: Note The podman info command is available in Podman 4.0.0 or later. You can edit the list of container registries in the registries.conf configuration file. As a root user, edit the /etc/containers/registries.conf file to change the default system-wide search settings. As a user, create the USDHOME/.config/containers/registries.conf file to override the system-wide settings. By default, the podman pull and podman search commands search for container images from registries listed in the unqualified-search-registries list in the given order. Configuring a local container registry You can configure a local container registry without the TLS verification. You have two options on how to disable TLS verification. First, you can use the --tls-verify=false option in Podman. Second, you can set insecure=true in the registries.conf file: Blocking a registry, namespace, or image You can define registries the local system is not allowed to access. 
You can block a specific registry by setting blocked=true . You can also block a namespace by setting the prefix to prefix="registry.example.org/namespace" . For example, pulling the image using the podman pull registry. example.org/example/image:latest command will be blocked, because the specified prefix is matched. Note prefix is optional, default value is the same as the location value. You can block a specific image by setting prefix="registry.example.org/namespace/image" . Mirroring registries You can set a registry mirror in cases you cannot access the original registry. For example, you cannot connect to the internet, because you work in a highly-sensitive environment. You can specify multiple mirrors that are contacted in the specified order. For example, when you run podman pull registry.example.com/myimage:latest command, the mirror-1.com is tried first, then mirror-2.com . Additional resources How to manage Linux container registries podman-pull and podman-info man pages on your system 3.3. Searching for container images Using the podman search command you can search selected container registries for images. You can also search for images in the Red Hat Container Catalog . The Red Hat Container Registry includes the image description, contents, health index, and other information. Note The podman search command is not a reliable way to determine the presence or existence of an image. The podman search behavior of the v1 and v2 Docker distribution API is specific to the implementation of each registry. Some registries may not support searching at all. Searching without a search term only works for registries that implement the v2 API. The same holds for the docker search command. To search for the postgresql-10 images in the quay.io registry, follow the steps. Prerequisites The container-tools meta-package is installed. The registry is configured. Procedure Authenticate to the registry: Search for the image: To search for a particular image on a specific registry, enter: Alternatively, to display all images provided by a particular registry, enter: To search for the image name in all registries, enter: To display the full descriptions, pass the --no-trunc option to the command. Additional resources podman-search man page on your system 3.4. Pulling images from registries Use the podman pull command to get the image to your local system. Prerequisites The container-tools meta-package is installed. Procedure Log in to the registry.redhat.io registry: Pull the registry.redhat.io/ubi9/ubi container image: Verification List all images pulled to your local system: Additional resources podman-pull man page on your system 3.5. Configuring short-name aliases Red Hat recommends always to pull an image by its fully-qualified name. However, it is customary to pull images by short names. For example, you can use ubi9 instead of registry.access.redhat.com/ubi9:latest . The registries.conf file allows to specify aliases for short names, giving administrators full control over where images are pulled from. Aliases are specified in the [aliases] table in the form "name" = "value" . You can see the lists of aliases in the /etc/containers/registries.conf.d directory. Red Hat ships a set of aliases in this directory. For example, podman pull ubi9 directly resolves to the right image, that is registry.access.redhat.com/ubi9:latest . 
For example: The short-names modes are: enforcing : If no matching alias is found during the image pull, Podman prompts the user to choose one of the unqualified-search registries. If the selected image is pulled successfully, Podman automatically records a new short-name alias in the USDHOME/.cache/containers/short-name-aliases.conf file (rootless user) or in the /var/cache/containers/short-name-aliases.conf (root user). If the user cannot be prompted (for example, stdin or stdout are not a TTY), Podman fails. Note that the short-name-aliases.conf file has precedence over the registries.conf file if both specify the same alias. permissive : Similar to enforcing mode, but Podman does not fail if the user cannot be prompted. Instead, Podman searches in all unqualified-search registries in the given order. Note that no alias is recorded. disabled : All unqualified-search registries are tried in a given order, no alias is recorded. Note Red Hat recommends using fully qualified image names including registry, namespace, image name, and tag. When using short names, there is always an inherent risk of spoofing. Add registries that are trusted, that is, registries that do not allow unknown or anonymous users to create accounts with arbitrary names. For example, a user wants to pull the example container image from example.registry.com registry . If example.registry.com is not first in the search list, an attacker could place a different example image at a registry earlier in the search list. The user would accidentally pull and run the attacker image rather than the intended content. Additional resources Container image short names in Podman | [
"podman pull <registry>[:<port>]/[<namespace>/]<name>:<tag>",
"podman info -f json | jq '.registries[\"search\"]' [ \"registry.access.redhat.com\", \"registry.redhat.io\", \"docker.io\" ]",
"unqualified-search-registries = [\"registry.access.redhat.com\", \"registry.redhat.io\", \"docker.io\"] short-name-mode = \"enforcing\"",
"[[registry]] location=\"localhost:5000\" insecure=true",
"[[registry]] location = \"registry.example.org\" blocked = true",
"[[registry]] location = \"registry.example.org\" prefix=\"registry.example.org/namespace\" blocked = true",
"[[registry]] location = \"registry.example.org\" prefix=\"registry.example.org/namespace/image\" blocked = true",
"[[registry]] location=\"registry.example.com\" [[registry.mirror]] location=\"mirror-1.com\" [[registry.mirror]] location=\"mirror-2.com\"",
"podman login quay.io",
"podman search quay.io/postgresql-10 INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED redhat.io registry.redhat.io/rhel8/postgresql-10 This container image ... 0 redhat.io registry.redhat.io/rhscl/postgresql-10-rhel7 PostgreSQL is an ... 0",
"podman search quay.io/",
"podman search postgresql-10",
"podman login registry.redhat.io Username: <username> Password: <password> Login Succeeded!",
"podman pull registry.redhat.io/ubi9/ubi",
"podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi9/ubi latest 3269c37eae33 7 weeks ago 208 MB",
"unqualified-search-registries=[\"registry.fedoraproject.org\", \"quay.io\"] [aliases] \"fedora\"=\"registry.fedoraproject.org/fedora\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/working-with-container-registries_building-running-and-managing-containers |
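As noted above, a rootless user can override the system-wide registry settings with a per-user $HOME/.config/containers/registries.conf file, but the chapter only shows the system-wide file. The following shell sketch is illustrative and assumes Podman and jq are installed; the narrowed search list is an example policy, not a recommendation from the documentation.
# Create a per-user override that limits short-name searches to the Red Hat registries
mkdir -p $HOME/.config/containers
cat > $HOME/.config/containers/registries.conf <<'EOF'
# Per-user override: search only trusted registries for unqualified image names
unqualified-search-registries = ["registry.access.redhat.com", "registry.redhat.io"]
EOF
# Confirm the effective search order
podman info -f json | jq '.registries["search"]'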
19.4. Tools and Performance | 19.4. Tools and Performance Resource Management and Linux Containers Guide The Resource Management and Linux Containers Guide documents tools and techniques for managing system resources and deploying LXC application containers on Red Hat Enterprise Linux 7. Performance Tuning Guide The Performance Tuning Guide documents how to optimize subsystem throughput in Red Hat Enterprise Linux 7. Developer Guide The Developer Guide describes the different features and utilities that make Red Hat Enterprise Linux 7 an ideal enterprise platform for application development. SystemTap Beginners Guide The SystemTap Beginners Guide provides basic instructions on how to use SystemTap to monitor different subsystems of Red Hat Enterprise Linux in finer detail. SystemTap Reference The SystemTap Tapset Reference guide describes the most common tapset definitions users can apply to SystemTap scripts. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-documentation-tools_and_performance |
Chapter 24. Troubleshooting Installation on IBM System z | Chapter 24. Troubleshooting Installation on IBM System z This section discusses some common installation problems and their solutions. For debugging purposes, anaconda logs installation actions into files in the /tmp directory. These files include: /tmp/anaconda.log general anaconda messages /tmp/program.log all external programs run by anaconda /tmp/storage.log extensive storage module information /tmp/yum.log yum package installation messages /tmp/syslog hardware-related system messages If the installation fails, the messages from these files are consolidated into /tmp/anaconda-tb- identifier , where identifier is a random string. All of the files above reside in the installer's ramdisk and are thus volatile. To make a permanent copy, copy those files to another system on the network using scp on the installation image (not the other way round). 24.1. You Are Unable to Boot Red Hat Enterprise Linux 24.1.1. Is Your System Displaying Signal 11 Errors? A signal 11 error, commonly known as a segmentation fault , means that the program accessed a memory location that was not assigned to it. A signal 11 error may be due to a bug in one of the software programs that is installed, or faulty hardware. Ensure that you have the latest installation updates and images from Red Hat. Review the online errata to see if newer versions are available. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-trouble-s390 |
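The chapter above recommends copying the installer log files to another system with scp before rebooting, because they live in the installer's volatile ramdisk. A minimal sketch, run from a shell on the installation system, where the target user, host, and directory are placeholders:
# Push the volatile installer logs to another system before the ramdisk is lost
scp /tmp/anaconda.log /tmp/program.log /tmp/storage.log /tmp/yum.log /tmp/syslog \
    user@logs.example.com:/tmp/install-logs/
If the installation has already failed, also copy the consolidated /tmp/anaconda-tb-* file mentioned above.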
13.4. Deleting and Removing Volumes | 13.4. Deleting and Removing Volumes This section shows how to delete a disk volume from a block-based storage pool using the virsh vol-delete command. In this example, the volume is volume1 and the storage pool is guest_images . | [
"virsh vol-delete --pool guest_images volume1 Vol volume1 deleted"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-storage_volumes-deleting_volumes |
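As a companion to the vol-delete example above, the following sketch lists the volumes in the pool before and after the deletion so you can confirm the result; it assumes the same guest_images pool and volume1 volume used in the section.
# List the volumes in the pool, delete one, and confirm it is gone
virsh vol-list guest_images
virsh vol-delete --pool guest_images volume1
virsh vol-list guest_images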
Chapter 16. Using declarative configuration | Chapter 16. Using declarative configuration With declarative configuration, you can update configurations by storing them in files in repositories and apply them to the system. Declarative configuration is useful, for example, if you are using a GitOps workflow. You can currently use declarative configuration in Red Hat Advanced Cluster Security for Kubernetes (RHACS) for authentication and authorization resources such as authentication providers, roles, permission sets, and access scopes. To use declarative configuration, you create YAML files that contain configuration information about authentication and authorization resources. These files, or configurations, are added to RHACS by using a mount point during Central installation. See the installation documentation in the "Additional resources" section for more information on configuring mount points when installing RHACS. The configuration files used with declarative configuration are stored in config maps or secrets, depending on the type of resource. Store configurations for authentication providers in a secret for greater security. You can store other configurations in config maps. A single config map or secret can contain more than one configuration of multiple resource types. This allows you to limit the number of volume mounts for the Central instance. 16.1. Restrictions for resources created from declarative configuration Because resources can reference other resources (for example, a role can reference both a permission set and access scope), the following restrictions for references apply: A declarative configuration can only reference a resource that is either also created declaratively or a system RHACS resource; for example, a resource such as the Admin or Analyst system role or permission set. All references between resources use names to identify the resource; therefore, all names within the same resource type must be unique. Resources created from declarative configuration can only be modified or deleted by altering the declarative configuration files. You cannot change these resources by using the RHACS portal or the API. 16.2. Creating declarative configurations Use roxctl to create the YAML files that store the configurations, create a config map from the files, and apply the config map. Prerequisites You have added the mount for the config map or secret during the installation of Central. In this example, the config map is called "declarative-configs". See the installation documentation listed in the "Additional resources" section for more information. Procedure Create the permission set by entering the following command. This example creates a permission set named "restricted" and is saved as the permission-set.yaml file. It sets read and write access for the Administration resource and read access to the Access resource. USD roxctl declarative-config create permission-set \ --name="restricted" \ --description="Restriction permission set that only allows \ access to Administration and Access resources" \ --resource-with-access=Administration=READ_WRITE_ACCESS \ --resource-with-access=Access=READ_ACCESS > permission-set.yaml Create the role that allows access to the Administration and Access resources by entering the following command. This example creates a role named "restricted" and is saved as the role.yaml file. 
USD roxctl declarative-config create role \ --name="restricted" \ --description="Restricted role that only allows access to Administration and Access" \ --permission-set="restricted" \ --access-scope="Unrestricted" > role.yaml Create a config map from the two YAML files that were created in the earlier steps by entering the following command. This example creates the declarative-configurations config map. USD kubectl create configmap declarative-configurations \ 1 --from-file permission-set.yaml --from-file role.yaml \ -o yaml --namespace=stackrox > declarative-configs.yaml 1 For OpenShift Container Platform, use oc create . Apply the config map by entering the following command: USD kubectl apply -f declarative-configs.yaml 1 1 For OpenShift Container Platform, use oc apply . After you apply the config map, configuration information extracted from Central creates the resources. Note Although the watch interval is 5 seconds, as described in the following paragraph, there can be a delay in propagating changes from the config map to the Central mount. You can configure the following intervals to specify how declarative configurations interact with Central: Configuration watch interval: The interval for Central to check for changes is every 5 seconds. You can configure this interval by using the ROX_DECLARATIVE_CONFIG_WATCH_INTERVAL environment variable. Reconciliation interval: By default, declarative configuration reconciliation with Central occurs every 20 seconds. You can configure this interval by using the ROX_DECLARATIVE_CONFIG_RECONCILE_INTERVAL environment variable. After creating authentication and authorization resources by using declarative configuration, you can view them in the Access Control page in the RHACS web portal. The Origin field indicates Declarative if the resource was created by using declarative configuration. Note You cannot edit resources created from declarative configurations in the RHACS web portal. You must edit the configuration files directly to make changes to these resources. You can view the status of declarative configurations by navigating to Platform Configuration System Health and scrolling to the Declarative configuration section. 16.3. Declarative configuration examples You can create declarative configurations by using the following examples as a guide. Use the roxctl declarative-config lint command to verify that your configurations are valid. 16.3.1. Declarative configuration authentication provider example Declarative configuration authentication provider example name: A sample auth provider minimumRole: Analyst 1 uiEndpoint: central.custom-domain.com:443 2 extraUIEndpoints: 3 - central-alt.custom-domain.com:443 groups: 4 - key: email 5 value: [email protected] role: Admin 6 - key: groups value: reviewers role: Analyst requiredAttributes: 7 - key: org_id value: "12345" claimMappings: 8 - path: org_id value: my_org_id oidc: 9 issuer: sample.issuer.com 10 mode: auto 11 clientID: CLIENT_ID clientSecret: CLIENT_SECRET clientSecret: CLIENT_SECRET iap: 12 audience: audience saml: 13 spIssuer: sample.issuer.com metadataURL: sample.provider.com/metadata saml: 14 spIssuer: sample.issuer.com cert: | 15 ssoURL: saml.provider.com idpIssuer: idp.issuer.com userpki: certificateAuthorities: | 16 certificate 17 openshift: 18 enable: true 1 Identifies the minimum role that will be assigned by default to any user logging in. If left blank, the value is None . 2 Use the user interface endpoint of your Central instance. 
3 If your Central instance is exposed to different endpoints, specify them here. 4 These fields map users to specific roles, based on their attributes. 5 The key can be any claim returned from the authentication provider. 6 Identifies the role that the users are given. You can use a default role or a declaratively-created role. 7 Optional: Use these fields if attributes returned from the authentication provider are required; for example, if the audience is limited to a specific organization or group. 8 Optional: Use these fields if claims returned from the identity provider should be mapped to custom claims. 9 This section is required only for OpenID Connect (OIDC) authentication providers. 10 Identifies the expected issuer for the token. 11 Identifies the OIDC callback mode. Possible values are auto , post , query , and fragment . The preferred value is auto . 12 This section is required only for Google Identity-Aware Proxy (IAP) authentication providers. 13 This section is required only for Security Assertion Markup Language (SAML) 2.0 dynamic configuration authentication providers. 14 This section is required only for SAML 2.0 static configuration authentication providers. 15 Include the certificate in Privacy Enhanced Mail (PEM) format. 16 This section is required only for authentication with user certificates. 17 Include the certificate in PEM format. 18 This section is required only for OpenShift Auth authentication providers. 16.3.2. Declarative configuration permission set example Declarative configuration permission set example name: A sample permission set description: A sample permission set created declaratively resources: - resource: Integration 1 access: READ_ACCESS 2 - resource: Administration access: READ_WRITE_ACCESS 1 For a full list of supported resources, go to Access Control Permission Sets . 2 Access can be either READ_ACCESS or READ_WRITE_ACCESS . 16.3.3. Declarative configuration access scope example Declarative configuration access scope example name: A sample access scope description: A sample access scope created declaratively rules: included: - cluster: secured-cluster-A 1 namespaces: - namespaceA - cluster: secured-cluster-B 2 clusterLabelSelectors: - requirements: - requirements: - key: kubernetes.io/metadata.name operator: IN 3 values: - production - staging - environment 1 Identifies a cluster where only specific namespaces are included within the access scope. 2 Identifies a cluster where all namespaces are included within the access scope. 3 Identifies the Operator to use for the label selection. Valid values are IN , NOT_IN , EXISTS , and NOT_EXISTS . 16.3.4. Declarative configuration role example Declarative configuration role example name: A sample role description: A sample role created declaratively permissionSet: A sample permission set 1 accessScope: Unrestricted 2 1 Name of the permission set; can be either one of the system permission sets or a declaratively-created permission set. 2 Name of the access scope; can be either one of the system access scopes or a declaratively-created access scope. 16.4. Troubleshooting declarative configuration You can use the error messages displayed in the Declarative configuration section of the Platform Configuration System Health page to help in troubleshooting. The roxctl declarative-config command also includes a lint option to validate the configuration file and help you detect errors. 
The error messages displayed in the Declarative configuration section of the Platform Configuration System Health page provide information about issues with declarative configurations. Problems with declarative configurations can be caused by the following conditions: The format of the configuration file is not in valid YAML. The configuration file contains invalid values, such as invalid access within a permission set. Invalid storage constraints exist, such as resource names are not unique or the configuration contains invalid references to a resource. To validate configuration files, check for errors in configuration files, and make sure that there are no invalid storage constraints when creating and updating configuration files, use the roxctl declarative-config lint command. To troubleshoot a storage constraint during deletion, check if the resource has been marked as Declarative Orphaned . This indicates that the declarative configuration referenced by a resource was deleted (for example, if the declarative configuration for a permission set that was referenced by a role was deleted). To correct this error, edit the resource to point to a new permission set, or restore the declarative configuration that was deleted. 16.5. Additional resources Install Central using Helm charts with customizations (Red Hat OpenShift) Install Central using Helm charts with customizations (other Kubernetes platforms) | [
"roxctl declarative-config create permission-set --name=\"restricted\" --description=\"Restriction permission set that only allows access to Administration and Access resources\" --resource-with-access=Administration=READ_WRITE_ACCESS --resource-with-access=Access=READ_ACCESS > permission-set.yaml",
"roxctl declarative-config create role --name=\"restricted\" --description=\"Restricted role that only allows access to Administration and Access\" --permission-set=\"restricted\" --access-scope=\"Unrestricted\" > role.yaml",
"kubectl create configmap declarative-configurations \\ 1 --from-file permission-set.yaml --from-file role.yaml -o yaml --namespace=stackrox > declarative-configs.yaml",
"kubectl apply -f declarative-configs.yaml 1",
"name: A sample auth provider minimumRole: Analyst 1 uiEndpoint: central.custom-domain.com:443 2 extraUIEndpoints: 3 - central-alt.custom-domain.com:443 groups: 4 - key: email 5 value: [email protected] role: Admin 6 - key: groups value: reviewers role: Analyst requiredAttributes: 7 - key: org_id value: \"12345\" claimMappings: 8 - path: org_id value: my_org_id oidc: 9 issuer: sample.issuer.com 10 mode: auto 11 clientID: CLIENT_ID clientSecret: CLIENT_SECRET clientSecret: CLIENT_SECRET iap: 12 audience: audience saml: 13 spIssuer: sample.issuer.com metadataURL: sample.provider.com/metadata saml: 14 spIssuer: sample.issuer.com cert: | 15 ssoURL: saml.provider.com idpIssuer: idp.issuer.com userpki: certificateAuthorities: | 16 certificate 17 openshift: 18 enable: true",
"name: A sample permission set description: A sample permission set created declaratively resources: - resource: Integration 1 access: READ_ACCESS 2 - resource: Administration access: READ_WRITE_ACCESS",
"name: A sample access scope description: A sample access scope created declaratively rules: included: - cluster: secured-cluster-A 1 namespaces: - namespaceA - cluster: secured-cluster-B 2 clusterLabelSelectors: - requirements: - requirements: - key: kubernetes.io/metadata.name operator: IN 3 values: - production - staging - environment",
"name: A sample role description: A sample role created declaratively permissionSet: A sample permission set 1 accessScope: Unrestricted 2"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/declarative-configuration-using |
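The chapter above recommends storing authentication provider configurations in a secret for greater security, but only shows the config map variant. The command below is a hedged sketch of the secret-based equivalent: auth-provider.yaml is a hypothetical file containing a declarative authentication provider configuration, and the secret name must match a declarative-configuration mount that was set up when Central was installed. Use oc instead of kubectl on OpenShift Container Platform.
# Store an authentication provider configuration in a secret instead of a config map
kubectl create secret generic sensitive-declarative-configurations \
  --from-file=auth-provider.yaml \
  --namespace=stackrox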
4.343. vte | 4.343. vte 4.343.1. RHBA-2011:1204 - vte bug fix update An updated vte package that fixes one bug is now available for Red Hat Enterprise Linux 6. VTE is a terminal emulator widget for use with GTK+ 2.0. Bug Fix BZ# 658774 Previously, setting a cursor color did not work correctly: the terminal (text) cursor could be rendered invisible in some applications that use VTE. With this update, the bug has been fixed so that the cursor is now rendered properly and is visible as expected. All vte users are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/vte
Chapter 12. Managing persistent volume claims | Chapter 12. Managing persistent volume claims Important Expanding PVCs is not supported for PVCs backed by OpenShift Data Foundation. 12.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod. Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . For example: Verify that the new configuration is being used. Click Workloads Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 12.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 12.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Storage systems tab, select the storage system and then click Overview Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage Persistent Volume Claims Search for the required PVC using the Filter textbox. 
Click on the PVC name and navigate to Events . Address the events as required or as directed. 12.4. Dynamic provisioning 12.4.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types. 12.4.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS, NFS, and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file. 12.4.3. 
Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources (listed as storage type, provisioner plug-in name, and notes):
OpenStack Cinder: kubernetes.io/cinder
AWS Elastic Block Store (EBS): kubernetes.io/aws-ebs. For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.
AWS Elastic File System (EFS): Dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.
Azure Disk: kubernetes.io/azure-disk
Azure File: kubernetes.io/azure-file. The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD): kubernetes.io/gce-pd. In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.
VMware vSphere: kubernetes.io/vsphere-volume
Red Hat Virtualization: csi.ovirt.org
Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. | [
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim",
"volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>",
"volumes: - name: mypd persistentVolumeClaim: claimName: myclaim"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/managing-persistent-volume-claims_osp |
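The chapter above drives everything through the web console; as a rough command-line equivalent, the same claim can be created and watched with oc. This is only a sketch: the project name my-project and the storage class ocs-storagecluster-ceph-rbd are assumptions, so substitute the project and the OpenShift Data Foundation storage class that exist in your cluster.
$ cat <<EOF | oc create -n my-project -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                 # PVC name used in the examples above
spec:
  accessModes:
    - ReadWriteOnce             # RWO; RWX Shared access is not supported on IBM FlashSystem
  volumeMode: Filesystem        # default volume mode for RBD-backed claims
  resources:
    requests:
      storage: 5Gi              # size as per application requirement
  storageClassName: ocs-storagecluster-ceph-rbd   # assumed ODF storage class name
EOF
$ oc get pvc myclaim -n my-project    # wait until STATUS shows Bound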
Chapter 17. Debugging low latency node tuning status | Chapter 17. Debugging low latency node tuning status Use the PerformanceProfile custom resource (CR) status fields for reporting tuning status and debugging latency issues in the cluster node. 17.1. Debugging low latency CNF tuning status The PerformanceProfile custom resource (CR) contains status fields for reporting tuning status and debugging latency degradation issues. These fields report on conditions that describe the state of the operator's reconciliation functionality. A typical issue can arise when the machine config pools that are attached to the performance profile are in a degraded state, causing the PerformanceProfile status to degrade. In this case, the machine config pool issues a failure message. The Node Tuning Operator contains the performanceProfile.spec.status.Conditions status field: Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded The Status field contains Conditions that specify Type values that indicate the status of the performance profile: Available All machine configs and Tuned profiles have been created successfully and are available for the cluster components that are responsible for processing them (NTO, MCO, Kubelet). Upgradeable Indicates whether the resources maintained by the Operator are in a state that is safe to upgrade. Progressing Indicates that the deployment process from the performance profile has started. Degraded Indicates an error if: Validation of the performance profile has failed. Creation of all relevant components did not complete successfully. Each of these types contains the following fields: Status The state for the specific type ( true or false ). Timestamp The transaction timestamp. Reason string The machine readable reason. Message string The human readable reason describing the state and error details, if any. 17.1.1. Machine config pools A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance profiles that encompass kernel args, kube config, huge pages allocation, and deployment of rt-kernel. The Performance Profile controller monitors changes in the MCP and updates the performance profile status accordingly. The only condition returned by the MCP to the performance profile status is when the MCP is Degraded , which leads to performanceProfile.status.condition.Degraded = true . 
Example The following example is for a performance profile with an associated machine config pool ( worker-cnf ) that was created for it: The associated machine config pool is in a degraded state: # oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h The describe section of the MCP shows the reason: # oc describe mcp worker-cnf Example output Message: Node node-worker-cnf is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found" Reason: 1 nodes are reporting degraded status on sync The degraded state should also appear under the performance profile status field marked as degraded = true : # oc describe performanceprofiles performance Example output Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found". Reason: MCPDegraded Status: True Type: Degraded 17.2. Collecting low latency tuning debugging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including node tuning, NUMA topology, and other information needed to debug issues with low latency setup. For prompt support, supply diagnostic information for both OpenShift Container Platform and low latency tuning. 17.2.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as: Resource definitions Audit logs Service logs You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product. When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in your current working directory. 17.2.2. Gathering low latency tuning data Use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with low latency tuning, including: The Node Tuning Operator namespaces and child objects. MachineConfigPool and associated MachineConfig objects. The Node Tuning Operator and associated Tuned objects. Linux kernel command line options. CPU and NUMA topology Basic PCI device information and NUMA locality. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. Procedure Navigate to the directory where you want to store the must-gather data. 
Collect debugging information by running the following command: USD oc adm must-gather Example output [must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at "<cluster_version>" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version... [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default... [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift... [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system... [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd... ... Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at "<cluster_version>" ClusterOperators: All healthy and stable Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1 1 Replace must-gather-local.5421342344627712289// with the directory name created by the must-gather tool. Note Create a compressed file to attach the data to a support case or to use with the Performance Profile Creator wrapper script when you create a performance profile. Attach the compressed file to your support case on the Red Hat Customer Portal . Additional resources Gathering data about your cluster with the must-gather tool Managing nodes with MachineConfig and KubeletConfig CRs Using the Node Tuning Operator Configuring huge pages at boot time How huge pages are consumed by apps | [
"Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h",
"oc describe mcp worker-cnf",
"Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync",
"oc describe performanceprofiles performance",
"Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded",
"oc adm must-gather",
"[must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable",
"tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/scalability_and_performance/cnf-debugging-low-latency-tuning-status |
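The Degraded condition discussed above can also be read directly instead of scanning the full describe output. A small sketch, assuming the performance profile is named performance as in the example; the jsonpath expressions simply filter the Degraded entry out of status.conditions:
$ oc get performanceprofile performance -o jsonpath='{.status.conditions[?(@.type=="Degraded")].status}{"\n"}'
$ oc get performanceprofile performance -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}{"\n"}'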
Chapter 8. Improving the latency in a multi-supplier replication environment | Chapter 8. Improving the latency in a multi-supplier replication environment In certain multi-supplier replication environments, for example if the servers are connected over a wide area network (WAN), the replication latency can be high if multiple suppliers receive updates at the same time. This happens when one supplier exclusively accesses a replica without releasing it for a long time. In such situations, other suppliers cannot send updates to this consumer, which increases the replication latency. To release a replica after a fixed amount of time, set the nsds5ReplicaReleaseTimeout parameter on suppliers and hubs. Note The default value of 60 seconds is ideal for most environments. A value set too high or too low can have a negative impact on the replication performance. If you set the value too low, replication servers are constantly reacquiring each other, and servers are not able to send many updates. In a high-traffic replication environment, a longer timeout can improve situations where one supplier exclusively accesses a replica. However, in most cases, a value higher than 120 seconds slows down replication. 8.1. Setting the replication release timeout using the command line To improve the replication efficiency in a multi-supplier replication environment, update the replication release timeout value on all hubs and suppliers. Prerequisites You configured replication between multiple suppliers and hubs. Procedure Set the timeout value for the suffix: # dsconf -D " cn=Directory Manager " ldap://supplier.example.com replication set --suffix=" dc=example,dc=com " --repl-release-timeout= 70 This command changes the replication timeout of the dc=example,dc=com suffix to 70 seconds. Restart the instance: # dsctl instance_name restart 8.2. Setting the replication release timeout using the web console To improve the replication efficiency in a multi-supplier replication environment, update the replication release timeout value on all hubs and suppliers. Prerequisites You configured replication between multiple suppliers and hubs. Procedure On the Replication tab, select the suffix entry. Click Show Advanced Settings . Update the value in the Replication Release Timeout field. Click Save Configuration . | [
"dsconf -D \" cn=Directory Manager \" ldap://supplier.example.com replication set --suffix=\" dc=example,dc=com \" --repl-release-timeout= 70",
"dsctl instance_name restart"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_improving-the-latency-in-a-multi-supplier-replication-environment_configuring-and-managing-replication |
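To confirm that the new value took effect after the restart, the replica configuration can be read back with the same tool. A minimal sketch that reuses the suffix and URL from the procedure above and assumes a current dsconf that provides the replication get subcommand:
# dsconf -D "cn=Directory Manager" ldap://supplier.example.com replication get --suffix="dc=example,dc=com" | grep -i releasetimeout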
18.2.4. Moving a Mount Point | 18.2.4. Moving a Mount Point To change the directory in which a file system is mounted, use the following command: See Example 18.8, "Moving an Existing NFS Mount Point" for an example usage. Example 18.8. Moving an Existing NFS Mount Point An NFS storage contains user directories and is already mounted in /mnt/userdirs/ . As root , move this mount point to /home by using the following command: To verify the mount point has been moved, list the content of both directories: | [
"mount --move old_directory new_directory",
"~]# mount --move /mnt/userdirs /home",
"~]# ls /mnt/userdirs ~]# ls /home jill joe"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/sect-using_the_mount_command-mounting-moving |
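Keep in mind that mount --move only changes the running system; after a reboot the share comes back at its original mount point unless /etc/fstab is updated. A hedged sketch of the corresponding fstab entry, assuming the export is served from nfs.example.com (the real server name, export path, and mount options will differ):
nfs.example.com:/exports/userdirs /home nfs defaults 0 0
After editing the file, the active mount can be checked with:
~]# mount | grep /home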
Chapter 1. Installing software in GNOME | Chapter 1. Installing software in GNOME You can install applications and other software packages using several methods in GNOME. 1.1. Prerequisites You have administrator permissions on the system. 1.2. The GNOME Software application GNOME Software is a utility that enables you to install and update applications and software components in a graphical interface. GNOME Software provides a catalog of graphical applications, which are the applications that include a *.desktop file. The available applications are grouped into multiple categories according to their purpose. GNOME Software uses the PackageKit and Flatpak technologies as its back ends. 1.3. Installing an application using GNOME Software This procedure installs a graphical application using the GNOME Software utility. Procedure Launch the GNOME Software application. Find the application that you want to install using any of the following methods: Click the search button in the upper-left corner of the window and type the name of the application. Browse the application categories in the Explore tab. Click the selected application. Click Install . 1.4. Installing an application to open a file type This procedure installs an application that can open a given file type. Prerequisites You can access a file of the required file type in your file system. Procedure Try opening a file that is associated with an application that is currently not installed on your system. GNOME automatically identifies the suitable application that can open the file, and offers to download the application. 1.5. Installing an RPM package file in GNOME This procedure installs an RPM software package that you manually downloaded as a file. Prerequisites You have downloaded the required RPM package. Procedure In the Files application, open the directory that stores the downloaded RPM package. Note By default, downloaded files are stored in the /home/ user /Downloads/ directory. Double-click the RPM package file to install it. 1.6. Installing an application from the Activities Overview search This procedure installs a graphical application from search results on the GNOME Activities Overview screen. Procedure Open the Activities Overview screen. Type the name of the required application in the search entry. The search results display the application's icon, name, and description. Click the application's icon to open the Software application. Click Install to finish the installation in Software . Verification Click Open to launch the installed application. 1.7. Additional resources Managing software with the DNF tool Chapter 2, Installing applications using Flatpak | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/administering_the_system_using_the_gnome_desktop_environment/assembly_installing-software-in-gnome_administering-the-system-using-the-gnome-desktop-environment
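When a graphical session is not available, the same downloaded package can be installed from a terminal instead. This is a sketch outside the GNOME workflow above; the file name is a placeholder, and dnf resolves the package's dependencies from the enabled repositories:
$ sudo dnf install /home/user/Downloads/package-name.rpm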
3.6. Saving Network Packet Filter Settings | 3.6. Saving Network Packet Filter Settings After configuring the appropriate network packet filters for your situation, save the settings so they get restored after a reboot. For iptables , type the following command: /sbin/service iptables save This saves the settings in /etc/sysconfig/iptables so they can be recalled at boot time. Once this file is written, you are able to use the /sbin/service command to start, stop, and check the status (using the status switch) of iptables . The /sbin/service command will automatically load the appropriate module for you. For an example of how to use the /sbin/service command, see Section 2.3, "Starting the Piranha Configuration Tool Service" . Finally, you need to be sure the appropriate service is set to activate on the proper runlevels. For more on this, see Section 2.1, "Configuring Services on the LVS Routers" . The next chapter explains how to use the Piranha Configuration Tool to configure the LVS router and describes the steps necessary to activate LVS. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-fwm-sav-vsa
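As a compact recap of the steps above, the following commands save the current rules and then mark the iptables service to start in the usual multi-user runlevels; the runlevel list 2345 is the conventional choice and can be adjusted for your routers:
/sbin/service iptables save
/sbin/chkconfig --level 2345 iptables on
/sbin/chkconfig --list iptables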
Chapter 1. Introducing virtualization in RHEL | Chapter 1. Introducing virtualization in RHEL If you are unfamiliar with the concept of virtualization or its implementation in Linux, the following sections provide a general overview of virtualization in RHEL 9: its basics, advantages, components, and other possible virtualization solutions provided by Red Hat. 1.1. What is virtualization? RHEL 9 provides the virtualization functionality, which enables a machine running RHEL 9 to host multiple virtual machines (VMs), also referred to as guests . VMs use the host's physical hardware and computing resources to run a separate, virtualized operating system ( guest OS ) as a user-space process on the host's operating system. In other words, virtualization makes it possible to have operating systems within operating systems. VMs enable you to safely test software configurations and features, run legacy software, or optimize the workload efficiency of your hardware. For more information about the benefits, see Advantages of virtualization . For more information about what virtualization is, see the Virtualization topic page . steps To start using virtualization in Red Hat Enterprise Linux 9, see Enabling virtualization in Red Hat Enterprise Linux 9 . In addition to Red Hat Enterprise Linux 9 virtualization, Red Hat offers a number of specialized virtualization solutions, each with a different user focus and features. For more information, see Red Hat virtualization solutions . 1.2. Advantages of virtualization Using virtual machines (VMs) has the following benefits in comparison to using physical machines: Flexible and fine-grained allocation of resources A VM runs on a host machine, which is usually physical, and physical hardware can also be assigned for the guest OS to use. However, the allocation of physical resources to the VM is done on the software level, and is therefore very flexible. A VM uses a configurable fraction of the host memory, CPUs, or storage space, and that configuration can specify very fine-grained resource requests. For example, what the guest OS sees as its disk can be represented as a file on the host file system, and the size of that disk is less constrained than the available sizes for physical disks. Software-controlled configurations The entire configuration of a VM is saved as data on the host, and is under software control. Therefore, a VM can easily be created, removed, cloned, migrated, operated remotely, or connected to remote storage. Separation from the host A guest OS runs on a virtualized kernel, separate from the host OS. This means that any OS can be installed on a VM, and even if the guest OS becomes unstable or is compromised, the host is not affected in any way. Space and cost efficiency A single physical machine can host a large number of VMs. Therefore, it avoids the need for multiple physical machines to do the same tasks, and thus lowers the space, power, and maintenance requirements associated with physical hardware. Software compatibility Because a VM can use a different OS than its host, virtualization makes it possible to run applications that were not originally released for your host OS. For example, using a RHEL 7 guest OS, you can run applications released for RHEL 7 on a RHEL 9 host system. Note Not all operating systems are supported as a guest OS in a RHEL 9 host. For details, see Recommended features in RHEL 9 virtualization . 1.3. 
Virtual machine components and their interaction Virtualization in RHEL 9 consists of the following principal software components: Hypervisor The basis of creating virtual machines (VMs) in RHEL 9 is the hypervisor , a software layer that controls hardware and enables running multiple operating systems on a host machine. The hypervisor includes the Kernel-based Virtual Machine (KVM) module and virtualization kernel drivers. These components ensure that the Linux kernel on the host machine provides resources for virtualization to user-space software. At the user-space level, the QEMU emulator simulates a complete virtualized hardware platform that the guest operating system can run in, and manages how resources are allocated on the host and presented to the guest. In addition, the libvirt software suite serves as a management and communication layer, making QEMU easier to interact with, enforcing security rules, and providing a number of additional tools for configuring and running VMs. XML configuration A host-based XML configuration file (also known as a domain XML file) determines all settings and devices in a specific VM. The configuration includes: Metadata such as the name of the VM, time zone, and other information about the VM. A description of the devices in the VM, including virtual CPUs (vCPUS), storage devices, input/output devices, network interface cards, and other hardware, real and virtual. VM settings such as the maximum amount of memory it can use, restart settings, and other settings about the behavior of the VM. For more information about the contents of an XML configuration, see Sample virtual machine XML configuration . Component interaction When a VM is started, the hypervisor uses the XML configuration to create an instance of the VM as a user-space process on the host. The hypervisor also makes the VM process accessible to the host-based interfaces, such as the virsh , virt-install , and guestfish utilities, or the web console GUI. When these virtualization tools are used, libvirt translates their input into instructions for QEMU. QEMU communicates the instructions to KVM, which ensures that the kernel appropriately assigns the resources necessary to carry out the instructions. As a result, QEMU can execute the corresponding user-space changes, such as creating or modifying a VM, or performing an action in the VM's guest operating system. Note While QEMU is an essential component of the architecture, it is not intended to be used directly on RHEL 9 systems, due to security concerns. Therefore, qemu-* commands are not supported by Red Hat, and it is highly recommended to interact with QEMU by using libvirt. For more information about the host-based interfaces, see Tools and interfaces for virtualization management . Figure 1.1. RHEL 9 virtualization architecture 1.4. Tools and interfaces for virtualization management You can manage virtualization in RHEL 9 by using the command line (CLI) or several graphical user interfaces (GUIs). Command-line interface The CLI is the most powerful method of managing virtualization in RHEL 9. Prominent CLI commands for virtual machine (VM) management include: virsh - A versatile virtualization command-line utility and shell with a great variety of purposes, depending on the provided arguments. 
For example: Starting and shutting down a VM - virsh start and virsh shutdown Listing available VMs - virsh list Creating a VM from a configuration file - virsh create Entering a virtualization shell - virsh For more information, see the virsh(1) man page on your system. virt-install - A CLI utility for creating new VMs. For more information, see the virt-install(1) man page on your system. virt-xml - A utility for editing the configuration of a VM. guestfish - A utility for examining and modifying VM disk images. For more information, see the guestfish(1) man page on your system. Graphical interfaces You can use the following GUIs to manage virtualization in RHEL 9: The RHEL 9 web console , also known as Cockpit , provides a remotely accessible and easy to use graphical user interface for managing VMs and virtualization hosts. For instructions on basic virtualization management with the web console, see Managing virtual machines in the web console . 1.5. Red Hat virtualization solutions The following Red Hat products are built on top of RHEL 9 virtualization features and expand the KVM virtualization capabilities available in RHEL 9. In addition, many limitations of RHEL 9 virtualization do not apply to these products: OpenShift Virtualization Based on the KubeVirt technology, OpenShift Virtualization is a part of the Red Hat OpenShift Container Platform, and makes it possible to run virtual machines in containers. For more information about OpenShift Virtualization see the Red Hat Hybrid Cloud pages. Red Hat OpenStack Platform (RHOSP) Red Hat OpenStack Platform offers an integrated foundation to create, deploy, and scale a secure and reliable public or private OpenStack cloud. For more information about Red Hat OpenStack Platform, see the Red Hat Customer Portal or the Red Hat OpenStack Platform documentation suite . Note For details on virtualization features not supported in RHEL but supported in other Red Hat virtualization solutions, see: Unsupported features in RHEL 9 virtualization | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/introducing-virtualization-in-rhel_configuring-and-managing-virtualization |
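To make the CLI tools listed above concrete, here is a minimal sketch of creating and inspecting a VM with virt-install and virsh; the VM name, ISO path, sizes, and os-variant value are assumptions and should be replaced with values that match your installation media and resources:
# virt-install --name testguest --memory 2048 --vcpus 2 --disk size=20 --cdrom /var/lib/libvirt/images/rhel9.iso --os-variant rhel9.0
# virsh list --all
# virsh start testguest
# virsh shutdown testguest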
3.6. Managing CA-Related Profiles | 3.6. Managing CA-Related Profiles Certificate profiles and extensions must be used to set rules on how subordinate CAs can issue certificates. There are two parts to this: Managing the CA signing certificate Defining issuance rules 3.6.1. Setting Restrictions on CA Certificates When a subordinate CA is created, the root CA can impose limits or restrictions on the subordinate CA. For example, the root CA can dictate the maximum depth of valid certification paths (the number of subordinate CAs allowed to be chained below the new CA) by setting the pathLenConstraint field of the Basic Constraints extension in the CA signing certificate. A certificate chain generally consists of an entity certificate, zero or more intermediate CA certificates, and a root CA certificate. The root CA certificate is either self-signed or signed by an external trusted CA. Once issued, the root CA certificate is loaded into a certificate database as a trusted CA. An exchange of certificates takes place when performing a TLS handshake, when sending an S/MIME message, or when sending a signed object. As part of the handshake, the sender is expected to send the subject certificate and any intermediate CA certificates needed to link the subject certificate to the trusted root. For certificate chaining to work properly, the certificates should have the following properties: CA certificates must have the Basic Constraints extension. CA certificates must have the keyCertSign bit set in the Key Usage extension. When the CAs generate new keys, they must add the Authority Key Identifier extension to all subject certificates. This extension helps distinguish the certificates from the older CA certificates. The CA certificates must contain the Subject Key Identifier extension. For more information on certificates and their extensions, see Internet X.509 Public Key Infrastructure - Certificate and Certificate Revocation List (CRL) Profile (RFC 5280) , available at RFC 5280 . These extensions can be configured through the certificate profile enrollment pages. By default, the CA contains the required and reasonable configuration settings, but it is possible to customize these settings. Note This procedure describes editing the CA certificate profile used by a CA to issue CA certificates to its subordinate CAs. The profile that is used when a CA instance is first configured is /var/lib/pki/ instance_name /ca/conf/caCert.profile . This profile cannot be edited in pkiconsole (since it is only available before the instance is configured). It is possible to edit the policies for this profile in the template file before the CA is configured using a text editor. To modify the default in the CA signing certificate profile used by a CA: If the profile is currently enabled, it must be disabled before it can be edited. Open the agent services page, select Manage Certificate Profiles from the left navigation menu, select the profile, and click Disable profile . Open the CA Console. In the left navigation tree of the Configuration tab, select Certificate Manager , then Certificate Profiles . Select caCACert, or the appropriate CA signing certificate profile, from the right window, and click Edit/View . In the Policies tab of the Certificate Profile Rule Editor , select and edit the Key Usage or Extended Key Usage Extension Default if it exists or add it to the profile. Select the Key Usage or Extended Key Usage Extension Constraint, as appropriate, for the default. 
Set the default values for the CA certificates. For more information, see Section B.1.13, "Key Usage Extension Default" and Section B.1.8, "Extended Key Usage Extension Default" . Set the constraint values for the CA certificates. There are no constraints to be set for a Key Usage extension; for an Extended Key Usage extension, set the appropriate OID constraints for the CA. For more information, see Section B.1.8, "Extended Key Usage Extension Default" . When the changes have been made to the profile, log into the agent services page again, and re-enable the certificate profile. Note pkiconsole is being deprecated. For more information on modifying certificate profiles, see Section 3.2, "Setting up Certificate Profiles" . 3.6.2. Changing the Restrictions for CAs on Issuing Certificates The restrictions on the certificates issued are set by default after the subsystem is configured. These include: Whether certificates can be issued with validity periods longer than the CA signing certificate. The default is to disallow this. The signing algorithm used to sign certificates. The serial number range the CA is able to use to issue certificates. Subordinate CAs have constraints for the validity periods, types of certificates, and the types of extensions which they can issue. It is possible for a subordinate CA to issue certificates that violate these constraints, but a client authenticating a certificate that violates those constraints will not accept that certificate. Check the constraints set on the CA signing certificate before changing the issuing rules for a subordinate CA. To change the certificate issuance rules: Open the Certificate System Console. Select the Certificate Manager item in the left navigation tree of the Configuration tab. Figure 3.1. The General Settings Tab in non-subordinate CAs by default By default, in non-cloned CAs, the General Settings tab of the Certificate Manager menu item contains these options: Override validity nesting requirement. This checkbox sets whether the Certificate Manager can issue certificates with validity periods longer than the CA signing certificate validity period. If this checkbox is not selected and the CA receives a request with validity period longer than the CA signing certificate's validity period, it automatically truncates the validity period to end on the day the CA signing certificate expires. Certificate Serial Number. These fields display the serial number range for certificates issued by the Certificate Manager. The server assigns the serial number in the serial number field to the certificate it issues and the number in the Ending serial number to the last certificate it issues. The serial number range allows multiple CAs to be deployed and balances the number of certificates each CA issues. The combination of an issuer name and a serial number uniquely identifies a certificate. Note The serial number ranges with cloned CAs are fluid. All cloned CAs share a common configuration entry which defines the available range. When one CA starts running low on available numbers, it checks this configuration entry and claims the range. The entry is automatically updated, so that the CA gets a new range. The ranges are defined in begin*Number and end*Number attributes, with separate ranges defined for requests and certificate serial numbers. For example: Serial number management can be enabled for CAs which are not cloned. However, by default, serial number management is disabled unless a system is cloned, when it is automatically enabled. 
The serial number range cannot be updated manually through the console. The serial number ranges are read-only fields. Default Signing Algorithm. Specifies the signing algorithm the Certificate Manager uses to sign certificates. The options are SHA256withRSA , and SHA512withRSA , if the CA's signing key type is RSA. The signing algorithm specified in the certificate profile configuration overrides the algorithm set here. By default, in cloned CAs, the General Settings tab of the Certificate Manager menu item contains these options: Enable serial number management Enable random certificate serial numbers Select both check boxes. Figure 3.2. The General Settings Tab in cloned CAs by default Click Save . Note pkiconsole is being deprecated. 3.6.3. Using Random Certificate Serial Numbers Red Hat Certificate System contains a serial number range management for requests, certificates, and replica IDs. This allows the automation of cloning when installing Identity Management (IdM). There are these ways to reduce the likelihood of hash-based attacks: making part of the certificate serial number unpredictable to the attacker adding a randomly chosen component to the identity making the validity dates unpredictable to the attacker by skewing each one forwards or backwards The random certificate serial number assignment method adds a randomly chosen component to the identity. This method: works with cloning allows resolving conflicts is compatible with the current serial number management method is compatible with the current workflows for administrators, agents, and end entities fixes the existing bugs in sequential serial number management Note Administrators must enable random certificate serial numbers. 3.6.3.1. Enabling Random Certificate Serial Numbers You can enable automatic serial number range management either from the command line or from the console UI. To enable automatic serial number management from the console UI: Tick the Enable serial number management option in the General Settings tab. Figure 3.3. The General Settings Tab when Random Serial Number Assignment is enabled Tick the Enable random certificate serial numbers option. Note pkiconsole is being deprecated. 3.6.4. Allowing a CA Certificate to Be Renewed Past the CA's Validity Period Normally, a certificate cannot be issued with a validity period that ends after the issuing CA certificate's expiration date. If a CA certificate has an expiration date of December 31, 2015, then all of the certificates it issues must expire by or before December 31, 2015. This rule applies to other CA signing certificates issued by a CA - and this makes renewing a root CA certificate almost impossible. Renewing a CA signing certificate means it would necessarily have to have a validity period past its own expiration date. This behavior can be altered using the CA Validity Default. This default allows a setting ( bypassCAnotafter ) which allows a CA certificate to be issued with a validity period that extends past the issuing CA's expiration ( notAfter ) date. Figure 3.4. CA Validity Default Configuration In real deployments, what this means is that a CA certificate for a root CA can be renewed, when it might otherwise be prevented. To enable CA certificate renewals past the original CA's validity date: Open the caCACert.cfg file. The CA Validity Default should be present by default. Set the value to true to allow a CA certificate to be renewed past the issuing CA's validity period. Restart the CA to apply the changes. 
When an agent reviews a renewal request, there is an option in the Extensions/Fields area that allows the agent to choose to bypass the normal validity period constraint. If the agent selects false , the constraint is enforced, even if bypassCAnotafter=true is set in the profile. If the agent selects true when the bypassCAnotafter value is not enabled, then the renewal request is rejected by the CA. Figure 3.5. Bypass CA Constraints Option in the Agent Services Page Note The CA Validity Default only applies to CA signing certificate renewals. Other certificates must still be issued and renewed within the CA's validity period. A separate configuration setting for the CA, ca.enablePastCATime , can be used to allow certificates to be renewed past the CA's validity period. However, this applies to every certificate issued by that CA. Because of the potential security issues, this setting is not recommended for production environments. | [
"pkiconsole https://server.example.com:8443/ca",
"pkiconsole https://server.example.com:8443/ca",
"dbs.beginRequestNumber=1 dbs.beginSerialNumber=1 dbs.enableSerialManagement=true dbs.endRequestNumber=9980000 dbs.endSerialNumber=ffe0000 dbs.ldap=internaldb dbs.newSchemaEntryAdded=true dbs.replicaCloneTransferNumber=5",
"vim /var/lib/pki/ instance_name /ca/conf/caCACert.cfg",
"policyset.caCertSet.2.default.name=CA Certificate Validity Default policyset.caCertSet.2.default.params.range=2922 policyset.caCertSet.2.default.params.startTime=0 policyset.caCertSet.2.default.params.bypassCAnotafter=true"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/managing-ca-related-profiles |
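Because the CA Validity Default change described above is a plain text edit to caCACert.cfg, it can be checked and applied from a shell as well. A sketch, assuming an instance named pki-tomcat; the policyset index can differ between deployments, so grep for the parameter rather than relying on a fixed line:
# grep -n "bypassCAnotafter" /var/lib/pki/pki-tomcat/ca/conf/caCACert.cfg
# systemctl restart pki-tomcatd@pki-tomcat.service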
6.9. Snapshots | 6.9. Snapshots 6.9.1. Creating a Snapshot of a Virtual Machine A snapshot is a view of a virtual machine's operating system and applications on any or all available disks at a given point in time. Take a snapshot of a virtual machine before you make a change to it that may have unintended consequences. You can use a snapshot to return a virtual machine to a previous state. Creating a Snapshot of a Virtual Machine Click Compute -> Virtual Machines . Click a virtual machine's name to go to the details view. Click the Snapshots tab and click Create . Enter a description for the snapshot. Select Disks to include using the check boxes. Note If no disks are selected, a partial snapshot of the virtual machine, without a disk, is created. You can preview this snapshot to view the configuration of the virtual machine. Note that committing a partial snapshot will result in a virtual machine without a disk. Select Save Memory to include a running virtual machine's memory in the snapshot. Click OK . The virtual machine's operating system and applications on the selected disk(s) are stored in a snapshot that can be previewed or restored. The snapshot is created with a status of Locked , which changes to Ok . When you click on the snapshot, its details are shown on the General , Disks , Network Interfaces , and Installed Applications drop-down views in the Snapshots tab. 6.9.2. Using a Snapshot to Restore a Virtual Machine A snapshot can be used to restore a virtual machine to its previous state. Using Snapshots to Restore Virtual Machines Click Compute -> Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Snapshots tab to list the available snapshots. Select a snapshot to restore in the upper pane. The snapshot details display in the lower pane. Click the Preview drop-down menu button and select Custom . Use the check boxes to select the VM Configuration , Memory , and disk(s) you want to restore, then click OK . This allows you to create and restore from a customized snapshot using the configuration and disk(s) from multiple snapshots. The status of the snapshot changes to Preview Mode . The status of the virtual machine briefly changes to Image Locked before returning to Down . Shut down the virtual machine. Start the virtual machine; it runs using the disk image of the snapshot. Click Commit to permanently restore the virtual machine to the condition of the snapshot. Any subsequent snapshots are erased. Alternatively, click the Undo button to deactivate the snapshot and return the virtual machine to its previous state. 6.9.3. Creating a Virtual Machine from a Snapshot You can use a snapshot to create another virtual machine. Creating a Virtual Machine from a Snapshot Click Compute -> Virtual Machines and select a virtual machine. Click the virtual machine's name to go to the details view. Click the Snapshots tab to list the available snapshots. Select a snapshot in the list displayed and click Clone . Enter the Name of the virtual machine. Click OK . After a short time, the cloned virtual machine appears in the Virtual Machines tab in the navigation pane with a status of Image Locked . The virtual machine remains in this state until Red Hat Virtualization completes the creation of the virtual machine. A virtual machine with a preallocated 20 GB hard drive takes about fifteen minutes to create. Sparsely-allocated virtual disks take less time to create than do preallocated virtual disks. 
When the virtual machine is ready to use, its status changes from Image Locked to Down in Compute -> Virtual Machines . 6.9.4. Deleting a Snapshot You can delete a virtual machine snapshot and permanently remove it from your Red Hat Virtualization environment. This operation is only supported on a running virtual machine. Important When you delete a snapshot from an image chain, there must be enough free space in the storage domain to temporarily accommodate both the original volume and the newly merged volume. Otherwise, snapshot deletion will fail and you will need to export and re-import the volume to remove snapshots. This is due to the data from the two volumes being merged in the resized volume and the resized volume growing to accommodate the total size of the two merged images. If the snapshot being deleted is contained in a base image, the volume subsequent to the volume containing the snapshot being deleted is extended to include the base volume. If the snapshot being deleted is contained in a QCOW2 (thin provisioned), non-base image hosted on internal storage, the successor volume is extended to include the volume containing the snapshot being deleted. Deleting a Snapshot Click Compute -> Virtual Machines . Click the virtual machine's name to go to the details view. Click the Snapshots tab to list the snapshots for that virtual machine. Select the snapshot to delete. Click Delete . Click OK . Note If the deletion fails, fix the underlying problem (for example, a failed host, an inaccessible storage device, or even a temporary network issue) and try again. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-snapshots
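The console steps above can also be scripted through the REST API when snapshot creation needs to be automated. A heavily hedged sketch rather than a documented procedure: the engine FQDN, credentials, and VM ID are placeholders, and persist_memorystate mirrors the Save Memory check box:
$ curl -k -u admin@internal:password -H 'Content-Type: application/xml' -d '<snapshot><description>before upgrade</description><persist_memorystate>true</persist_memorystate></snapshot>' https://engine.example.com/ovirt-engine/api/vms/<vm_id>/snapshots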
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.3/providing-direct-documentation-feedback_openjdk |
Chapter 4. Support for FIPS cryptography | Chapter 4. Support for FIPS cryptography You can install an OpenShift Container Platform cluster in FIPS mode. OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL 9 computer that is configured to operate in FIPS mode, and you must use a FIPS-capable version of the installation program. See the section titled Obtaining a FIPS-capable installation program using `oc adm extract` . For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 4.1. Obtaining a FIPS-capable installation program using oc adm extract OpenShift Container Platform requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. You can obtain this binary by extracting it from the release image by using the OpenShift CLI ( oc ). After you have obtained the binary, you proceed with the cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . Prerequisites You have installed the OpenShift CLI ( oc ) with version 4.16 or newer. Procedure Extract the FIPS-capable binary from the installation program by running the following command: USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=openshift-install-fips --to "USD{extract_dir}" USD{RELEASE_IMAGE} where: <pullsecret_file> Specifies the name of a file that contains your pull secret. <extract_dir> Specifies the directory where you want to extract the binary. <RELEASE_IMAGE> Specifies the Quay.io URL of the OpenShift Container Platform release you are using. For more information on finding the release image, see Extracting the OpenShift Container Platform installation program . Proceed with cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . Additional resources Extracting the OpenShift Container Platform installation program 4.2. Obtaining a FIPS-capable installation program using the public OpenShift mirror OpenShift Container Platform requires the use of a FIPS-capable installation binary to install a cluster in FIPS mode. You can obtain this binary by downloading it from the public OpenShift mirror. 
After you have obtained the binary, proceed with the cluster installation, replacing all instances of the openshift-install binary with openshift-install-fips . Prerequisites You have access to the internet. Procedure Download the installation program from https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-4.17/openshift-install-rhel9-amd64.tar.gz . Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-rhel9-amd64.tar.gz Proceed with cluster installation, replacing all instances of the openshift-install command with openshift-install-fips . 4.3. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 4.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.17 Attributes Limitations FIPS support in RHEL 9 and RHCOS operating systems. The FIPS implementation does not use a function that performs hash computation and signature generation or validation in a single step. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 9 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using x86_64 , ppc64le , and s390x architectures. 4.4. FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 4.4.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 4.4.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 4.4.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. 4.5. 
Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . Amazon Web Services Microsoft Azure Bare metal Google Cloud Platform IBM Cloud(R) IBM Power(R) IBM Z(R) and IBM(R) LinuxONE IBM Z(R) and IBM(R) LinuxONE with RHEL KVM IBM Z(R) and IBM(R) LinuxONE in an LPAR Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Installing the system in FIPS mode . | [
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=openshift-install-fips --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"tar -xvf openshift-install-rhel9-amd64.tar.gz"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installation_overview/installing-fips |
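The following is a minimal, illustrative sketch of the install-config.yaml setting that the FIPS chapter above relies on; every value except fips: true (the domain, cluster name, platform section, and credential placeholders) is an assumption for illustration and must be replaced with values for your environment:

apiVersion: v1
baseDomain: example.com            # placeholder domain
metadata:
  name: fips-cluster               # placeholder cluster name
platform:
  aws: {}                          # substitute the platform section for your infrastructure
fips: true                         # must be set before the cluster boots for the first time; cannot be enabled later
pullSecret: '<pull_secret>'        # placeholder
sshKey: '<ssh_public_key>'         # placeholder

After installation, the AES CBC encryption of the etcd data store mentioned above is applied by following the Encrypting etcd data procedure; the command below is a hedged sketch of that step using the documented aescbc encryption type:

$ oc patch apiserver cluster --type=merge -p '{"spec":{"encryption":{"type":"aescbc"}}}'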
Chapter 6. Enabling simple content access with Red Hat Subscription Management | Chapter 6. Enabling simple content access with Red Hat Subscription Management The process of migrating Red Hat accounts and organizations that primarily use Red Hat Subscription Management for subscription management to use simple content access begins on October 25, 2024 and will be complete in November 2024. 6.1. Enabling simple content access for your Red Hat Subscription Management managed systems Manual activation of simple content access is no longer necessary. 6.2. Completing post-enablement steps for Red Hat Subscription Management After the migration for your Red Hat account and organization is complete and simple content access is enabled, review the articles in the Additional resources section for more information about using the simple content access mode and configuring and working with the services in the Hybrid Cloud Console. Ensure that you understand how this change to the simple content access mode affects the workflow that your organization uses. If you had any customized processes that relied upon artifacts from the old entitlement-based mode, such as checking for valid subscriptions on a per-system basis, these processes will need to be discarded or redesigned to be compatible with the new simple content access workflow. Find out more about additional services in the Hybrid Cloud Console that can improve your subscription and system management processes and determine if you are taking advantage of them. See the Hybrid Cloud Console at https://console.redhat.com to explore these services. Authorize your Red Hat organization's users to access the services of the Red Hat Hybrid Cloud Console by setting up user groups, assigning roles, and doing other tasks in the role-based user access control (RBAC) system. Authorize your Red Hat organization's users to view system inventory data with appropriate filtering by creating workspaces that classify systems into logical groups. Configure Hybrid Cloud Console notifications so that alerts about specific events in Hybrid Cloud Console services can go to a named group of users or go to applications, APIs, or webhooks for additional custom actions. Activate the subscriptions service, if this service is not already active, to begin account-wide usage reporting of Red Hat products. Explore the capabilities, including subscription and system management capabilities, of the Hybrid Cloud Console and how workflows for some of these capabilities might have changed from the workflows that were previously available in the Red Hat Customer Portal at access.redhat.com: Tracking usage reporting for Red Hat products and variants on the product platforms pages of the subscriptions service. Tracking and managing your system infrastructure in the inventory service. Using activation keys to help with system registration, setting system purpose, and enabling repositories. Creating and exporting manifests for use within your Red Hat Satellite environment to find, access, and download content from the Red Hat Content Delivery Network. Determining whether the additional capabilities of Red Hat Insights, including the advisor, vulnerability, remediation, patch, and other services are right for your environment. Additional resources The following articles are actively being updated to address customer questions and concerns during and after the account migration process that began on October 25, 2024. 
Transition of Red Hat's subscription services to the Red Hat Hybrid Cloud Console (console.redhat.com) Transitioning Red Hat Subscription Management to the Hybrid Cloud Console Simple Content Access | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_simple_content_access/proc-enabling-simplecontent-with-rhsm_assembly-simplecontent-ctxt |
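If any of your existing automation checked per-system entitlement status, a simple starting point for redesigning it is to query the access mode directly on a registered system. This is a hedged example; the exact output wording depends on your subscription-manager version, so look for the line indicating that the content access mode is set to Simple Content Access rather than matching exact text:

$ subscription-manager status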
Chapter 2. Planning your deployment | Chapter 2. Planning your deployment To deploy and operate your Red Hat OpenStack Services on OpenShift (RHOSO) environment, you use the tools and container infrastructure provided by the Red Hat OpenShift Container Platform (RHOCP). RHOCP uses a modular system of Operators to extend the functions of your RHOCP cluster. The RHOSO OpenStack Operator ( openstack-operator ) installs and runs a RHOSO control plane within RHOCP and automates the deployment of a RHOSO data plane. The data plane is the collection of nodes that host RHOSO workloads. The OpenStack Operator prepares the nodes with the operating system configuration that is required to host the RHOSO services and workloads. The OpenStack Operator manages a set of Custom Resource Definitions (CRDs) that define how you can deploy and manage the infrastructure and configuration of the RHOSO control plane and the data plane nodes. To create a RHOSO cloud with a RHOCP-hosted control plane, you use the OpenStack Operator CRDs to create a set of custom resources (CRs) that configure your control plane and your data plane. 2.1. How to deploy the cloud infrastructure To create a RHOSO cloud with a RHOCP hosted control plane, you must complete the following tasks: Install OpenStack Operator ( openstack-operator ) on an operational RHOCP cluster. Provide secure access to the RHOSO services. Create and configure the control plane network. Create and configure the data plane networks. Create a control plane for your environment. Customize the control plane for your environment. Create and configure the data plane nodes. Optional: Configure a storage solution for the RHOSO deployment. You perform the control plane installation tasks and all data plane creation tasks on a workstation that has access to the RHOCP cluster. Install OpenStack Operator ( openstack-operator ) on an operational RHOCP cluster The RHOSO administrator installs the OpenStack Operator on the RHOCP cluster. For information about how to install the OpenStack Operator, see Installing and preparing the Operators in the Deploying Red Hat OpenStack Services on OpenShift guide. Provide secure access to the RHOSO services You must create a Secret custom resource (CR) to provide secure access to the RHOSO service pods. For information, see Providing secure access to the Red Hat OpenStack Platform services in the Deploying Red Hat OpenStack Services on OpenShift guide. Create and configure the control plane network You use RHOCP Operators to prepare the RHOCP cluster for the RHOSO control plane network. For information, see Preparing RHOCP for RHOSP networks in the Deploying Red Hat OpenStack Services on OpenShift guide. Create and configure the data plane networks You use RHOCP Operators to prepare the RHOCP cluster for the RHOSO data plane network. For information, see Configuring the data plane network in the Deploying Red Hat OpenStack Services on OpenShift guide. Create a control plane for your environment You configure and create an initial control plane with the recommended configurations for each service. For information, see Creating the control plane in the Deploying Red Hat OpenStack Services on OpenShift guide. Customize the control plane for your environment You can customize your deployed control plane with the services required for your environment. For information, see Customizing the control plane in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. 
Create and configure the data plane nodes You configure and create a simple data plane with the minimum features. For information, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. Customize the data plane for your environment You can customize your deployed data plane with the features and configuration required for your environment. For information, see Customizing the data plane in the Customizing the Red Hat OpenStack Services on OpenShift deployment guide. Configure a storage solution for the RHOSO deployment You can optionally configure a storage solution for your RHOSO deployment. For information, see the Configuring persistent storage guide. 2.2. Custom resource definitions (CRDs) The OpenStack Operator includes a set of custom resource definitions (CRDs) that you can use to create and manage RHOSP resources. Use the following command to view a complete list of the RHOSP CRDs: USD oc get crd | grep "^openstack" Use the following command to view the definition for a specific CRD: Use the following command to view descriptions of the fields you can use to configure a specific CRD: Additional resources Managing resources from custom resource definitions 2.2.1. CRD naming conventions Each CRD contains multiple names in the spec.names section. Use these names depending on the context of your actions: Use kind when you create and interact with resource manifests: The kind name in the resource manifest correlates to the kind name in the respective CRD. Use singular when you interact with a single resource: | [
"oc describe crd openstackcontrolplane Name: openstackcontrolplane.openstack.org Namespace: Labels: operators.coreos.com/operator.openstack= Annotations: cert-manager.io/inject-ca-from: USD(CERTIFICATE_NAMESPACE)/USD(CERTIFICATE_NAME) controller-gen.kubebuilder.io/version: v0.3.0 API Version: apiextensions.k8s.io/v1 Kind: CustomResourceDefinition",
"oc explain openstackcontrolplane.spec KIND: OpenStackControlPlane VERSION: core.openstack.org/v1beta1 RESOURCE: spec <Object> DESCRIPTION: <empty> FIELDS: ceilometer <Object> cinder <Object> dns <Object> extraMounts <[]Object>",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane",
"oc describe openstackcontrolplane/compute"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/assembly_planning-to-deploy-a-rhoso-environment |
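To round out the naming conventions described in the chapter above, the plural form is what you typically pass to oc get when listing every resource of a type. The plural string and the namespace shown here are assumptions based on the usual Kubernetes convention and the default RHOSO namespace, so confirm them with oc get crd on your cluster before scripting against them:

$ oc get openstackcontrolplanes -n openstack      # plural form (assumed), lists all control planes in the namespace
$ oc describe openstackcontrolplane/compute       # singular form, as shown in the chapter above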
Chapter 16. Configuring SSH to use RSA | Chapter 16. Configuring SSH to use RSA SSH is used to clone Git repositories. By default, the DSA encryption algorithm is provided by Business Central. However, some SSH clients, for example SSH clients in the Fedora 23 environment, use the RSA algorithm instead of the DSA algorithm. Business Central contains a system property that you can use to switch from DSA to RSA if required. Note SSH clients on supported configurations, for example Red Hat Enterprise Linux 7, are not affected by this issue. For a list of supported configurations, see Red Hat Process Automation Manager 7 Supported Configurations . Procedure Complete one of the following tasks to enable this system property: Modify the ~/.ssh/config file on the client side as follows to force the SSH client to accept the deprecated DSA algorithm: Include the -Dorg.uberfire.nio.git.ssh.algorithm=RSA parameter when you start Business Central, for example: | [
"Host <SERVER_IP> HostKeyAlgorithms +ssh-dss",
"./standalone.sh -c standalone-full.xml -Dorg.uberfire.nio.git.ssh.algorithm=RSA"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/ssh-configuring-proc_install-on-eap |
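As an optional check after making either change, you can probe which host key types the Business Central Git SSH server offers. The port shown is an assumption about the default Git SSH port used by Business Central, so substitute the port configured in your environment:

$ ssh-keyscan -p 8001 <SERVER_IP>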
15.6. Exposing GNOME Virtual File Systems to All Other Applications | 15.6. Exposing GNOME Virtual File Systems to All Other Applications In addition to applications built with the GIO library, which can access GVFS mounts directly, GVFS provides a FUSE ( Filesystem in Userspace ) daemon that exposes active GVFS mounts to all other applications. This means that any application can access active GVFS mounts using the standard POSIX (Portable Operating System Interface) APIs as though they were regular file systems. This is useful for applications in which an additional library dependency and the specifics of the new VFS subsystem would be unsuitable or too complex: the FUSE daemon boosts compatibility by exposing active mounts under a single mount point and transparently translating incoming requests to imitate a local file system for applications. Important The translation coming from the different design is not 100% feature-compatible and you may experience difficulties with certain combinations of applications and GVFS back ends. The FUSE daemon starts automatically with the GVFS master daemon and places its mount in /run/user/ UID /gvfs or, as a fallback, in ~/.gvfs . Manual browsing shows that there are individual directories for each GVFS mount. When you open documents from GVFS locations with non-native applications, a transformed path is passed as an argument. Note that native GIO applications automatically translate this path back to a native URI . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/exposing-gvfs
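A brief illustration of how a non-GIO application can reach an active GVFS mount through the FUSE mount point described above; the sftp mount directory name is a made-up example of the naming scheme, and the real entries depend on what is currently mounted on your system:

$ ls /run/user/$(id -u)/gvfs
$ less "/run/user/$(id -u)/gvfs/sftp:host=server.example.com,user=jdoe/notes.txt"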
About Red Hat Trusted Application Pipeline | About Red Hat Trusted Application Pipeline Red Hat Trusted Application Pipeline 1.4 Learn how to secure your software development lifecycle with Red Hat Trusted Application Pipeline. Red Hat Trusted Application Pipeline Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/about_red_hat_trusted_application_pipeline/index |
Red Hat Insights Remediations Guide with FedRAMP | Red Hat Insights Remediations Guide with FedRAMP Red Hat Insights 1-latest Fixing issues on RHEL systems with remediation playbooks Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/red_hat_insights_remediations_guide_with_fedramp/index |
Introduction | Introduction This document provides information about installing, configuring, and managing Linux Virtual Server (LVS) components. LVS provides load balancing through specialized routing techniques that dispatch traffic to a pool of servers. This document does not include information about installing, configuring, and managing Red Hat Cluster software. Information about that is in a separate document. The audience of this document should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of clusters, storage, and server computing. This document is organized as follows: Chapter 1, Linux Virtual Server Overview Chapter 2, Initial LVS Configuration Chapter 3, Setting Up LVS Chapter 4, Configuring the LVS Routers with Piranha Configuration Tool Appendix A, Using LVS with Red Hat Cluster Appendix B, Revision History For more information about Red Hat Enterprise Linux 4, refer to the following resources: Red Hat Enterprise Linux Installation Guide - Provides information regarding installation. Red Hat Enterprise Linux Introduction to System Administration - Provides introductory information for new Red Hat Enterprise Linux system administrators. Red Hat Enterprise Linux System Administration Guide - Provides more detailed information about configuring Red Hat Enterprise Linux to suit your particular needs as a user. Red Hat Enterprise Linux Reference Guide - Provides detailed information suited for more experienced users to reference when needed, as opposed to step-by-step instructions. Red Hat Enterprise Linux Security Guide - Details the planning and the tools involved in creating a secured computing environment for the data center, workplace, and home. For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 4, refer to the following resources: Red Hat Cluster Suite Overview - Provides a high level overview of the Red Hat Cluster Suite. Configuring and Managing a Red Hat Cluster - Provides information about installing, configuring and managing Red Hat Cluster components. LVM Administrator's Guide: Configuration and Administration - Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment. Global File System: Configuration and Administration - Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). Using Device-Mapper Multipath - Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 4. Using GNBD with Global File System - Provides an overview of using Global Network Block Device (GNBD) with Red Hat GFS. Red Hat Cluster Suite Release Notes - Provides information about the current release of Red Hat Cluster Suite. Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/ . 1. Feedback If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla ( http://bugzilla.redhat.com/bugzilla/ ) against the component rh-cs-en . Be sure to mention the manual's identifier: By mentioning this manual's identifier, we know exactly which version of the guide you have. If you have a suggestion for improving the documentation, try to be as specific as possible.
If you have found an error, please include the section number and some of the surrounding text so we can find it easily. | [
"Virtual_Server_Administration(EN)-4.8 (2009-04-23T15:41)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/ch-intro-vsa |
function::user_string_utf32 | function::user_string_utf32 Name function::user_string_utf32 - Retrieves UTF-32 string from user memory Synopsis Arguments addr The user address to retrieve the string from Description This function returns a null terminated UTF-8 string converted from the UTF-32 string at a given user memory address. Reports an error on string copy fault or conversion error. | [
"user_string_utf32:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string-utf32 |
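A minimal SystemTap sketch of how this function might be called; the binary path, the probed function name, and the $msg target variable are illustrative assumptions and must match the actual probe point and argument layout in your own program (target-variable access also requires debuginfo):

probe process("/usr/bin/myapp").function("log_message") {
  # assumption: $msg is a user-space pointer to a UTF-32 encoded, null-terminated string
  printf("message: %s\n", user_string_utf32($msg))
}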
8.101. libwacom | 8.101. libwacom 8.101.1. RHBA-2013:1567 - libwacom bug fix update Updated libwacom packages that fix one bug are now available for Red Hat Enterprise Linux 6. The libwacom packages contain a library that provides access to a tablet model database. The libwacom packages expose the contents of this database to applications, allowing for tablet-specific user interfaces. The libwacom packages allow the GNOME tools to automatically configure screen mappings and calibrations, and provide device-specific configurations. Bug Fix BZ# 847427 Previously, the Wacom Stylus pen was not supported on Lenovo ThinkPad X220 tablets by the libwacom database. Consequently, the pen was not recognized by the gnome-wacom-properties tool, and warning messages were returned. Support for the Wacom Stylus on Lenovo ThinkPad X220 tablets has been added and gnome-wacom-properties is now able to calibrate the tablet. Users of libwacom are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/libwacom |
5.2.23. /proc/mtrr | 5.2.23. /proc/mtrr This file refers to the current Memory Type Range Registers (MTRRs) in use with the system. If the system architecture supports MTRRs, then the /proc/mtrr file may look similar to the following: MTRRs are used with the Intel P6 family of processors (Pentium II and higher) and control processor access to memory ranges. When using a video card on a PCI or AGP bus, a properly configured /proc/mtrr file can increase performance more than 150%. Most of the time, this value is properly configured by default. More information on manually configuring this file can be found locally at the following location: | [
"reg00: base=0x00000000 ( 0MB), size= 256MB: write-back, count=1 reg01: base=0xe8000000 (3712MB), size= 32MB: write-combining, count=1",
"/usr/share/doc/kernel-doc- <version> /Documentation/mtrr.txt"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-mtrr |
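As a hedged illustration of the manual configuration that the referenced mtrr.txt document describes, a write-combining range can be added by writing to the file as root. The base address and size below simply mirror the example register shown above and must be replaced with the address range of your own video card's framebuffer, so consult mtrr.txt before trying this:

cat /proc/mtrr
echo "base=0xe8000000 size=0x2000000 type=write-combining" > /proc/mtrr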
3.3. Search Auto-Completion | 3.3. Search Auto-Completion The Administration Portal provides auto-completion to help you create valid and powerful search queries. As you type each part of a search query, a drop-down list of choices for the part of the search opens below the Search Bar. You can either select from the list and then continue typing/selecting the part of the search, or ignore the options and continue entering your query manually. The following table specifies by example how the Administration Portal auto-completion assists in constructing a query: Hosts: Vms.status = down Table 3.2. Example Search Queries Using Auto-Completion Input List Items Displayed Action h Hosts (1 option only) Select Hosts or type Hosts Hosts: All host properties Type v Hosts: v host properties starting with a v Select Vms or type Vms Hosts: Vms All virtual machine properties Type s Hosts: Vms.s All virtual machine properties beginning with s Select status or type status Hosts: Vms.status = != Select or type = Hosts: Vms.status = All status values Select or type down | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/search_auto_completion |
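Accepting the completions in the table gives the finished query below. The optional sortby clause is an assumption based on the wider search syntax of the Administration Portal, so omit it if your version does not accept it:

Hosts: Vms.status = down sortby name asc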
Chapter 2. Deploying OpenShift Data Foundation on Google Cloud | Chapter 2. Deploying OpenShift Data Foundation on Google Cloud You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Google Cloud installer-provisioned infrastructure. This enables you to create internal cluster resources and it results in internal provisioning of the base services, which helps to make additional storage classes available to applications. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.16 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Be aware that the default storage class of the Google Cloud platform uses hard disk drive (HDD). To use solid state drive (SSD) based disks for better performance, you need to create a storage class, using pd-ssd as shown in the following ssd-storeageclass.yaml example: Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . 
In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set as standard . However, if you created a storage class to use SSD based disks for better performance, you need to select that storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . 
Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.5.1. 
Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) ux-backend-server- * (1 pod on any storage node) * ocs-client-operator -* (1 pod on any storage node) ocs-client-operator-console -* (1 pod on any storage node) ocs-provider-server -* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. 
If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.5.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: faster provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd volumeBindingMode: WaitForFirstConsumer reclaimPolicy: Delete"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_google_cloud/deploying_openshift_data_foundation_on_google_cloud |
3.6. Displaying the Full Cluster Configuration | 3.6. Displaying the Full Cluster Configuration Use the following command to display the full current cluster configuration. | [
"pcs config"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-pcsfullconfig-haar |
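A common follow-on is to capture the output for change tracking or support cases. The backup subcommand in the second line is an assumption about your pcs version (it produces a tarball of the cluster configuration), so check pcs config --help if it is not available:

$ pcs config > cluster-config-$(date +%F).txt
$ pcs config backup cluster-backup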
13.18. Begin Installation | 13.18. Begin Installation When all required sections of the Installation Summary screen have been completed, the admonition at the bottom of the menu screen disappears and the Begin Installation button becomes available. Figure 13.36. Ready to Install Warning Up to this point in the installation process, no lasting changes have been made on your computer. When you click Begin Installation , the installation program will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, return to the relevant section of the Installation Summary screen. To cancel installation completely, click Quit or switch off your computer. To switch off most computers at this stage, press the power button and hold it down for a few seconds. If you have finished customizing your installation and are certain that you want to proceed, click Begin Installation . After you click Begin Installation , allow the installation process to complete. If the process is interrupted, for example, by you switching off or resetting the computer, or by a power outage, you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-write-changes-to-disk-ppc |
Chapter 4. Pipelines | Chapter 4. Pipelines 4.1. Red Hat OpenShift Pipelines release notes Red Hat OpenShift Pipelines is a cloud-native CI/CD experience based on the Tekton project which provides: Standard Kubernetes-native pipeline definitions (CRDs). Serverless pipelines with no CI server management overhead. Extensibility to build images using any Kubernetes tool, such as S2I, Buildah, JIB, and Kaniko. Portability across any Kubernetes distribution. Powerful CLI for interacting with pipelines. Integrated user experience with the Developer perspective of the OpenShift Container Platform web console. For an overview of Red Hat OpenShift Pipelines, see Understanding OpenShift Pipelines . 4.1.1. Compatibility and support matrix Some features in this release are currently in Technology Preview . These experimental features are not intended for production use. In the table, features are marked with the following statuses: TP Technology Preview GA General Availability Table 4.1. Compatibility and support matrix Red Hat OpenShift Pipelines Version Component Version OpenShift Version Support Status Operator Pipelines Triggers CLI Catalog Chains Hub Pipelines as Code 1.10 0.44.x 0.23.x 0.30.x NA 0.15.x (TP) 1.12.x (TP) 0.17.x (GA) 4.10, 4.11, 4.12, 4.13 GA 1.9 0.41.x 0.22.x 0.28.x NA 0.13.x (TP) 1.11.x (TP) 0.15.x (GA) 4.10, 4.11, 4.12, 4.13 GA 1.8 0.37.x 0.20.x 0.24.x NA 0.9.0 (TP) 1.8.x (TP) 0.10.x (TP) 4.10, 4.11, 4.12 GA 1.7 0.33.x 0.19.x 0.23.x 0.33 0.8.0 (TP) 1.7.0 (TP) 0.5.x (TP) 4.9, 4.10, 4.11 GA 1.6 0.28.x 0.16.x 0.21.x 0.28 N/A N/A N/A 4.9 GA 1.5 0.24.x 0.14.x (TP) 0.19.x 0.24 N/A N/A N/A 4.8 GA 1.4 0.22.x 0.12.x (TP) 0.17.x 0.22 N/A N/A N/A 4.7 GA Additionally, support for running Red Hat OpenShift Pipelines on ARM hardware is in Technology Preview . For questions and feedback, you can send an email to the product team at [email protected] . 4.1.2. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 4.1.3. Release notes for Red Hat OpenShift Pipelines General Availability 1.10 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10 is available on OpenShift Container Platform 4.11, 4.12, and 4.13. 4.1.3.1. New features In addition to fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.10. 4.1.3.1.1. Pipelines With this update, you can specify environment variables in a PipelineRun or TaskRun pod template to override or append the variables that are configured in a task or step. Also, you can specify environment variables in a default pod template to use those variables globally for all PipelineRuns and TaskRuns . This update also adds a new default configuration named forbidden-envs to filter environment variables while propagating from pod templates. With this update, custom tasks in pipelines are enabled by default. Note To disable this update, set the enable-custom-tasks flag to false in the feature-flags config custom resource. This update supports the v1beta1.CustomRun API version for custom tasks. This update adds support for the PipelineRun reconciler to create a custom run. 
For example, custom TaskRuns created from PipelineRuns can now use the v1beta1.CustomRun API version instead of v1alpha1.Run , if the custom-task-version feature flag is set to v1beta1 , instead of the default value v1alpha1 . Note You need to update the custom task controller to listen for the *v1beta1.CustomRun API version instead of *v1alpha1.Run in order to respond to v1beta1.CustomRun requests. This update adds a new retries field to the v1beta1.TaskRun and v1.TaskRun specifications. 4.1.3.1.2. Triggers With this update, triggers support the creation of Pipelines , Tasks , PipelineRuns , and TaskRuns objects of the v1 API version along with CustomRun objects of the v1beta1 API version. With this update, GitHub Interceptor blocks a pull request trigger from being executed unless invoked by an owner or with a configurable comment by an owner. Note To enable or disable this update, set the value of the githubOwners parameter to true or false in the GitHub Interceptor configuration file. With this update, GitHub Interceptor has the ability to add a comma delimited list of all files that have changed for the push and pull request events. The list of changed files is added to the changed_files property of the event payload in the top-level extensions field. This update changes the MinVersion of TLS to tls.VersionTLS12 so that triggers run on OpenShift Container Platform when the Federal Information Processing Standards (FIPS) mode is enabled. 4.1.3.1.3. CLI This update adds support to pass a Container Storage Interface (CSI) file as a workspace at the time of starting a Task , ClusterTask or Pipeline . This update adds v1 API support to all CLI commands associated with task, pipeline, pipeline run, and task run resources. Tekton CLI works with both v1beta1 and v1 APIs for these resources. This update adds support for an object type parameter in the start and describe commands. 4.1.3.1.4. Operator This update adds a default-forbidden-env parameter in optional pipeline properties. The parameter includes forbidden environment variables that should not be propagated if provided through pod templates. This update adds support for custom logos in Tekton Hub UI. To add a custom logo, set the value of the customLogo parameter to base64 encoded URI of logo in the Tekton Hub CR. This update increments the version number of the git-clone task to 0.9. 4.1.3.1.5. Tekton Chains Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update adds annotations and labels to the PipelineRun and TaskRun attestations. This update adds a new format named slsa/v1 , which generates the same provenance as the one generated when requesting in the in-toto format. With this update, Sigstore features are moved out from the experimental features. With this update, the predicate.materials function includes image URI and digest information from all steps and sidecars for a TaskRun object. 4.1.3.1.6. Tekton Hub Important Tekton Hub is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update supports installing, upgrading, or downgrading Tekton resources of the v1 API version on the cluster. This update supports adding a custom logo in place of the Tekton Hub logo in UI. This update extends the tkn hub install command functionality by adding a --type artifact flag, which fetches resources from the Artifact Hub and installs them on your cluster. This update adds support tier, catalog, and org information as labels to the resources being installed from Artifact Hub to your cluster. 4.1.3.1.7. Pipelines as Code This update enhances incoming webhook support. For a GitHub application installed on the OpenShift Container Platform cluster, you do not need to provide the git_provider specification for an incoming webhook. Instead, Pipelines as Code detects the secret and use it for the incoming webhook. With this update, you can use the same token to fetch remote tasks from the same host on GitHub with a non-default branch. With this update, Pipelines as Code supports Tekton v1 templates. You can have v1 and v1beta1 templates, which Pipelines as Code reads for PR generation. The PR is created as v1 on cluster. Before this update, OpenShift console UI would use a hardcoded pipeline run template as a fallback template when a runtime template was not found in the OpenShift namespace. This update in the pipelines-as-code config map provides a new default pipeline run template named, pipelines-as-code-template-default for the console to use. With this update, Pipelines as Code supports Tekton Pipelines 0.44.0 minimal status. With this update, Pipelines as Code supports Tekton v1 API, which means Pipelines as Code is now compatible with Tekton v0.44 and later. With this update, you can configure custom console dashboards in addition to configuring a console for OpenShift and Tekton dashboards for k8s. With this update, Pipelines as Code detects the installation of a GitHub application initiated using the tkn pac create repo command and does not require a GitHub webhook if it was installed globally. Before this update, if there was an error on a PipelineRun execution and not on the tasks attached to PipelineRun , Pipelines as Code would not report the failure properly. With this update, Pipelines as Code reports the error properly on the GitHub checks when a PipelineRun could not be created. With this update, Pipelines as Code includes a target_namespace variable, which expands to the currently running namespace where the PipelineRun is executed. With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application. With this update, Pipelines as Code does not report errors when the repository CR was not found. With this update, Pipelines as Code reports an error if multiple pipeline runs with the same name were found. 4.1.3.2. Breaking changes With this update, the prior version of the tkn command is not compatible with Red Hat OpenShift Pipelines 1.10. This update removes support for Cluster and CloudEvent pipeline resources from Tekton CLI. 
You cannot create pipeline resources by using the tkn pipelineresource create command. Also, pipeline resources are no longer supported in the start command of a task, cluster task, or pipeline. This update removes tekton as a provenance format from Tekton Chains. 4.1.3.3. Deprecated and removed features In Red Hat OpenShift Pipelines 1.10, the ClusterTask commands are now deprecated and are planned to be removed in a future release. The tkn task create command is also deprecated with this update. In Red Hat OpenShift Pipelines 1.10, the flags -i and -o that were used with the tkn task start command are now deprecated because the v1 API does not support pipeline resources. In Red Hat OpenShift Pipelines 1.10, the flag -r that was used with the tkn pipeline start command is deprecated because the v1 API does not support pipeline resources. The Red Hat OpenShift Pipelines 1.10 update sets the openshiftDefaultEmbeddedStatus parameter to both with full and minimal embedded status. The flag to change the default embedded status is also deprecated and will be removed. In addition, the pipeline default embedded status will be changed to minimal in a future release. 4.1.3.4. Known issues This update includes the following backward incompatible changes: Removal of the PipelineResources cluster Removal of the PipelineResources cloud event If the pipelines metrics feature does not work after a cluster upgrade, run the following command as a workaround: $ oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print $1}' | xargs oc delete tektoninstallersets With this update, usage of external databases, such as Crunchy PostgreSQL, is not supported on IBM Power, IBM Z, and IBM(R) LinuxONE. Instead, use the default Tekton Hub database. 4.1.3.5. Fixed issues Before this update, the opc pac command generated a runtime error instead of showing any help. This update fixes the opc pac command to show the help message. Before this update, running the tkn pac create repo command needed the webhook details for creating a repository. With this update, the tkn-pac create repo command does not configure a webhook when your GitHub application is installed. Before this update, Pipelines as Code would not report a pipeline run creation error when Tekton Pipelines had issues creating the PipelineRun resource. For example, a non-existing task in a pipeline run would show no status. With this update, Pipelines as Code shows the proper error message coming from Tekton Pipelines along with the task that is missing. This update fixes UI page redirection after a successful authentication. Now, you are redirected to the same page where you had attempted to log in to Tekton Hub. This update fixes the list command with these flags, --all-namespaces and --output=yaml , for a cluster task, an individual task, and a pipeline. This update removes the forward slash at the end of the repo.spec.url URL so that it matches the URL coming from GitHub. Before this update, the marshalJSON function would not marshal a list of objects. With this update, the marshalJSON function marshals the list of objects. With this update, Pipelines as Code lets you bypass GitHub enterprise questions in the CLI bootstrap GitHub application. This update fixes the GitHub collaborator check when your repository has more than 100 users. With this update, the sign and verify commands for a task or pipeline now work without a Kubernetes configuration file.
With this update, the Tekton Operator cleans up leftover pruner cron jobs if the pruner has been skipped on a namespace.
Before this update, the API ConfigMap object would not be updated with a user-configured value for the catalog refresh interval. This update fixes the CATALOG_REFRESH_INTERVAL API in the Tekton Hub CR.
This update fixes reconciling of PipelineRunStatus when changing the EmbeddedStatus feature flag. This update resets the following parameters:
The status.runs and status.taskruns parameters to nil with minimal EmbeddedStatus
The status.childReferences parameter to nil with full EmbeddedStatus
This update adds a conversion configuration to the ResolutionRequest CRD. This update properly configures conversion from the v1alpha1.ResolutionRequest request to the v1beta1.ResolutionRequest request.
This update checks for duplicate workspaces associated with a pipeline task.
This update fixes the default value for enabling resolvers in the code.
This update fixes the conversion of TaskRef and PipelineRef names by using a resolver.
4.1.3.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.6.1. Fixed issues for Pipelines as Code
Before this update, if the source branch information coming from the payload included refs/heads/ but the user-configured target branch only included the branch name, main , in a CEL expression, the push request would fail. With this update, Pipelines as Code passes the push request and triggers a pipeline if either the base branch or the target branch has refs/heads/ in the payload.
Before this update, when a PipelineRun object could not be created, the error received from the Tekton controller was not reported to the user. With this update, Pipelines as Code reports the error messages to the GitHub interface so that users can troubleshoot the errors. Pipelines as Code also reports the errors that occurred during pipeline execution.
With this update, Pipelines as Code does not echo a secret to the GitHub checks interface when it failed to create the secret on the OpenShift Container Platform cluster because of an infrastructure issue.
This update removes the deprecated APIs that are no longer in use from Red Hat OpenShift Pipelines.
4.1.3.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.7.1. Fixed issues
Before this update, an issue in the Tekton Operator prevented the user from setting the value of the enable-api-fields flag to beta . This update fixes the issue. Now, you can set the value of the enable-api-fields flag to beta in the TektonConfig CR.
4.1.3.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.3 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.8.1. Fixed issues
Before this update, the Tekton Operator did not expose the performance configuration fields for any customizations. With this update, as a cluster administrator, you can customize the following performance configuration fields in the TektonConfig CR based on your needs:
disable-ha
buckets
kube-api-qps
kube-api-burst
threads-per-controller
4.1.3.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.4
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.4 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.3.9.1. Fixed issues
This update fixes the bundle resolver conversion issue for the PipelineRef field in a pipeline run. Now, the conversion feature sets the value of the kind field to Pipeline after conversion.
Before this update, the pipelinerun.timeouts field was reset to the timeouts.pipeline value, ignoring the timeouts.tasks and timeouts.finally values. This update fixes the issue and sets the correct default timeout value for a PipelineRun resource.
Before this update, the controller logs contained unnecessary data. This update fixes the issue.
4.1.3.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.10.5
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.10.5 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13.
Important Red Hat OpenShift Pipelines 1.10.5 is only available in the pipelines-1.10 channel on OpenShift Container Platform 4.10, 4.11, 4.12, and 4.13. It is not available in the latest channel for any OpenShift Container Platform version.
4.1.3.10.1. Fixed issues
Before this update, very large pipeline runs could not be listed or deleted by using the oc and tkn commands. This update mitigates this issue by compressing the large annotations that were causing the problem. If a pipeline run is still too large after compression, the same error recurs.
Before this update, only the pod template specified in the pipelineRun.spec.taskRunSpecs[].podTemplate object would be considered for a pipeline run. With this update, the pod template specified in the pipelineRun.spec.podTemplate object is also considered and merged with the template specified in the pipelineRun.spec.taskRunSpecs[].podTemplate object.
4.1.4. Release notes for Red Hat OpenShift Pipelines General Availability 1.9
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.4.1. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.9.
4.1.4.1.1. Pipelines
With this update, you can specify pipeline parameters and results in array and object dictionary forms.
This update provides support for Container Storage Interface (CSI) and projected volumes for your workspace.
With this update, you can specify the stdoutConfig and stderrConfig parameters when defining pipeline steps. Defining these parameters helps to capture standard output and standard error, associated with steps, to local files.
With this update, you can add variables in the steps[].onError event handler, for example, $(params.CONTINUE) .
With this update, you can use the output from the finally task in the PipelineResults definition. For example, $(finally.<pipelinetask-name>.result.<result-name>) , where <pipelinetask-name> denotes the pipeline task name and <result-name> denotes the result name.
This update supports task-level resource requirements for a task run.
With this update, you do not need to recreate parameters that are shared, based on their names, between a pipeline and the defined tasks. This update is part of a developer preview feature.
This update adds support for remote resolution, such as the built-in git, cluster, bundle, and hub resolvers.
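For example, with remote resolution enabled, a pipeline run can reference a task that lives in a Git repository instead of requiring it to exist on the cluster. The following minimal sketch is an assumption-based illustration: it uses the upstream Tekton catalog repository and the git-clone task path as placeholder values, and it presumes that the git resolver is enabled in your configuration.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: remote-task-example
spec:
  workspaces:
    - name: shared-data
      emptyDir: {}
  pipelineSpec:
    workspaces:
      - name: shared-data
    tasks:
      - name: clone
        taskRef:
          resolver: git
          params:
            - name: url
              value: https://github.com/tektoncd/catalog.git    # placeholder catalog repository
            - name: revision
              value: main
            - name: pathInRepo
              value: task/git-clone/0.9/git-clone.yaml           # placeholder path to the task
        params:
          - name: url
            value: https://github.com/example/app.git            # repository to clone (placeholder)
        workspaces:
          - name: output
            workspace: shared-data
The cluster, bundles, and hub resolvers follow the same pattern, differing only in the resolver name and its params.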
4.1.4.1.2. Triggers
This update adds the Interceptor CRD to define NamespacedInterceptor . You can use NamespacedInterceptor in the kind section of the interceptors reference in triggers or in the EventListener specification.
This update enables CloudEvents .
With this update, you can configure the webhook port number when defining a trigger.
This update supports using the trigger eventID as input to TriggerBinding .
This update supports validation and rotation of certificates for the ClusterInterceptor server. Triggers perform certificate validation for core interceptors and rotate a new certificate to ClusterInterceptor when its certificate expires.
4.1.4.1.3. CLI
This update supports showing annotations in the describe command.
This update supports showing pipeline, tasks, and timeout in the pr describe command.
This update adds flags to provide pipeline, tasks, and timeout in the pipeline start command.
This update supports showing the presence of a workspace, optional or mandatory, in the describe command of a task and pipeline.
This update adds the timestamps flag to show logs with a timestamp.
This update adds a new flag --ignore-running-pipelinerun , which ignores the deletion of a TaskRun associated with a PipelineRun .
This update adds support for experimental commands. This update also adds the experimental subcommands sign and verify to the tkn CLI tool.
This update makes the Z shell (Zsh) completion feature usable without generating any files.
This update introduces a new CLI tool called opc . It is anticipated that an upcoming release will replace the tkn CLI tool with opc .
Important The new CLI tool opc is a Technology Preview feature. opc will be a replacement for tkn with additional Red Hat OpenShift Pipelines specific features, which do not necessarily fit in tkn .
4.1.4.1.4. Operator
With this update, Pipelines as Code is installed by default. You can disable Pipelines as Code by using the -p flag:
$ oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}'
With this update, you can also modify Pipelines as Code configurations in the TektonConfig CRD.
With this update, if you disable the developer perspective, the Operator does not install developer console-related custom resources.
This update includes ClusterTriggerBinding support for Bitbucket Server and Bitbucket Cloud and helps you to reuse a TriggerBinding across your entire cluster.
4.1.4.1.5. Resolvers
Important Resolvers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
With this update, you can configure pipeline resolvers in the TektonConfig CRD. You can enable or disable these pipeline resolvers: enable-bundles-resolver , enable-cluster-resolver , enable-git-resolver , and enable-hub-resolver .
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    enable-bundles-resolver: true
    enable-cluster-resolver: true
    enable-git-resolver: true
    enable-hub-resolver: true
...
You can also provide resolver-specific configurations in TektonConfig .
For example, you can define the following fields in the map[string]string format to set configurations for individual resolvers: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: bundles-resolver-config: default-service-account: pipelines cluster-resolver-config: default-namespace: test git-resolver-config: server-url: localhost.com hub-resolver-config: default-tekton-hub-catalog: tekton ... 4.1.4.1.6. Tekton Chains Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Before this update, only Open Container Initiative (OCI) images were supported as outputs of TaskRun in the in-toto provenance agent. This update adds in-toto provenance metadata as outputs with these suffixes, ARTIFACT_URI and ARTIFACT_DIGEST . Before this update, only TaskRun attestations were supported. This update adds support for PipelineRun attestations as well. This update adds support for Tekton Chains to get the imgPullSecret parameter from the pod template. This update helps you to configure repository authentication based on each pipeline run or task run without modifying the service account. 4.1.4.1.7. Tekton Hub Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, as an administrator, you can use an external database, such as Crunchy PostgreSQL with Tekton Hub, instead of using the default Tekton Hub database. This update helps you to perform the following actions: Specify the coordinates of an external database to be used with Tekton Hub Disable the default Tekton Hub database deployed by the Operator This update removes the dependency of config.yaml from external Git repositories and moves the complete configuration data into the API ConfigMap . This update helps an administrator to perform the following actions: Add the configuration data, such as categories, catalogs, scopes, and defaultScopes in the Tekton Hub custom resource. Modify Tekton Hub configuration data on the cluster. All modifications are preserved upon Operator upgrades. Update the list of catalogs for Tekton Hub Change the categories for Tekton Hub Note If you do not add any configuration data, you can use the default data in the API ConfigMap for Tekton Hub configurations. 4.1.4.1.8. Pipelines as Code This update adds support for concurrency limit in the Repository CRD to define the maximum number of PipelineRuns running for a repository at a time. The PipelineRuns from a pull request or a push event are queued in alphabetical order. 
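For example, to apply the concurrency limit described above, a Repository resource can cap the number of simultaneous pipeline runs for one repository. The following is a minimal sketch with placeholder names and URL, assuming the concurrency_limit field in the Repository spec:
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: example-repo                      # placeholder name
  namespace: example-pipelines            # placeholder namespace
spec:
  url: https://github.com/example/app     # placeholder repository URL
  concurrency_limit: 2                    # at most two PipelineRuns for this repository run at a time; others are queued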
This update adds a new command tkn pac logs for showing the logs of the latest pipeline run for a repository. This update supports advanced event matching on file path for push and pull requests to GitHub and GitLab. For example, you can use the Common Expression Language (CEL) to run a pipeline only if a path has changed for any markdown file in the docs directory. ... annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && "docs/*.md".pathChanged() With this update, you can reference a remote pipeline in the pipelineRef: object using annotations. With this update, you can auto-configure new GitHub repositories with Pipelines as Code, which sets up a namespace and creates a Repository CRD for your GitHub repository. With this update, Pipelines as Code generates metrics for PipelineRuns with provider information. This update provides the following enhancements for the tkn-pac plugin: Detects running pipelines correctly Fixes showing duration when there is no failure completion time Shows an error snippet and highlights the error regular expression pattern in the tkn-pac describe command Adds the use-real-time switch to the tkn-pac ls and tkn-pac describe commands Imports the tkn-pac logs documentation Shows pipelineruntimeout as a failure in the tkn-pac ls and tkn-pac describe commands. Show a specific pipeline run failure with the --target-pipelinerun option. With this update, you can view the errors for your pipeline run in the form of a version control system (VCS) comment or a small snippet in the GitHub checks. With this update, Pipelines as Code optionally can detect errors inside the tasks if they are of a simple format and add those tasks as annotations in GitHub. This update is part of a developer preview feature. This update adds the following new commands: tkn-pac webhook add : Adds a webhook to project repository settings and updates the webhook.secret key in the existing k8s Secret object without updating the repository. tkn-pac webhook update-token : Updates provider token for an existing k8s Secret object without updating the repository. This update enhances functionality of the tkn-pac create repo command, which creates and configures webhooks for GitHub, GitLab, and BitbucketCloud along with creating repositories. With this update, the tkn-pac describe command shows the latest fifty events in a sorted order. This update adds the --last option to the tkn-pac logs command. With this update, the tkn-pac resolve command prompts for a token on detecting a git_auth_secret in the file template. With this update, Pipelines as Code hides secrets from log snippets to avoid exposing secrets in the GitHub interface. With this update, the secrets automatically generated for git_auth_secret are an owner reference with PipelineRun . The secrets get cleaned with the PipelineRun , not after the pipeline run execution. This update adds support to cancel a pipeline run with the /cancel comment. Before this update, the GitHub apps token scoping was not defined and tokens would be used on every repository installation. With this update, you can scope the GitHub apps token to the target repository using the following parameters: secret-github-app-token-scoped : Scopes the app token to the target repository, not to every repository the app installation has access to. secret-github-app-scope-extra-repos : Customizes the scoping of the app token with an additional owner or repository. 
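As a rough sketch of how the token-scoping parameters above might be applied, assuming they are set as keys in the pipelines-as-code config map in the openshift-pipelines namespace (the extra repository value is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code
  namespace: openshift-pipelines
data:
  # Restrict the generated GitHub App token to the repository that triggered the run
  secret-github-app-token-scoped: "true"
  # Optionally extend the token scope to an additional owner or repository (illustrative value)
  secret-github-app-scope-extra-repos: "exampleorg/shared-tasks"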
With this update, you can use Pipelines as Code with your own Git repositories that are hosted on GitLab. With this update, you can access pipeline execution details in the form of kubernetes events in your namespace. These details help you to troubleshoot pipeline errors without needing access to admin namespaces. This update supports authentication of URLs in the Pipelines as Code resolver with the Git provider. With this update, you can set the name of the hub catalog by using a setting in the pipelines-as-code config map. With this update, you can set the maximum and default limits for the max-keep-run parameter. This update adds documents on how to inject custom Secure Sockets Layer (SSL) certificates in Pipelines as Code to let you connect to provider instance with custom certificates. With this update, the PipelineRun resource definition has the log URL included as an annotation. For example, the tkn-pac describe command shows the log link when describing a PipelineRun . With this update, tkn-pac logs show repository name, instead of PipelineRun name. 4.1.4.2. Breaking changes With this update, the Conditions custom resource definition (CRD) type has been removed. As an alternative, use the WhenExpressions instead. With this update, support for tekton.dev/v1alpha1 API pipeline resources, such as Pipeline, PipelineRun, Task, Clustertask, and TaskRun has been removed. With this update, the tkn-pac setup command has been removed. Instead, use the tkn-pac webhook add command to re-add a webhook to an existing Git repository. And use the tkn-pac webhook update-token command to update the personal provider access token for an existing Secret object in the Git repository. With this update, a namespace that runs a pipeline with default settings does not apply the pod-security.kubernetes.io/enforce:privileged label to a workload. 4.1.4.3. Deprecated and removed features In the Red Hat OpenShift Pipelines 1.9.0 release, ClusterTasks are deprecated and planned to be removed in a future release. As an alternative, you can use Cluster Resolver . In the Red Hat OpenShift Pipelines 1.9.0 release, the use of the triggers and the namespaceSelector fields in a single EventListener specification is deprecated and planned to be removed in a future release. You can use these fields in different EventListener specifications successfully. In the Red Hat OpenShift Pipelines 1.9.0 release, the tkn pipelinerun describe command does not display timeouts for the PipelineRun resource. In the Red Hat OpenShift Pipelines 1.9.0 release, the PipelineResource` custom resource (CR) is deprecated. The PipelineResource CR was a Tech Preview feature and part of the tekton.dev/v1alpha1 API. In the Red Hat OpenShift Pipelines 1.9.0 release, custom image parameters from cluster tasks are deprecated. As an alternative, you can copy a cluster task and use your custom image in it. 4.1.4.4. Known issues The chains-secret and chains-config config maps are removed after you uninstall the Red Hat OpenShift Pipelines Operator. As they contain user data, they should be preserved and not deleted. When running the tkn pac set of commands on Windows, you may receive the following error message: Command finished with error: not supported by Windows. Workaround: Set the NO_COLOR environment variable to true . Running the tkn pac resolve -f <filename> | oc create -f command may not provide expected results, if the tkn pac resolve command uses a templated parameter value to function. 
Workaround: To mitigate this issue, save the output of tkn pac resolve in a temporary file by running the tkn pac resolve -f <filename> -o tempfile.yaml command and then run the oc create -f tempfile.yaml command. For example, tkn pac resolve -f <filename> -o /tmp/pull-request-resolved.yaml && oc create -f /tmp/pull-request-resolved.yaml .
4.1.4.5. Fixed issues
Before this update, after replacing an empty array, the original array returned an empty string, rendering the parameters inside it invalid. With this update, this issue is resolved and the original array returns as empty.
Before this update, if duplicate secrets were present in a service account for a pipeline run, it resulted in failure in task pod creation. With this update, this issue is resolved and the task pod is created successfully even if duplicate secrets are present in a service account.
Before this update, by looking at the TaskRun's spec.StatusMessage field, users could not distinguish whether the TaskRun had been cancelled by the user or by a PipelineRun that was part of it. With this update, this issue is resolved and users can distinguish the status of the TaskRun by looking at the TaskRun's spec.StatusMessage field.
Before this update, webhook validation was removed on deletion of old versions of invalid objects. With this update, this issue is resolved.
Before this update, if you set the timeouts.pipeline parameter to 0 , you could not set the timeouts.tasks parameter or the timeouts.finally parameter. This update resolves the issue. Now, when you set the timeouts.pipeline parameter value, you can set the value of either the timeouts.tasks parameter or the timeouts.finally parameter. For example:
kind: PipelineRun
spec:
  timeouts:
    pipeline: "0"  # No timeout
    tasks: "0h3m0s"
Before this update, a race condition could occur if another tool updated labels or annotations on a PipelineRun or TaskRun. With this update, this issue is resolved and you can merge labels or annotations.
Before this update, the log keys did not match the keys used by the pipeline controllers. With this update, this issue has been resolved and the log keys have been updated to match the log stream of the pipeline controllers. The keys in logs have been changed from "ts" to "timestamp", from "level" to "severity", and from "message" to "msg".
Before this update, if a PipelineRun was deleted with an unknown status, an error message was not generated. With this update, this issue is resolved and an error message is generated.
Before this update, to access bundle commands like list and push , it was required to use the kubeconfig file. With this update, this issue has been resolved and the kubeconfig file is not required to access bundle commands.
Before this update, if the parent PipelineRun was running while deleting TaskRuns, then the TaskRuns would be deleted. With this update, this issue is resolved and the TaskRuns are not deleted if the parent PipelineRun is running.
Before this update, if the user attempted to build a bundle with more objects than the pipeline controller permitted, the Tekton CLI did not display an error message. With this update, this issue is resolved and the Tekton CLI displays an error message if the user attempts to build a bundle with more objects than the limit permitted in the pipeline controller.
Before this update, if namespaces were removed from the cluster, then the operator did not remove namespaces from the ClusterInterceptor ClusterRoleBinding subjects.
With this update, this issue has been resolved, and the operator removes the namespaces from the ClusterInterceptor ClusterRoleBinding subjects.
Before this update, the default installation of the Red Hat OpenShift Pipelines Operator resulted in the pipelines-scc-rolebinding security context constraint (SCC) role binding resource remaining in the cluster. With this update, the default installation of the Red Hat OpenShift Pipelines Operator results in the pipelines-scc-rolebinding security context constraint (SCC) role binding resource being removed from the cluster.
Before this update, Pipelines as Code did not get updated values from the Pipelines as Code ConfigMap object. With this update, this issue is fixed and the Pipelines as Code ConfigMap object looks for any new changes.
Before this update, the Pipelines as Code controller did not wait for the tekton.dev/pipeline label to be updated before adding the checkrun id label, which would cause race conditions. With this update, the Pipelines as Code controller waits for the tekton.dev/pipeline label to be updated and then adds the checkrun id label, which helps to avoid race conditions.
Before this update, the tkn-pac create repo command did not override a PipelineRun if it already existed in the Git repository. With this update, the tkn-pac create repo command is fixed to override a PipelineRun if it exists in the Git repository, which resolves the issue.
Before this update, the tkn pac describe command did not display reasons for every message. With this update, this issue is fixed and the tkn pac describe command displays reasons for every message.
Before this update, a pull request failed if the user in the annotation provided values by using a regex form, for example, refs/head/rel-* . The pull request failed because it was missing refs/heads in its base branch. With this update, the prefix is added and checked so that it matches. This resolves the issue and the pull request is successful.
4.1.4.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.1
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.1 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.4.7. Fixed issues
Before this update, the tkn pac repo list command did not run on Microsoft Windows. This update fixes the issue, and now you can run the tkn pac repo list command on Microsoft Windows.
Before this update, the Pipelines as Code watcher did not receive all the configuration change events. With this update, the Pipelines as Code watcher is updated, and now the Pipelines as Code watcher does not miss the configuration change events.
Before this update, the pods created by Pipelines as Code, such as TaskRuns or PipelineRuns, could not access custom certificates exposed by the user in the cluster. This update fixes the issue, and you can now access custom certificates from the TaskRuns or PipelineRuns pods in the cluster.
Before this update, on a cluster enabled with FIPS, the tekton-triggers-core-interceptors core interceptor used in the Trigger resource did not function after the Pipelines Operator was upgraded to version 1.9. This update resolves the issue. Now, OpenShift uses MinTLS 1.2 for all its components. As a result, the tekton-triggers-core-interceptors core interceptor updates to TLS version 1.2 and its functionality runs accurately.
Before this update, when using a pipeline run with an internal OpenShift image registry, the URL to the image had to be hardcoded in the pipeline run definition.
For example:
...
- name: IMAGE_NAME
  value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'
...
When using a pipeline run in the context of Pipelines as Code, such hardcoded values prevented the pipeline run definitions from being used in different clusters and namespaces. With this update, you can use the dynamic template variables instead of hardcoding the values for namespaces and pipeline run names to generalize pipeline run definitions. For example:
...
- name: IMAGE_NAME
  value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/$(context.pipelineRun.name)'
...
Before this update, Pipelines as Code used the same GitHub token to fetch a remote task available on the same host only on the default GitHub branch. This update resolves the issue. Now Pipelines as Code uses the same GitHub token to fetch a remote task from any GitHub branch.
4.1.4.8. Known issues
The value for CATALOG_REFRESH_INTERVAL , a field in the Hub API ConfigMap object used in the Tekton Hub CR, is not updated with a custom value provided by the user. Workaround: None. You can track the issue SRVKP-2854 .
4.1.4.9. Breaking changes
With this update, an OLM misconfiguration issue has been introduced, which prevents the upgrade of OpenShift Container Platform. This issue will be fixed in a future release.
4.1.4.10. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.2
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.2 is available on OpenShift Container Platform 4.11, 4.12, and 4.13.
4.1.4.11. Fixed issues
Before this update, an OLM misconfiguration issue had been introduced in the previous version of the release, which prevented the upgrade of OpenShift Container Platform. With this update, this misconfiguration issue has been fixed.
4.1.4.12. Release notes for Red Hat OpenShift Pipelines General Availability 1.9.3
With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.9.3 is available on OpenShift Container Platform 4.10 in addition to 4.11, 4.12, and 4.13.
4.1.4.13. Fixed issues
This update fixes the performance issues for huge pipelines. Now, the CPU usage is reduced by 61% and the memory usage is reduced by 44%.
Before this update, a pipeline run would fail if a task did not run because of its when expression. This update fixes the issue by preventing the validation of a skipped task result in pipeline results. Now, the pipeline result is not emitted and the pipeline run does not fail because of a missing result.
This update fixes the pipelineref.bundle conversion to the bundle resolver for the v1beta1 API. Now, the conversion feature sets the value of the kind field to Pipeline after conversion.
Before this update, an issue in the Pipelines Operator prevented the user from setting the value of the spec.pipeline.enable-api-fields field to beta . This update fixes the issue. Now, you can set the value to beta along with alpha and stable in the TektonConfig custom resource.
Before this update, when Pipelines as Code could not create a secret due to a cluster error, it would show the temporary token on the GitHub check run, which is public. This update fixes the issue. Now, the token is no longer displayed on the GitHub checks interface when the creation of the secret fails.
4.1.4.14. Known issues
There is currently a known issue with the stop option for pipeline runs in the OpenShift Container Platform web console.
The stop option in the Actions drop-down list is not working as expected and does not cancel the pipeline run. There is currently a known issue with upgrading to Pipelines version 1.9.x due to a failing custom resource definition conversion. Workaround: Before upgrading to Pipelines version 1.9.x, perform the step mentioned in the solution on the Red Hat Customer Portal. 4.1.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.8 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8 is available on OpenShift Container Platform 4.10, 4.11, and 4.12. 4.1.5.1. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.8. 4.1.5.1.1. Pipelines With this update, you can run Red Hat OpenShift Pipelines GA 1.8 and later on an OpenShift Container Platform cluster that is running on ARM hardware. This includes support for ClusterTask resources and the tkn CLI tool. Important Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . This update implements Step and Sidecar overrides for TaskRun resources. This update adds minimal TaskRun and Run statuses within PipelineRun statuses. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . With this update, the graceful termination of pipeline runs feature is promoted from an alpha feature to a stable feature. As a result, the previously deprecated PipelineRunCancelled status remains deprecated and is planned to be removed in a future release. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. With this update, you can specify the workspace for a pipeline task by using the name of the workspace. This change makes it easier to specify a shared workspace for a pair of Pipeline and PipelineTask resources. You can also continue to map workspaces explicitly. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . With this update, parameters in embedded specifications are propagated without mutations. With this update, you can specify the required metadata of a Task resource referenced by a PipelineRun resource by using annotations and labels. This way, Task metadata that depends on the execution context is available during the pipeline run. This update adds support for object or dictionary types in params and results values. This change affects backward compatibility and sometimes breaks forward compatibility, such as using an earlier client with a later Red Hat OpenShift Pipelines version. This update changes the ArrayOrStruct structure, which affects projects that use the Go language API as a library. 
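To illustrate the object dictionary support mentioned earlier in this section, the following minimal sketch declares an object parameter on a task and reads its keys. The names are illustrative, and in this release the feature may require setting the enable-api-fields field to alpha in the TektonConfig custom resource definition:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: object-param-example   # illustrative name
spec:
  params:
    - name: gitrepo
      type: object
      properties:
        url:
          type: string
        revision:
          type: string
  steps:
    - name: show-values
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        # Individual keys of an object parameter are referenced as $(params.<object>.<key>)
        echo "Cloning $(params.gitrepo.url) at revision $(params.gitrepo.revision)"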
This update adds a SkippingReason value to the SkippedTasks field of the PipelineRun status fields so that users know why a given PipelineTask was skipped.
This update supports an alpha feature in which you can use an array type for emitting results from a Task object. The result type is changed from string to ArrayOrString . For example, a task can specify a type to produce an array result:
kind: Task
apiVersion: tekton.dev/v1beta1
metadata:
  name: write-array
  annotations:
    description: |
      A simple task that writes an array
spec:
  results:
    - name: array-results
      type: array
      description: The array results
...
Additionally, you can run a task script to populate the results with an array:
$ echo -n "[\"hello\",\"world\"]" | tee $(results.array-results.path)
To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha . This feature is in progress and is part of TEP-0076.
4.1.5.1.2. Triggers
This update transitions the TriggerGroups field in the EventListener specification from an alpha feature to a stable feature. Using this field, you can specify a set of interceptors before selecting and running a group of triggers. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition.
With this update, the Trigger resource supports end-to-end secure connections by running the ClusterInterceptor server using HTTPS.
4.1.5.1.3. CLI
With this update, you can use the tkn taskrun export command to export a live task run from a cluster to a YAML file, which you can use to import the task run to another cluster.
With this update, you can add the -o name flag to the tkn pipeline start command to print the name of the pipeline run right after it starts.
This update adds a list of available plug-ins to the output of the tkn --help command.
With this update, while deleting a pipeline run or task run, you can use both the --keep and --keep-since flags together.
With this update, you can use Cancelled as the value of the spec.status field rather than the deprecated PipelineRunCancelled value.
4.1.5.1.4. Operator
With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom database rather than the default database.
With this update, as a cluster administrator, if you enable your local Tekton Hub instance, it periodically refreshes the database so that changes in the catalog appear in the Tekton Hub web console. You can adjust the period between refreshes. Previously, to add the tasks and pipelines in the catalog to the database, you performed that task manually or set up a cron job to do it for you.
With this update, you can install and run a Tekton Hub instance with minimal configuration. This way, you can start working with your teams to decide which additional customizations they might want.
This update adds GIT_SSL_CAINFO to the git-clone task so you can clone secured repositories.
4.1.5.1.5. Tekton Chains
Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, you can log in to a vault by using OIDC rather than a static token. This change means that Spire can generate the OIDC credential so that only trusted workloads are allowed to log in to the vault. Additionally, you can pass the vault address as a configuration value rather than inject it as an environment variable. The chains-config config map for Tekton Chains in the openshift-pipelines namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator because directly updating the config map is not supported when installed by using the Red Hat OpenShift Pipelines Operator. However, with this update, you can configure Tekton Chains by using the TektonChain custom resource. This feature enables your configuration to persist after upgrading, unlike the chains-config config map, which gets overwritten during upgrades. 4.1.5.1.6. Tekton Hub Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, if you install a fresh instance of Tekton Hub by using the Operator, the Tekton Hub login is disabled by default. To enable the login and rating features, you must create the Hub API secret while installing Tekton Hub. Note Because Tekton Hub login was enabled by default in Red Hat OpenShift Pipelines 1.7, if you upgrade the Operator, the login is enabled by default in Red Hat OpenShift Pipelines 1.8. To disable this login, see Disabling Tekton Hub login after upgrading from OpenShift Pipelines 1.7.x --> 1.8.x With this update, as an administrator, you can configure your local Tekton Hub instance to use a custom PostgreSQL 13 database rather than the default database. To do so, create a Secret resource named tekton-hub-db . For example: apiVersion: v1 kind: Secret metadata: name: tekton-hub-db labels: app: tekton-hub-db type: Opaque stringData: POSTGRES_HOST: <hostname> POSTGRES_DB: <database_name> POSTGRES_USER: <username> POSTGRES_PASSWORD: <password> POSTGRES_PORT: <listening_port_number> With this update, you no longer need to log in to the Tekton Hub web console to add resources from the catalog to the database. Now, these resources are automatically added when the Tekton Hub API starts running for the first time. This update automatically refreshes the catalog every 30 minutes by calling the catalog refresh API job. This interval is user-configurable. 4.1.5.1.7. Pipelines as Code Important Pipelines as Code (PAC) is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, as a developer, you get a notification from the tkn-pac CLI tool if you try to add a duplicate repository to a Pipelines as Code run. When you enter tkn pac create repository , each repository must have a unique URL. This notification also helps prevent hijacking exploits. With this update, as a developer, you can use the new tkn-pac setup cli command to add a Git repository to Pipelines as Code by using the webhook mechanism. This way, you can use Pipelines as Code even when using GitHub Apps is not feasible. This capability includes support for repositories on GitHub, GitLab, and BitBucket. With this update, Pipelines as Code supports GitLab integration with features such as the following: ACL (Access Control List) on project or group /ok-to-test support from allowed users /retest support. With this update, you can perform advanced pipeline filtering with Common Expression Language (CEL). With CEL, you can match pipeline runs with different Git provider events by using annotations in the PipelineRun resource. For example: ... annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch == "main" && source_branch == "wip" Previously, as a developer, you could have only one pipeline run in your .tekton directory for each Git event, such as a pull request. With this update, you can have multiple pipeline runs in your .tekton directory. The web console displays the status and reports of the runs. The pipeline runs operate in parallel and report back to the Git provider interface. With this update, you can test or retest a pipeline run by commenting /test or /retest on a pull request. You can also specify the pipeline run by name. For example, you can enter /test <pipelinerun_name> or /retest <pipelinerun-name> . With this update, you can delete a repository custom resource and its associated secrets by using the new tkn-pac delete repository command. 4.1.5.2. Breaking changes This update changes the default metrics level of TaskRun and PipelineRun resources to the following values: apiVersion: v1 kind: ConfigMap metadata: name: config-observability namespace: tekton-pipelines labels: app.kubernetes.io/instance: default app.kubernetes.io/part-of: tekton-pipelines data: _example: | ... metrics.taskrun.level: "task" metrics.taskrun.duration-type: "histogram" metrics.pipelinerun.level: "pipeline" metrics.pipelinerun.duration-type: "histogram" With this update, if an annotation or label is present in both Pipeline and PipelineRun resources, the value in the Run type takes precedence. The same is true if an annotation or label is present in Task and TaskRun resources. In Red Hat OpenShift Pipelines 1.8, the previously deprecated PipelineRun.Spec.ServiceAccountNames field has been removed. Use the PipelineRun.Spec.TaskRunSpecs field instead. In Red Hat OpenShift Pipelines 1.8, the previously deprecated TaskRun.Status.ResourceResults.ResourceRef field has been removed. Use the TaskRun.Status.ResourceResults.ResourceName field instead. In Red Hat OpenShift Pipelines 1.8, the previously deprecated Conditions resource type has been removed. Remove the Conditions resource from Pipeline resource definitions that include it. Use when expressions in PipelineRun definitions instead. For Tekton Chains, the tekton-provenance format has been removed in this release. 
Use the in-toto format by setting "artifacts.taskrun.format": "in-toto" in the TektonChain custom resource instead.
Red Hat OpenShift Pipelines 1.7.x shipped with Pipelines as Code 0.5.x. The current update ships with Pipelines as Code 0.10.x. This change creates a new route in the openshift-pipelines namespace for the new controller. You must update this route in GitHub Apps or webhooks that use Pipelines as Code. To fetch the route, use the following command:
$ oc get route -n openshift-pipelines pipelines-as-code-controller \
  --template='https://{{ .spec.host }}'
With this update, Pipelines as Code renames the default secret keys for the Repository custom resource definition (CRD). In your CRD, replace token with provider.token , and replace secret with webhook.secret .
With this update, Pipelines as Code replaces a special template variable with one that supports multiple pipeline runs for private repositories. In your pipeline runs, replace secret: pac-git-basic-auth-{{repo_owner}}-{{repo_name}} with secret: {{ git_auth_secret }} .
With this update, Pipelines as Code updates the following commands in the tkn-pac CLI tool:
Replace tkn pac repository create with tkn pac create repository .
Replace tkn pac repository delete with tkn pac delete repository .
Replace tkn pac repository list with tkn pac list .
4.1.5.3. Deprecated and removed features
Starting with OpenShift Container Platform 4.11, the preview and stable channels for installing and upgrading the Red Hat OpenShift Pipelines Operator are removed. To install and upgrade the Operator, use the appropriate pipelines-<version> channel, or the latest channel for the most recent stable version. For example, to install the Pipelines Operator version 1.8.x , use the pipelines-1.8 channel.
Note In OpenShift Container Platform 4.10 and earlier versions, you can use the preview and stable channels for installing and upgrading the Operator.
Support for the tekton.dev/v1alpha1 API version, which was deprecated in Red Hat OpenShift Pipelines GA 1.6, is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release. This change affects the pipeline component, which includes the TaskRun , PipelineRun , Task , Pipeline , and similar tekton.dev/v1alpha1 resources. As an alternative, update existing resources to use apiVersion: tekton.dev/v1beta1 as described in Migrating From Tekton v1alpha1 to Tekton v1beta1 . Bug fixes and support for the tekton.dev/v1alpha1 API version are provided only through the end of the current GA 1.8 lifecycle.
Important For the Tekton Operator , the operator.tekton.dev/v1alpha1 API version is not deprecated. You do not need to make changes to this value.
In Red Hat OpenShift Pipelines 1.8, the PipelineResource custom resource (CR) is available but no longer supported. The PipelineResource CR was a Tech Preview feature and part of the tekton.dev/v1alpha1 API, which had been deprecated and planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release.
In Red Hat OpenShift Pipelines 1.8, the Condition custom resource (CR) is removed. The Condition CR was part of the tekton.dev/v1alpha1 API, which has been deprecated and is planned to be removed in the upcoming Red Hat OpenShift Pipelines GA 1.9 release.
In Red Hat OpenShift Pipelines 1.8, the gcr.io image for gsutil has been removed. This removal might break clusters with Pipeline resources that depend on this image. Bug fixes and support are provided only through the end of the Red Hat OpenShift Pipelines 1.7 lifecycle.
In Red Hat OpenShift Pipelines 1.8, the PipelineRun.Status.TaskRuns and PipelineRun.Status.Runs fields are deprecated and are planned to be removed in a future release. See TEP-0100: Embedded TaskRuns and Runs Status in PipelineRuns . In Red Hat OpenShift Pipelines 1.8, the pipelineRunCancelled state is deprecated and planned to be removed in a future release. Graceful termination of PipelineRun objects is now promoted from an alpha feature to a stable feature. (See TEP-0058: Graceful Pipeline Run Termination .) As an alternative, you can use the Cancelled state, which replaces the pipelineRunCancelled state. You do not need to make changes to your Pipeline and Task resources. If you have tools that cancel pipeline runs, you must update tools in the release. This change also affects tools such as the CLI, IDE extensions, and so on, so that they support the new PipelineRun statuses. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. In Red Hat OpenShift Pipelines 1.8, the timeout field in PipelineRun has been deprecated. Instead, use the PipelineRun.Timeouts field, which is now promoted from an alpha feature to a stable feature. Because this feature is available by default, you no longer need to set the pipeline.enable-api-fields field to alpha in the TektonConfig custom resource definition. In Red Hat OpenShift Pipelines 1.8, init containers are omitted from the LimitRange object's default request calculations. 4.1.5.4. Known issues The s2i-nodejs pipeline cannot use the nodejs:14-ubi8-minimal image stream to perform source-to-image (S2I) builds. Using that image stream produces an error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127 message. Workaround: Use nodejs:14-ubi8 rather than the nodejs:14-ubi8-minimal image stream. When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. Workaround: Specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11 . Tip Before you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub , verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name> On ARM, IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported. Implicit parameter mapping incorrectly passes parameters from the top-level Pipeline or PipelineRun definitions to the taskRef tasks. Mapping should only occur from a top-level resource to tasks with in-line taskSpec specifications. This issue only affects clusters where this feature was enabled by setting the enable-api-fields field to alpha in the pipeline section of the TektonConfig custom resource definition. 4.1.5.5. Fixed issues Before this update, the metrics for pipeline runs in the Developer view of the web console were incomplete and outdated. With this update, the issue has been fixed so that the metrics are correct. Before this update, if a pipeline had two parallel tasks that failed and one of them had retries=2 , the final tasks never ran, and the pipeline timed out and failed to run. 
For example, the pipelines-operator-subscription task failed intermittently with the following error message: Unable to connect to the server: EOF . With this update, the issue has been fixed so that the final tasks always run.
Before this update, if a pipeline run stopped because a task run failed, other task runs might not complete their retries. As a result, no finally tasks were scheduled, which caused the pipeline to hang. This update resolves the issue. TaskRuns and Run objects can retry when a pipeline run has stopped, even by graceful stopping, so that pipeline runs can complete.
This update changes how resource requirements are calculated when one or more LimitRange objects are present in the namespace where a TaskRun object exists. The scheduler now considers step containers and excludes all other app containers, such as sidecar containers, when factoring requests from LimitRange objects.
Before this update, under specific conditions, the flag package might incorrectly parse a subcommand immediately following a double dash flag terminator, -- . In that case, it ran the entrypoint subcommand rather than the actual command. This update fixes this flag-parsing issue so that the entrypoint runs the correct command.
Before this update, the controller might generate multiple panics if pulling an image failed, or its pull status was incomplete. This update fixes the issue by checking the step.ImageID value rather than the status.TaskSpec value.
Before this update, canceling a pipeline run that contained an unscheduled custom task produced a PipelineRunCouldntCancel error. This update fixes the issue. You can cancel a pipeline run that contains an unscheduled custom task without producing that error.
Before this update, if the <NAME> in $(params["<NAME>"]) or $(params['<NAME>']) contained a dot character ( . ), any part of the name to the right of the dot was not extracted. For example, from $(params["org.ipsum.lorem"]) , only org was extracted. This update fixes the issue so that $(params) fetches the complete value. For example, $(params["org.ipsum.lorem"]) and $(params['org.ipsum.lorem']) are valid and the entire value of <NAME> , org.ipsum.lorem , is extracted. It also throws an error if <NAME> is not enclosed in single or double quotes. For example, $(params.org.ipsum.lorem) is not valid and generates a validation error.
With this update, Trigger resources support custom interceptors and ensure that the port of the custom interceptor service is the same as the port in the ClusterInterceptor definition file.
Before this update, the tkn version command for Tekton Chains and Operator components did not work correctly. This update fixes the issue so that the command works correctly and returns version information for those components.
Before this update, if you ran a tkn pr delete --ignore-running command and a pipeline run did not have a status.condition value, the tkn CLI tool produced a null-pointer error (NPE). This update fixes the issue so that the CLI tool now generates an error and correctly ignores pipeline runs that are still running.
Before this update, if you used the tkn pr delete --keep <value> or tkn tr delete --keep <value> commands, and the number of pipeline runs or task runs was less than the value, the command did not return an error as expected. This update fixes the issue so that the command correctly returns an error under those conditions.
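As a quick illustration of the corrected --keep behavior described above, a typical invocation looks like the following; the retention count is arbitrary:
$ tkn pipelinerun delete --keep 5
This deletes all pipeline runs in the current namespace except the five most recent ones, and, with this fix, the command returns an error if fewer than five pipeline runs exist.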
Before this update, if you used the tkn pr delete or tkn tr delete commands with the -p or -t flags together with the --ignore-running flag, the commands incorrectly deleted running or pending resources. This update fixes the issue so that these commands correctly ignore running or pending resources. With this update, you can configure Tekton Chains by using the TektonChain custom resource. This feature enables your configuration to persist after upgrading, unlike the chains-config config map, which gets overwritten during upgrades. With this update, ClusterTask resources no longer run as root by default, except for the buildah and s2i cluster tasks. Before this update, tasks on Red Hat OpenShift Pipelines 1.7.1 failed when using init as a first argument followed by two or more arguments. With this update, the flags are parsed correctly, and the task runs are successful. Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to an invalid role binding, with the following error message: error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef This update fixes the issue so that the failure no longer occurs. Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the pipeline service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the pipeline service account. As a result, secrets attached to the pipeline service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly. With this update, Pipelines as Code pods run on infrastructure nodes if infrastructure node settings are configured in the TektonConfig custom resource (CR). Previously, with the resource pruner, each namespace Operator created a command that ran in a separate container. This design consumed too many resources in clusters with a high number of namespaces. For example, to run a single command, a cluster with 1000 namespaces produced 1000 containers in a pod. This update fixes the issue. It passes the namespace-based configuration to the job so that all the commands run in one container in a loop. In Tekton Chains, you must define a secret called signing-secrets to hold the key used for signing tasks and images. However, before this update, updating the Red Hat OpenShift Pipelines Operator reset or overwrote this secret, and the key was lost. This update fixes the issue. Now, if the secret is configured after installing Tekton Chains through the Operator, the secret persists, and it is not overwritten by upgrades. Before this update, all S2I build tasks failed with an error similar to the following message: Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted" time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)" With this update, the pipelines-scc security context constraint (SCC) is compatible with the SETFCAP capability necessary for Buildah and S2I cluster tasks. 
As a result, the Buildah and S2I build tasks can run successfully. To successfully run the Buildah cluster task and S2I build tasks for applications written in various languages and frameworks, add the following snippet for appropriate steps objects such as build and push:
securityContext:
  capabilities:
    add: ["SETFCAP"]
Before this update, installing the Red Hat OpenShift Pipelines Operator took longer than expected. This update optimizes some settings to speed up the installation process. With this update, Buildah and S2I cluster tasks have fewer steps than in previous versions. Some steps have been combined into a single step so that they work better with ResourceQuota and LimitRange objects and do not require more resources than necessary. This update upgrades the Buildah, tkn CLI tool, and skopeo CLI tool versions in cluster tasks. Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources. Before this update, pods for the prune cron jobs were not scheduled on infrastructure nodes, as expected. Instead, they were scheduled on worker nodes or not scheduled at all. With this update, these types of pods can now be scheduled on infrastructure nodes if configured in the TektonConfig custom resource (CR). 4.1.5.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.1 is available on OpenShift Container Platform 4.10, 4.11, and 4.12. 4.1.5.6.1. Known issues By default, the containers have restricted permissions for enhanced security. The restricted permissions apply to all controller pods in the Red Hat OpenShift Pipelines Operator, and to some cluster tasks. Due to restricted permissions, the git-clone cluster task fails under certain configurations. Workaround: None. You can track the issue SRVKP-2634. When installer sets are in a failed state, the status of the TektonConfig custom resource is incorrectly displayed as True instead of False. Example: Failed installer sets
USD oc get tektoninstallerset
NAME                                     READY   REASON
addon-clustertasks-nx5xz                 False   Error
addon-communityclustertasks-cfb2p        True
addon-consolecli-ftrb8                   True
addon-openshift-67dj2                    True
addon-pac-cf7pz                          True
addon-pipelines-fvllm                    True
addon-triggers-b2wtt                     True
addon-versioned-clustertasks-1-8-hqhnw   False   Error
pipeline-w75ww                           True
postpipeline-lrs22                       True
prepipeline-ldlhw                        True
rhosp-rbac-4dmgb                         True
trigger-hfg64                            True
validating-mutating-webhoook-28rf7       True
Example: Incorrect TektonConfig status
USD oc get tektonconfig config
NAME     VERSION   READY   REASON
config   1.8.1     True
4.1.5.6.2. Fixed issues Before this update, the pruner deleted task runs of running pipelines and displayed the following warning: some tasks were indicated completed without ancestors being done. With this update, the pruner retains the task runs that are part of running pipelines. Before this update, pipeline-1.8 was the default channel for installing the Red Hat OpenShift Pipelines Operator 1.8.x. With this update, latest is the default channel. Before this update, the Pipelines as Code controller pods did not have access to certificates exposed by the user. With this update, Pipelines as Code can now access routes and Git repositories guarded by a self-signed or a custom certificate. Before this update, the task failed with RBAC errors after upgrading from Red Hat OpenShift Pipelines 1.7.2 to 1.8.0.
With this update, the tasks run successfully without any RBAC errors. Before this update, using the tkn CLI tool, you could not remove task runs and pipeline runs that contained a result object whose type was array . With this update, you can use the tkn CLI tool to remove task runs and pipeline runs that contain a result object whose type is array . Before this update, if a pipeline specification contained a task with an ENV_VARS parameter of array type, the pipeline run failed with the following error: invalid input params for task func-buildpacks: param types don't match the user-specified type: [ENV_VARS] . With this update, pipeline runs with such pipeline and task specifications do not fail. Before this update, cluster administrators could not provide a config.json file to the Buildah cluster task for accessing a container registry. With this update, cluster administrators can provide the Buildah cluster task with a config.json file by using the dockerconfig workspace. 4.1.5.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.8.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.8.2 is available on OpenShift Container Platform 4.10, 4.11, and 4.12. 4.1.5.7.1. Fixed issues Before this update, the git-clone task failed when cloning a repository using SSH keys. With this update, the role of the non-root user in the git-init task is removed, and the SSH program looks in the USDHOME/.ssh/ directory for the correct keys. 4.1.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7 is available on OpenShift Container Platform 4.9, 4.10, and 4.11. 4.1.6.1. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.7. 4.1.6.1.1. Pipelines With this update, pipelines-<version> is the default channel to install the Red Hat OpenShift Pipelines Operator. For example, the default channel to install the Pipelines Operator version 1.7 is pipelines-1.7 . Cluster administrators can also use the latest channel to install the most recent stable version of the Operator. Note The preview and stable channels will be deprecated and removed in a future release. When you run a command in a user namespace, your container runs as root (user id 0 ) but has user privileges on the host. With this update, to run pods in the user namespace, you must pass the annotations that CRI-O expects. To add these annotations for all users, run the oc edit clustertask buildah command and edit the buildah cluster task. To add the annotations to a specific namespace, export the cluster task as a task to that namespace. Before this update, if certain conditions were not met, the when expression skipped a Task object and its dependent tasks. With this update, you can scope the when expression to guard the Task object only, not its dependent tasks. To enable this update, set the scope-when-expressions-to-task flag to true in the TektonConfig CRD. Note The scope-when-expressions-to-task flag is deprecated and will be removed in a future release. As a best practice for Pipelines, use when expressions scoped to the guarded Task only. With this update, you can use variable substitution in the subPath field of a workspace within a task. With this update, you can reference parameters and results by using a bracket notation with single or double quotes. Prior to this update, you could only use the dot notation. 
For example, the following are now equivalent: USD(param.myparam), USD(param['myparam']), and USD(param["myparam"]). You can use single or double quotes to enclose parameter names that contain problematic characters, such as ".". For example, USD(param['my.param']) and USD(param["my.param"]). With this update, you can include the onError parameter of a step in the task definition without enabling the enable-api-fields flag. 4.1.6.1.2. Triggers With this update, the feature-flag-triggers config map has a new field labels-exclusion-pattern. You can set the value of this field to a regular expression (regex) pattern. The controller filters out labels that match the regex pattern from propagating from the event listener to the resources created for the event listener. With this update, the TriggerGroups field is added to the EventListener specification. Using this field, you can specify a set of interceptors to run before selecting and running a group of triggers. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha. With this update, Trigger resources support custom runs defined by a TriggerTemplate template. With this update, Triggers support emitting Kubernetes events from an EventListener pod. With this update, count metrics are available for the following objects: ClusterInterceptor, EventListener, TriggerTemplate, ClusterTriggerBinding, and TriggerBinding. This update adds the ServicePort specification to the Kubernetes resource. You can use this specification to modify which port exposes the event listener service. The default port is 8080. With this update, you can use the targetURI field in the EventListener specification to send cloud events during trigger processing. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha. With this update, the tekton-triggers-eventlistener-roles object now has a patch verb, in addition to the create verb that already exists. With this update, the securityContext.runAsUser parameter is removed from the event listener deployment. 4.1.6.1.3. CLI With this update, the tkn [pipeline | pipelinerun] export command exports a pipeline or pipeline run as a YAML file. For example: Export a pipeline named test_pipeline in the openshift-pipelines namespace:
USD tkn pipeline export test_pipeline -n openshift-pipelines
Export a pipeline run named test_pipeline_run in the openshift-pipelines namespace:
USD tkn pipelinerun export test_pipeline_run -n openshift-pipelines
With this update, the --grace option is added to the tkn pipelinerun cancel command. Use the --grace option to terminate a pipeline run gracefully instead of forcing the termination. To enable this feature, in the TektonConfig custom resource definition, in the pipeline section, you must set the enable-api-fields field to alpha. This update adds the Operator and Chains versions to the output of the tkn version command. Important Tekton Chains is a Technology Preview feature. With this update, the tkn pipelinerun describe command displays all canceled task runs when you cancel a pipeline run. Before this fix, only one task run was displayed. With this update, you can skip providing the specifications for an optional workspace when you run the tkn [t | p | ct] start command by using the --skip-optional-workspace flag. You can also skip it when running in interactive mode.
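For illustration only (the task name is hypothetical), the flag can be passed as follows to skip the prompt for an optional workspace:
tkn task start example-task --skip-optional-workspace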
With this update, you can use the tkn chains command to manage Tekton Chains. You can also use the --chains-namespace option to specify the namespace where you want to install Tekton Chains. Important Tekton Chains is a Technology Preview feature. 4.1.6.1.4. Operator With this update, you can use the Red Hat OpenShift Pipelines Operator to install and deploy Tekton Hub and Tekton Chains. Important Tekton Chains and deployment of Tekton Hub on a cluster are Technology Preview features. With this update, you can find and use Pipelines as Code (PAC) as an add-on option. Important Pipelines as Code is a Technology Preview feature. With this update, you can now disable the installation of community cluster tasks by setting the communityClusterTasks parameter to false. For example:
...
spec:
  profile: all
  targetNamespace: openshift-pipelines
  addon:
    params:
    - name: clusterTasks
      value: "true"
    - name: pipelineTemplates
      value: "true"
    - name: communityClusterTasks
      value: "false"
...
With this update, you can disable the integration of Tekton Hub with the Developer perspective by setting the enable-devconsole-integration flag in the TektonConfig custom resource to false. For example:
...
  hub:
    params:
      - name: enable-devconsole-integration
        value: "false"
...
With this update, the operator-config.yaml config map enables the output of the tkn version command to display the Operator version. With this update, the version of the argocd-task-sync-and-wait tasks is modified to v0.2. With this update to the TektonConfig CRD, the oc get tektonconfig command displays the Operator version. With this update, a service monitor is added for the Triggers metrics. 4.1.6.1.5. Hub Important Deploying Tekton Hub on a cluster is a Technology Preview feature. Tekton Hub helps you discover, search, and share reusable tasks and pipelines for your CI/CD workflows. A public instance of Tekton Hub is available at hub.tekton.dev. Starting with Red Hat OpenShift Pipelines 1.7, cluster administrators can also install and deploy a custom instance of Tekton Hub on enterprise clusters. You can curate a catalog with reusable tasks and pipelines specific to your organization. 4.1.6.1.6. Chains Important Tekton Chains is a Technology Preview feature. Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines. By default, Tekton Chains monitors the task runs in your OpenShift Container Platform cluster. Tekton Chains takes snapshots of completed task runs, converts them to one or more standard payload formats, and signs and stores all artifacts. Tekton Chains supports the following features: You can sign task runs, task run results, and OCI registry images with cryptographic key types and services such as cosign. You can use attestation formats such as in-toto. You can securely store signatures and signed artifacts using an OCI repository as a storage backend. 4.1.6.1.7. Pipelines as Code (PAC) Important Pipelines as Code is a Technology Preview feature. With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports status. Pipelines as Code supports the following features: Pull request status.
When iterating over a pull request, the status and control of the pull request is exercised on the platform hosting the Git repository. GitHub Checks API to set the status of a pipeline run, including rechecks. GitHub pull request and commit events. Pull request actions in comments, such as /retest. Git events filtering, and a separate pipeline for each event. Automatic task resolution in Pipelines for local tasks, Tekton Hub, and remote URLs. Use of GitHub blobs and objects API for retrieving configurations. Access Control List (ACL) over a GitHub organization, or using a Prow-style OWNER file. The tkn pac plugin for the tkn CLI tool, which you can use to manage Pipelines as Code repositories and bootstrapping. Support for GitHub Application, GitHub Webhook, Bitbucket Server, and Bitbucket Cloud. 4.1.6.2. Deprecated features Breaking change: This update removes the disable-working-directory-overwrite and disable-home-env-overwrite fields from the TektonConfig custom resource (CR). As a result, the TektonConfig CR no longer automatically sets the USDHOME environment variable and workingDir parameter. You can still set the USDHOME environment variable and workingDir parameter by using the env and workingDir fields in the Task custom resource definition (CRD). The Conditions custom resource definition (CRD) type is deprecated and planned to be removed in a future release. Instead, use the recommended When expression. Breaking change: The Triggers resource validates the templates and generates an error if you do not specify the EventListener and TriggerBinding values. 4.1.6.3. Known issues When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11. Tip Before you install tasks that are based on the Tekton Catalog on ARM, IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub, verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name> On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported. You cannot use the nodejs:14-ubi8-minimal image stream because doing so generates the following errors:
STEP 7: RUN /usr/libexec/s2i/assemble
/bin/sh: /usr/libexec/s2i/assemble: No such file or directory
subprocess exited with status 127
subprocess exited with status 127
error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127
time="2021-11-04T13:05:26Z" level=error msg="exit status 127"
Implicit parameter mapping incorrectly passes parameters from the top-level Pipeline or PipelineRun definitions to the taskRef tasks. Mapping should only occur from a top-level resource to tasks with in-line taskSpec specifications. This issue only affects clusters where this feature was enabled by setting the enable-api-fields field to alpha in the pipeline section of the TektonConfig custom resource definition. 4.1.6.4. Fixed issues With this update, if metadata such as labels and annotations are present in both Pipeline and PipelineRun object definitions, the values in the PipelineRun type take precedence. You can observe similar behavior for Task and TaskRun objects.
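As a small illustration of the metadata precedence rule above (all names and label values are hypothetical), if a Pipeline and a PipelineRun set the same label key, the value from the PipelineRun is the one that is propagated:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
  labels:
    environment: pipeline-level
spec:
  tasks:
    - name: echo
      taskSpec:
        steps:
          - name: echo
            image: registry.access.redhat.com/ubi8/ubi-minimal
            script: echo hello
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: example-run
  labels:
    environment: run-level    # this value takes precedence over pipeline-level
spec:
  pipelineRef:
    name: example-pipeline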
With this update, if the timeouts.tasks field or the timeouts.finally field is set to 0, then the timeouts.pipeline field is also set to 0. With this update, the set -x flag is removed from scripts that do not use a shebang. The fix reduces potential data leaks from script execution. With this update, any backslash character present in the usernames in Git credentials is escaped with an additional backslash in the .gitconfig file. With this update, the finalizer property of the EventListener object is not necessary for cleaning up logging and config maps. With this update, the default HTTP client associated with the event listener server is removed, and a custom HTTP client is added. As a result, the timeouts have improved. With this update, the Triggers cluster role now works with owner references. With this update, the race condition in the event listener does not happen when multiple interceptors return extensions. With this update, the tkn pr delete command does not delete pipeline runs that are in progress when you use the --ignore-running flag. With this update, the Operator pods do not continue restarting when you modify any add-on parameters. With this update, the tkn serve CLI pod is scheduled on infrastructure nodes, if not configured in the subscription and config custom resources. With this update, cluster tasks with specified versions are not deleted during upgrade. 4.1.6.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.1 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.1 is available on OpenShift Container Platform 4.9, 4.10, and 4.11. 4.1.6.5.1. Fixed issues Before this update, upgrading the Red Hat OpenShift Pipelines Operator deleted the data in the database associated with Tekton Hub and installed a new database. With this update, an Operator upgrade preserves the data. Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. With this update, users with other cluster roles also can access the pipeline metrics. Before this update, pipeline runs failed for pipelines containing tasks that emit large termination messages. The pipeline runs failed because the total size of termination messages of all containers in a pod cannot exceed 12 KB. With this update, the place-tools and step-init initialization containers that use the same image are merged to reduce the number of containers running in each task's pod. The solution reduces the chance of failed pipeline runs by minimizing the number of containers running in a task's pod. However, it does not remove the limitation of the maximum allowed size of a termination message. Before this update, attempts to access resource URLs directly from the Tekton Hub web console resulted in an Nginx 404 error. With this update, the Tekton Hub web console image is fixed to allow accessing resource URLs directly from the Tekton Hub web console. Before this update, for each namespace the resource pruner job created a separate container to prune resources. With this update, the resource pruner job runs commands for all namespaces as a loop in one container. 4.1.6.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.2 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.2 is available on OpenShift Container Platform 4.9, 4.10, and the upcoming version. 4.1.6.6.1.
Known issues The chains-config config map for Tekton Chains in the openshift-pipelines namespace is automatically reset to default after upgrading the Red Hat OpenShift Pipelines Operator. Currently, there is no workaround for this issue. 4.1.6.6.2. Fixed issues Before this update, tasks on Pipelines 1.7.1 failed on using init as the first argument, followed by two or more arguments. With this update, the flags are parsed correctly and the task runs are successful. Before this update, installation of the Red Hat OpenShift Pipelines Operator on OpenShift Container Platform 4.9 and 4.10 failed due to invalid role binding, with the following error message: error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io "openshift-operators-prometheus-k8s-read-binding" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"openshift-operator-read"}: cannot change roleRef With this update, the Red Hat OpenShift Pipelines Operator installs with distinct role binding namespaces to avoid conflict with installation of other Operators. Before this update, upgrading the Operator triggered a reset of the signing-secrets secret key for Tekton Chains to its default value. With this update, the custom secret key persists after you upgrade the Operator. Note Upgrading to Red Hat OpenShift Pipelines 1.7.2 resets the key. However, when you upgrade to future releases, the key is expected to persist. Before this update, all S2I build tasks failed with an error similar to the following message: Error: error writing "0 0 4294967295\n" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time="2022-03-04T09:47:57Z" level=error msg="error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted" time="2022-03-04T09:47:57Z" level=error msg="(unable to determine exit status)" With this update, the pipelines-scc security context constraint (SCC) is compatible with the SETFCAP capability necessary for Buildah and S2I cluster tasks. As a result, the Buildah and S2I build tasks can run successfully. To successfully run the Buildah cluster task and S2I build tasks for applications written in various languages and frameworks, add the following snippet for appropriate steps objects such as build and push : securityContext: capabilities: add: ["SETFCAP"] 4.1.6.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.7.3 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.7.3 is available on OpenShift Container Platform 4.9, 4.10, and 4.11. 4.1.6.7.1. Fixed issues Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources. Previously, upgrading the Red Hat OpenShift Pipelines Operator caused the pipeline service account to be recreated, which meant that the secrets linked to the service account were lost. This update fixes the issue. During upgrades, the Operator no longer recreates the pipeline service account. As a result, secrets attached to the pipeline service account persist after upgrades, and the resources (tasks and pipelines) continue to work correctly. 4.1.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.6 With this update, Red Hat OpenShift Pipelines General Availability (GA) 1.6 is available on OpenShift Container Platform 4.9. 4.1.7.1. 
New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.6. With this update, you can configure a pipeline or task start command to return a YAML or JSON-formatted string by using the --output <string> option, where <string> is yaml or json. Otherwise, without the --output option, the start command returns a human-friendly message that is hard for other programs to parse. Returning a YAML or JSON-formatted string is useful for continuous integration (CI) environments. For example, after a resource is created, you can use yq or jq to parse the YAML or JSON-formatted message about the resource and wait until that resource is terminated without using the showlog option. With this update, you can authenticate to a registry using the auth.json authentication file of Podman. For example, you can use tkn bundle push to push to a remote registry using Podman instead of the Docker CLI. With this update, if you use the tkn [taskrun | pipelinerun] delete --all command, you can preserve runs that are younger than a specified number of minutes by using the new --keep-since <minutes> option. For example, to keep runs that are less than five minutes old, you enter tkn [taskrun | pipelinerun] delete --all --keep-since 5. With this update, when you delete task runs or pipeline runs, you can use the --parent-resource and --keep-since options together. For example, the tkn pipelinerun delete --pipeline pipelinename --keep-since 5 command preserves pipeline runs whose parent resource is named pipelinename and whose age is five minutes or less. The tkn tr delete -t <taskname> --keep-since 5 and tkn tr delete --clustertask <taskname> --keep-since 5 commands work similarly for task runs. This update adds support for the triggers resources to work with v1beta1 resources. This update adds an --ignore-running option to the tkn pipelinerun delete and tkn taskrun delete commands. This update adds a create subcommand to the tkn task and tkn clustertask commands. With this update, when you use the tkn pipelinerun delete --all command, you can use the new --label <string> option to filter the pipeline runs by label. Optionally, you can use the --label option with = and == as equality operators, or != as an inequality operator. For example, the tkn pipelinerun delete --all --label asdf and tkn pipelinerun delete --all --label==asdf commands both delete all the pipeline runs that have the asdf label. With this update, you can fetch the version of installed Tekton components from the config map or, if the config map is not present, from the deployment controller. With this update, triggers support the feature-flags and config-defaults config maps to configure feature flags and to set default values respectively. This update adds a new metric, eventlistener_event_count, that you can use to count events received by the EventListener resource. This update adds v1beta1 Go API types. With this update, triggers now support the v1beta1 API version. With the current release, the v1alpha1 features are now deprecated and will be removed in a future release. Begin using the v1beta1 features instead. In the current release, auto-pruning of resources is enabled by default.
In addition, you can configure auto-pruning of task runs and pipeline runs for each namespace separately, by using the following new annotations: operator.tekton.dev/prune.schedule: If the value of this annotation is different from the value specified in the TektonConfig custom resource definition, a new cron job in that namespace is created. operator.tekton.dev/prune.skip: When set to true, the namespace for which it is configured will not be pruned. operator.tekton.dev/prune.resources: This annotation accepts a comma-separated list of resources. To prune a single resource such as a pipeline run, set this annotation to "pipelinerun". To prune multiple resources, such as task runs and pipeline runs, set this annotation to "taskrun, pipelinerun". operator.tekton.dev/prune.keep: Use this annotation to retain a resource without pruning. operator.tekton.dev/prune.keep-since: Use this annotation to retain resources based on their age. The value for this annotation must be equal to the age of the resource in minutes. For example, to retain resources which were created not more than five days ago, set keep-since to 7200. Note The keep and keep-since annotations are mutually exclusive. For any resource, you must configure only one of them. operator.tekton.dev/prune.strategy: Set the value of this annotation to either keep or keep-since. For a namespace example that uses these annotations, see the example later in this section. Administrators can disable the creation of the pipeline service account for the entire cluster, and prevent privilege escalation by misusing the associated SCC, which is very similar to anyuid. You can now configure feature flags and components by using the TektonConfig custom resource (CR) and the CRs for individual components, such as TektonPipeline and TektonTriggers. This level of granularity helps customize and test alpha features such as the Tekton OCI bundle for individual components. You can now configure the optional Timeouts field for the PipelineRun resource. For example, you can configure timeouts separately for a pipeline run, each task run, and the finally tasks. The pods generated by the TaskRun resource now set the activeDeadlineSeconds field of the pods. This enables OpenShift to consider them as terminating, and allows you to use a specifically scoped ResourceQuota object for the pods. You can use config maps to eliminate metrics tags or labels type on a task run, pipeline run, task, and pipeline. In addition, you can configure different types of metrics for measuring duration, such as a histogram, gauge, or last value. You can define requests and limits on a pod coherently, as Tekton now fully supports the LimitRange object by considering the Min, Max, Default, and DefaultRequest fields. The following alpha features are introduced: A pipeline run can now stop after running the finally tasks, rather than the previous behavior of stopping the execution of all task runs directly. This update adds the following spec.status values: StoppedRunFinally will stop the currently running tasks after they are completed, and then run the finally tasks. CancelledRunFinally will immediately cancel the running tasks, and then run the finally tasks. Cancelled will retain the behavior provided by the PipelineRunCancelled status. Note The Cancelled status replaces the deprecated PipelineRunCancelled status, which will be removed in the v1 version. You can now use the oc debug command to put a task run into debug mode, which pauses the execution and allows you to inspect specific steps in a pod.
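The following sketch shows the pruning annotations described earlier in this section applied to a hypothetical namespace, so that task runs and pipeline runs older than one day are pruned on a custom schedule; all names and values are illustrative:
apiVersion: v1
kind: Namespace
metadata:
  name: example-project
  annotations:
    operator.tekton.dev/prune.resources: "taskrun, pipelinerun"
    operator.tekton.dev/prune.strategy: "keep-since"
    operator.tekton.dev/prune.keep-since: "1440"
    operator.tekton.dev/prune.schedule: "0 1 * * *"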
When you set the onError field of a step to continue, the exit code for the step is recorded and passed on to subsequent steps. However, the task run does not fail and the execution of the rest of the steps in the task continues. To retain the existing behavior, you can set the value of the onError field to stopAndFail. Tasks can now accept more parameters than are actually used. When the alpha feature flag is enabled, the parameters can implicitly propagate to inlined specs. For example, an inlined task can access parameters of its parent pipeline run, without explicitly defining each parameter for the task. If you enable the flag for the alpha features, the conditions under When expressions will only apply to the task with which they are directly associated, and not the dependents of the task. To apply the When expressions to the associated task and its dependents, you must associate the expression with each dependent task separately. Note that, going forward, this will be the default behavior of the When expressions in any new API versions of Tekton. The existing default behavior will be deprecated in favor of this update. The current release enables you to configure node selection by specifying the nodeSelector and tolerations values in the TektonConfig custom resource (CR). The Operator adds these values to all the deployments that it creates. To configure node selection for the Operator's controller and webhook deployment, you edit the config.nodeSelector and config.tolerations fields in the specification for the Subscription CR, after installing the Operator. To deploy the rest of the control plane pods of OpenShift Pipelines on an infrastructure node, update the TektonConfig CR with the nodeSelector and tolerations fields. The modifications are then applied to all the pods created by the Operator. 4.1.7.2. Deprecated features In CLI 0.21.0, support for all v1alpha1 resources for the clustertask, task, taskrun, pipeline, and pipelinerun commands is deprecated. These resources are now deprecated and will be removed in a future release. In Tekton Triggers v0.16.0, the redundant status label is removed from the metrics for the EventListener resource. Important Breaking change: The status label has been removed from the eventlistener_http_duration_seconds_* metric. Remove queries that are based on the status label. With the current release, the v1alpha1 features are now deprecated and will be removed in a future release. With this update, you can begin using the v1beta1 Go API types instead. Triggers now supports the v1beta1 API version. With the current release, the EventListener resource sends a response before the triggers finish processing. Important Breaking change: With this change, the EventListener resource stops responding with a 201 Created status code when it creates resources. Instead, it responds with a 202 Accepted response code. The current release removes the podTemplate field from the EventListener resource. Important Breaking change: The podTemplate field, which was deprecated as part of #1100, has been removed. The current release removes the deprecated replicas field from the specification for the EventListener resource. Important Breaking change: The deprecated replicas field has been removed. In Red Hat OpenShift Pipelines 1.6, the values of HOME="/tekton/home" and workingDir="/workspace" are removed from the specification of the Step objects.
Instead, Red Hat OpenShift Pipelines sets HOME and workingDir to the values defined by the containers running the Step objects. You can override these values in the specification of your Step objects. To use the older behavior, you can change the disable-working-directory-overwrite and disable-home-env-overwrite fields in the TektonConfig CR to false : apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false ... Important The disable-working-directory-overwrite and disable-home-env-overwrite fields in the TektonConfig CR are now deprecated and will be removed in a future release. 4.1.7.3. Known issues When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11 . On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task is unsupported. Before you install tasks based on the Tekton Catalog on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) using tkn hub , verify if the task can be executed on these platforms. To check if ppc64le and s390x are listed in the "Platforms" section of the task information, you can run the following command: tkn hub info task <name> You cannot use the nodejs:14-ubi8-minimal image stream because doing so generates the following errors: STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP "RUN /usr/libexec/s2i/assemble": exit status 127 time="2021-11-04T13:05:26Z" level=error msg="exit status 127" 4.1.7.4. Fixed issues The tkn hub command is now supported on IBM Power Systems, IBM Z, and LinuxONE. Before this update, the terminal was not available after the user ran a tkn command, and the pipeline run was done, even if retries were specified. Specifying a timeout in the task run or pipeline run had no effect. This update fixes the issue so that the terminal is available after running the command. Before this update, running tkn pipelinerun delete --all would delete all resources. This update prevents the resources in the running state from getting deleted. Before this update, using the tkn version --component=<component> command did not return the component version. This update fixes the issue so that this command returns the component version. Before this update, when you used the tkn pr logs command, it displayed the pipelines output logs in the wrong task order. This update resolves the issue so that logs of completed PipelineRuns are listed in the appropriate TaskRun execution order. Before this update, editing the specification of a running pipeline might prevent the pipeline run from stopping when it was complete. This update fixes the issue by fetching the definition only once and then using the specification stored in the status for verification. This change reduces the probability of a race condition when a PipelineRun or a TaskRun refers to a Pipeline or Task that changes while it is running. When expression values can now have array parameter references, such as: values: [USD(params.arrayParam[*])] . 4.1.7.5. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.1 4.1.7.5.1. 
Known issues After upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version, Pipelines might enter an inconsistent state where you are unable to perform any operations (create/delete/apply) on Tekton resources (tasks and pipelines). For example, while deleting a resource, you might encounter the following error: Error from server (InternalError): Internal error occurred: failed calling webhook "validation.webhook.pipeline.tekton.dev": Post "https://tekton-pipelines-webhook.openshift-pipelines.svc:443/resource-validation?timeout=10s": service "tekton-pipelines-webhook" not found. 4.1.7.5.2. Fixed issues The SSL_CERT_DIR environment variable ( /tekton-custom-certs ) set by Red Hat OpenShift Pipelines will not override the following default system directories with certificate files: /etc/pki/tls/certs /etc/ssl/certs /system/etc/security/cacerts The Horizontal Pod Autoscaler can manage the replica count of deployments controlled by the Red Hat OpenShift Pipelines Operator. From this release onward, if the count is changed by an end user or an on-cluster agent, the Red Hat OpenShift Pipelines Operator will not reset the replica count of deployments managed by it. However, the replicas will be reset when you upgrade the Red Hat OpenShift Pipelines Operator. The pod serving the tkn CLI will now be scheduled on nodes, based on the node selector and toleration limits specified in the TektonConfig custom resource. 4.1.7.6. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.2 4.1.7.6.1. Known issues When you create a new project, the creation of the pipeline service account is delayed, and removal of existing cluster tasks and pipeline templates takes more than 10 minutes. 4.1.7.6.2. Fixed issues Before this update, multiple instances of Tekton installer sets were created for a pipeline after upgrading to Red Hat OpenShift Pipelines 1.6.1 from an older version. With this update, the Operator ensures that only one instance of each type of TektonInstallerSet exists after an upgrade. Before this update, all the reconcilers in the Operator used the component version to decide resource recreation during an upgrade to Red Hat OpenShift Pipelines 1.6.1 from an older version. As a result, those resources were not recreated whose component versions did not change in the upgrade. With this update, the Operator uses the Operator version instead of the component version to decide resource recreation during an upgrade. Before this update, the pipelines webhook service was missing in the cluster after an upgrade. This was due to an upgrade deadlock on the config maps. With this update, a mechanism is added to disable webhook validation if the config maps are absent in the cluster. As a result, the pipelines webhook service persists in the cluster after an upgrade. Before this update, cron jobs for auto-pruning got recreated after any configuration change to the namespace. With this update, cron jobs for auto-pruning get recreated only if there is a relevant annotation change in the namespace. The upstream version of Tekton Pipelines is revised to v0.28.3 , which has the following fixes: Fix PipelineRun or TaskRun objects to allow label or annotation propagation. For implicit params: Do not apply the PipelineSpec parameters to the TaskRefs object. Disable implicit param behavior for the Pipeline objects. 4.1.7.7. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.3 4.1.7.7.1. 
Fixed issues Before this update, the Red Hat OpenShift Pipelines Operator installed pod security policies from components such as Pipelines and Triggers. However, the pod security policies shipped as part of the components were deprecated in an earlier release. With this update, the Operator stops installing pod security policies from components. As a result, the following upgrade paths are affected: Upgrading from Pipelines 1.6.1 or 1.6.2 to Pipelines 1.6.3 deletes the pod security policies, including those from the Pipelines and Triggers components. Upgrading from Pipelines 1.5.x to 1.6.3 retains the pod security policies installed from components. As a cluster administrator, you can delete them manually. Note When you upgrade to future releases, the Red Hat OpenShift Pipelines Operator will automatically delete all obsolete pod security policies. Before this update, only cluster administrators could access pipeline metrics in the OpenShift Container Platform console. With this update, users with other cluster roles also can access the pipeline metrics. Before this update, role-based access control (RBAC) issues with the Pipelines Operator caused problems upgrading or installing components. This update improves the reliability and consistency of installing various Red Hat OpenShift Pipelines components. Before this update, setting the clusterTasks and pipelineTemplates fields to false in the TektonConfig CR slowed the removal of cluster tasks and pipeline templates. This update improves the speed of lifecycle management of Tekton resources such as cluster tasks and pipeline templates. 4.1.7.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.6.4 4.1.7.8.1. Known issues After upgrading from Red Hat OpenShift Pipelines 1.5.2 to 1.6.4, accessing the event listener routes returns a 503 error. Workaround: Modify the target port in the YAML file for the event listener's route. Extract the route name for the relevant namespace. USD oc get route -n <namespace> Edit the route to modify the value of the targetPort field. USD oc edit route -n <namespace> <el-route_name> Example: Existing event listener route ... spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: 8000 to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None ... Example: Modified event listener route ... spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: http-listener to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None ... 4.1.7.8.2. Fixed issues Before this update, the Operator failed when creating RBAC resources if any namespace was in a Terminating state. With this update, the Operator ignores namespaces in a Terminating state and creates the RBAC resources. Before this update, the task runs failed or restarted due to absence of annotation specifying the release version of the associated Tekton controller. With this update, the inclusion of the appropriate annotations are automated, and the tasks run without failure or restarts. 4.1.8. Release notes for Red Hat OpenShift Pipelines General Availability 1.5 Red Hat OpenShift Pipelines General Availability (GA) 1.5 is now available on OpenShift Container Platform 4.8. 4.1.8.1. Compatibility and support matrix Some features in this release are currently in Technology Preview . These experimental features are not intended for production use. 
In the table, features are marked with the following statuses: TP Technology Preview GA General Availability Note the following scope of support on the Red Hat Customer Portal for these features:
Table 4.2. Compatibility and support matrix
Feature              Version   Support Status
Pipelines            0.24      GA
CLI                  0.19      GA
Catalog              0.24      GA
Triggers             0.14      TP
Pipeline resources   -         TP
For questions and feedback, you can send an email to the product team at [email protected]. 4.1.8.2. New features In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.5. Pipeline runs and task runs are automatically pruned by a cron job in the target namespace. The cron job uses the IMAGE_JOB_PRUNER_TKN environment variable to get the value of the tkn image. With this enhancement, the following fields are introduced to the TektonConfig custom resource:
...
pruner:
  resources:
    - pipelinerun
    - taskrun
  schedule: "*/5 * * * *" # cron schedule
  keep: 2 # delete all keeping n
...
In OpenShift Container Platform, you can customize the installation of the Tekton Add-ons component by modifying the values of the new parameters clusterTasks and pipelineTemplates in the TektonConfig custom resource:
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  addon:
    params:
    - name: clusterTasks
      value: "true"
    - name: pipelineTemplates
      value: "true"
...
The customization is allowed if you create the add-on using TektonConfig, or directly by using Tekton Add-ons. However, if the parameters are not passed, the controller adds parameters with default values. Note If the add-on is created using the TektonConfig custom resource, and you change the parameter values later in the Addon custom resource, then the values in the TektonConfig custom resource overwrite the changes. You can set the value of the pipelineTemplates parameter to true only when the value of the clusterTasks parameter is true. The enableMetrics parameter is added to the TektonConfig custom resource. You can use it to disable the service monitor, which is part of Tekton Pipelines for OpenShift Container Platform.
apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  profile: all
  targetNamespace: openshift-pipelines
  pipeline:
    params:
    - name: enableMetrics
      value: "true"
...
EventListener OpenCensus metrics, which capture metrics at the process level, are added. Triggers now have a label selector; you can configure triggers for an event listener by using labels. The ClusterInterceptor custom resource definition for registering interceptors is added, which allows you to register new Interceptor types that you can plug in. In addition, the following relevant changes are made: In the trigger specifications, you can configure interceptors using a new API that includes a ref field to refer to a cluster interceptor. In addition, you can use the params field to add parameters that are passed on to the interceptors for processing. The bundled interceptors CEL, GitHub, GitLab, and BitBucket have been migrated. They are implemented using the new ClusterInterceptor custom resource definition. Core interceptors are migrated to the new format, and any new triggers created using the old syntax automatically switch to the new ref or params based syntax. To disable prefixing the name of the task or step while displaying logs, use the --prefix option for log commands.
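For illustration only (the pipeline run name is hypothetical), the prefix can be disabled while following logs:
tkn pipelinerun logs example-run -f --prefix=false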
To display the version of a specific component, use the new --component flag in the tkn version command. The tkn hub check-upgrade command is added, and other commands are revised to be based on the pipeline version. In addition, catalog names are displayed in the search command output. Support for optional workspaces is added to the start command. If the plugins are not present in the plugins directory, they are searched for in the current path. The tkn start [task | clustertask | pipeline] command starts interactively and asks for the params values, even when default parameters are specified. To stop the interactive prompts, pass the --use-param-defaults flag at the time of invoking the command. For example:
USD tkn pipeline start build-and-deploy \
    -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml \
    -p deployment-name=pipelines-vote-api \
    -p git-url=https://github.com/openshift/pipelines-vote-api.git \
    -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api \
    --use-param-defaults
The version field is added in the tkn task describe command. The option to automatically select resources such as TriggerTemplate, TriggerBinding, ClusterTriggerBinding, or Eventlistener is added in the describe command, if only one is present. In the tkn pr describe command, a section for skipped tasks is added. Support for the tkn clustertask logs command is added. The YAML merge and variable from config.yaml is removed. In addition, the release.yaml file can now be more easily consumed by tools such as kustomize and ytt. The support for resource names to contain the dot character (".") is added. The hostAliases array in the PodTemplate specification is added to the pod-level override of hostname resolution. It is achieved by modifying the /etc/hosts file. A variable USD(tasks.status) is introduced to access the aggregate execution status of tasks. An entry-point binary build for Windows is added. 4.1.8.3. Deprecated features In the when expressions, support for fields written in PascalCase is removed. The when expressions only support fields written in lowercase. Note If you had applied a pipeline with when expressions in Tekton Pipelines v0.16 (Operator v1.2.x), you have to reapply it. When you upgrade the Red Hat OpenShift Pipelines Operator to v1.5, the openshift-client and the openshift-client-v-1-5-0 cluster tasks have the SCRIPT parameter. However, the ARGS parameter and the git resource are removed from the specification of the openshift-client cluster task. This is a breaking change, and only those cluster tasks that do not have a specific version in the name field of the ClusterTask resource upgrade seamlessly. To prevent the pipeline runs from breaking, use the SCRIPT parameter after the upgrade because it moves the values previously specified in the ARGS parameter into the SCRIPT parameter of the cluster task. For example:
...
  - name: deploy
    params:
    - name: SCRIPT
      value: oc rollout status <deployment-name>
    runAfter:
      - build
    taskRef:
      kind: ClusterTask
      name: openshift-client
...
When you upgrade from Red Hat OpenShift Pipelines Operator v1.4 to v1.5, the profile names in which the TektonConfig custom resource is installed now change. Table 4.3. Profiles for TektonConfig custom resource
Profiles in Pipelines 1.5   Corresponding profile in Pipelines 1.4   Installed Tekton components
All (default profile)       All (default profile)                    Pipelines, Triggers, Add-ons
Basic                       Default                                  Pipelines, Triggers
Lite                        Basic                                    Pipelines
Note If you used profile: all in the config instance of the TektonConfig custom resource, no change is necessary in the resource specification. However, if the installed Operator is either in the Default or the Basic profile before the upgrade, you must edit the config instance of the TektonConfig custom resource after the upgrade. For example, if the configuration was profile: basic before the upgrade, ensure that it is profile: lite after upgrading to Pipelines 1.5. The disable-home-env-overwrite and disable-working-dir-overwrite fields are now deprecated and will be removed in a future release. For this release, the default value of these flags is set to true for backward compatibility. Note In the next release (Red Hat OpenShift Pipelines 1.6), the HOME environment variable will not be automatically set to /tekton/home, and the default working directory will not be set to /workspace for task runs. These defaults collide with any value set by the image Dockerfile of the step. The ServiceType and podTemplate fields are removed from the EventListener spec. The controller service account no longer requests cluster-wide permission to list and watch namespaces. The status of the EventListener resource has a new condition called Ready. Note In the future, the other status conditions for the EventListener resource will be deprecated in favor of the Ready status condition. The eventListener and namespace fields in the EventListener response are deprecated. Use the eventListenerUID field instead. The replicas field is deprecated in the EventListener spec. Instead, the spec.replicas field is moved to spec.resources.kubernetesResource.replicas in the KubernetesResource spec. Note The replicas field will be removed in a future release. The old method of configuring the core interceptors is deprecated. However, it continues to work until it is removed in a future release. Instead, interceptors in a Trigger resource are now configured using a new ref and params based syntax. The resulting default webhook automatically switches the usages of the old syntax to the new syntax for new triggers. Use rbac.authorization.k8s.io/v1 instead of the deprecated rbac.authorization.k8s.io/v1beta1 for the ClusterRoleBinding resource. In cluster roles, the cluster-wide write access to resources such as serviceaccounts, secrets, configmaps, and limitranges is removed. In addition, cluster-wide access to resources such as deployments, statefulsets, and deployment/finalizers is removed. The image custom resource definition in the caching.internal.knative.dev group is not used by Tekton anymore, and is excluded in this release. 4.1.8.4. Known issues The git-cli cluster task is built off the alpine/git base image, which expects /root as the user's home directory. However, this is not explicitly set in the git-cli cluster task. In Tekton, the default home directory is overwritten with /tekton/home for every step of a task, unless otherwise specified. This overwriting of the USDHOME environment variable of the base image causes the git-cli cluster task to fail. This issue is expected to be fixed in the upcoming releases.
For Red Hat OpenShift Pipelines 1.5 and earlier versions, you can use any one of the following workarounds to avoid the failure of the git-cli cluster task: Set the USDHOME environment variable in the steps, so that it is not overwritten. [OPTIONAL] If you installed Red Hat OpenShift Pipelines using the Operator, then clone the git-cli cluster task into a separate task. This approach ensures that the Operator does not overwrite the changes made to the cluster task. Execute the oc edit clustertasks git-cli command. Add the expected HOME environment variable to the YAML of the step: ... steps: - name: git env: - name: HOME value: /root image: USD(params.BASE_IMAGE) workingDir: USD(workspaces.source.path) ... Warning For Red Hat OpenShift Pipelines installed by the Operator, if you do not clone the git-cli cluster task into a separate task before changing the HOME environment variable, then the changes are overwritten during Operator reconciliation. Disable overwriting the HOME environment variable in the feature-flags config map. Execute the oc edit -n openshift-pipelines configmap feature-flags command. Set the value of the disable-home-env-overwrite flag to true . Warning If you installed Red Hat OpenShift Pipelines using the Operator, then the changes are overwritten during Operator reconciliation. Modifying the default value of the disable-home-env-overwrite flag can break other tasks and cluster tasks, as it changes the default behavior for all tasks. Use a different service account for the git-cli cluster task, as the overwriting of the HOME environment variable happens when the default service account for pipelines is used. Create a new service account. Link your Git secret to the service account you just created. Use the service account while executing a task or a pipeline. On IBM Power Systems, IBM Z, and LinuxONE, the s2i-dotnet cluster task and the tkn hub command are unsupported. When you run Maven and Jib-Maven cluster tasks, the default container image is supported only on Intel (x86) architecture. Therefore, tasks will fail on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters. As a workaround, you can specify a custom image by setting the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11 . 4.1.8.5. Fixed issues The when expressions in dag tasks are not allowed to specify the context variable accessing the execution status ( USD(tasks.<pipelineTask>.status) ) of any other task. Use Owner UIDs instead of Owner names, as it helps avoid race conditions created by deleting a volumeClaimTemplate PVC, in situations where a PipelineRun resource is quickly deleted and then recreated. A new Dockerfile is added for pullrequest-init for build-base image triggered by non-root users. When a pipeline or task is executed with the -f option and the param in its definition does not have a type defined, a validation error is generated instead of the pipeline or task run failing silently. For the tkn start [task | pipeline | clustertask] commands, the description of the --workspace flag is now consistent. While parsing the parameters, if an empty array is encountered, the corresponding interactive help is displayed as an empty string now. 4.1.9. Release notes for Red Hat OpenShift Pipelines General Availability 1.4 Red Hat OpenShift Pipelines General Availability (GA) 1.4 is now available on OpenShift Container Platform 4.7. 
Note In addition to the stable and preview Operator channels, the Red Hat OpenShift Pipelines Operator 1.4.0 comes with the ocp-4.6, ocp-4.5, and ocp-4.4 deprecated channels. These deprecated channels and support for them will be removed in the following release of Red Hat OpenShift Pipelines.
4.1.9.1. Compatibility and support matrix
Some features in this release are currently in Technology Preview . These experimental features are not intended for production use. In the table, features are marked with the following statuses:
TP Technology Preview
GA General Availability
Note the following scope of support on the Red Hat Customer Portal for these features:
Table 4.4. Compatibility and support matrix
- Pipelines 0.22: GA
- CLI 0.17: GA
- Catalog 0.22: GA
- Triggers 0.12: TP
- Pipeline resources: TP
For questions and feedback, you can send an email to the product team at [email protected] .
4.1.9.2. New features
In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.4.
The custom tasks have the following enhancements:
Pipeline results can now refer to results produced by custom tasks.
Custom tasks can now use workspaces, service accounts, and pod templates to build more complex custom tasks.
The finally task has the following enhancements:
The when expressions are supported in finally tasks, which provides efficient guarded execution and improved reusability of tasks.
A finally task can be configured to consume the results of any task within the same pipeline.
Note Support for when expressions and finally tasks is unavailable in the OpenShift Container Platform 4.7 web console.
Support for multiple secrets of the type dockercfg or dockerconfigjson is added for authentication at runtime.
Functionality to support sparse-checkout with the git-clone task is added. This enables you to clone only a subset of the repository as your local copy, and helps you to restrict the size of the cloned repositories.
You can create pipeline runs in a pending state without actually starting them. In clusters that are under heavy load, this allows Operators to have control over the start time of the pipeline runs.
Ensure that you set the SYSTEM_NAMESPACE environment variable manually for the controller; this was previously set by default.
A non-root user is now added to the build-base image of pipelines so that git-init can clone repositories as a non-root user.
Support to validate dependencies between resolved resources before a pipeline run starts is added. All result variables in the pipeline must be valid, and optional workspaces from a pipeline can only be passed to tasks expecting them for the pipeline to start running.
The controller and webhook run as a non-root group, and their superfluous capabilities have been removed to make them more secure.
You can use the tkn pr logs command to see the log streams for retried task runs.
You can use the --clustertask option in the tkn tr delete command to delete all the task runs associated with a particular cluster task.
Support for using a Knative service with the EventListener resource is added by introducing a new customResource field.
An error message is displayed when an event payload does not use the JSON format.
The source control interceptors such as GitLab, Bitbucket, and GitHub now use the new InterceptorRequest or InterceptorResponse type interface.
A new CEL function marshalJSON is implemented so that you can encode a JSON object or an array to a string.
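For instance, the marshalJSON function can be used in a CEL overlay to store the JSON encoding of the event body in an extension. The following is a minimal sketch that uses the new ref and params based interceptor syntax; the trigger, binding, and template names are placeholders:
apiVersion: triggers.tekton.dev/v1alpha1
kind: Trigger
metadata:
  name: example-trigger                      # placeholder name
spec:
  interceptors:
    - ref:
        name: "cel"                          # the core CEL interceptor served on the /cel path
      params:
        - name: "overlays"
          value:
            - key: marshalled_body
              expression: "marshalJSON(body)"   # encodes the event body as a JSON string
  bindings:
    - ref: example-binding                   # placeholder TriggerBinding
  template:
    ref: example-template                    # placeholder TriggerTemplate
A TriggerBinding can then read the encoded payload from the resulting extensions field of the event.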
An HTTP handler for serving the CEL and the source control core interceptors is added. It packages the four core interceptors into a single HTTP server that is deployed in the tekton-pipelines namespace. The EventListener object forwards events over the HTTP server to the interceptor. Each interceptor is available at a different path. For example, the CEL interceptor is available on the /cel path.
The pipelines-scc Security Context Constraint (SCC) is used with the default pipeline service account for pipelines. This new service account is similar to anyuid , but with a minor difference as defined in the YAML for SCC of OpenShift Container Platform 4.7:
fsGroup:
  type: MustRunAs
4.1.9.3. Deprecated features
The build-gcs sub-type in the pipeline resource storage, and the gcs-fetcher image, are not supported.
In the taskRun field of cluster tasks, the label tekton.dev/task is removed.
For webhooks, the value v1beta1 corresponding to the field admissionReviewVersions is removed.
The creds-init helper image for building and deploying is removed.
In the triggers spec and binding, the deprecated field template.name is removed in favor of template.ref . You should update all eventListener definitions to use the ref field.
Note Upgrading from Pipelines 1.3.x and earlier versions to Pipelines 1.4.0 breaks event listeners because of the unavailability of the template.name field. For such cases, use Pipelines 1.4.1 to use the restored template.name field.
For EventListener custom resources/objects, the fields PodTemplate and ServiceType are deprecated in favor of Resource .
The deprecated spec style embedded bindings is removed.
The spec field is removed from the triggerSpecBinding .
The event ID representation is changed from a five-character random string to a UUID.
4.1.9.4. Known issues
In the Developer perspective, the pipeline metrics and triggers features are available only on OpenShift Container Platform 4.7.6 or later versions.
On IBM Power Systems, IBM Z, and LinuxONE, the tkn hub command is not supported.
When you run Maven and Jib Maven cluster tasks on IBM Power Systems (ppc64le), IBM Z, and LinuxONE (s390x) clusters, set the MAVEN_IMAGE parameter value to maven:3.6.3-adoptopenjdk-11 .
Triggers throw an error resulting from bad handling of the JSON format if you have the following configuration in the trigger binding:
params:
  - name: github_json
    value: USD(body)
To resolve the issue:
If you are using triggers v0.11.0 and above, use the marshalJSON CEL function, which takes a JSON object or array and returns the JSON encoding of that object or array as a string.
If you are using an older triggers version, add the following annotation in the trigger template:
annotations:
  triggers.tekton.dev/old-escape-quotes: "true"
When upgrading from Pipelines 1.3.x to 1.4.x, you must recreate the routes.
4.1.9.5. Fixed issues
Previously, the tekton.dev/task label was removed from the task runs of cluster tasks, and the tekton.dev/clusterTask label was introduced. The problems resulting from that change are resolved by fixing the clustertask describe and delete commands. In addition, the lastrun function for tasks is modified to fix the issue of the tekton.dev/task label being applied to the task runs of both tasks and cluster tasks in older versions of pipelines.
When doing an interactive tkn pipeline start pipelinename , a PipelineResource is created interactively. The tkn p start command prints the resource status if the resource status is not nil .
Previously, the tekton.dev/task=name label was removed from the task runs created from cluster tasks. This fix modifies the tkn clustertask start command with the --last flag to check for the tekton.dev/task=name label in the created task runs.
When a task uses an inline task specification, the corresponding task run now gets embedded in the pipeline when you run the tkn pipeline describe command, and the task name is returned as embedded.
The tkn version command is fixed to display the version of the installed Tekton CLI tool, without a configured kubeConfiguration namespace or access to a cluster.
If an argument is unexpected or more than one argument is used, the tkn completion command gives an error.
Previously, pipeline runs with the finally tasks nested in a pipeline specification would lose those finally tasks when converted to the v1alpha1 version and restored back to the v1beta1 version. This conversion error is fixed to avoid potential data loss. Pipeline runs with the finally tasks nested in a pipeline specification are now serialized and stored in the alpha version, only to be deserialized later.
Previously, there was an error in the pod generation when a service account had the secrets field as {} . The task runs failed with CouldntGetTask because the GET request with an empty secret name returned an error, indicating that the resource name may not be empty. This issue is fixed by avoiding an empty secret name in the kubeclient GET request.
Pipelines with the v1beta1 API versions can now be requested along with the v1alpha1 version, without losing the finally tasks. Applying the returned v1alpha1 version will store the resource as v1beta1 , with the finally section restored to its original state.
Previously, an unset selfLink field in the controller caused an error in Kubernetes v1.20 clusters. As a temporary fix, the CloudEvent source field is set to a value that matches the current source URI, without the value of the auto-populated selfLink field.
Previously, a secret name with dots such as gcr.io led to a task run creation failure. This happened because the secret name was used internally as part of a volume mount name. The volume mount name conforms to the RFC 1123 DNS label convention and disallows dots as part of the name. This issue is fixed by replacing the dot with a dash, which results in a readable name.
Context variables are now validated in the finally tasks.
Previously, when the task run reconciler was passed a task run that did not have a status update containing the name of the pod it created, the task run reconciler listed the pods associated with the task run. The task run reconciler used the labels of the task run, which were propagated to the pod, to find the pod. Changing these labels while the task run was running caused the code to not find the existing pod. As a result, duplicate pods were created. This issue is fixed by changing the task run reconciler to only use the tekton.dev/taskRun Tekton-controlled label when finding the pod.
Previously, when a pipeline accepted an optional workspace and passed it to a pipeline task, the pipeline run reconciler stopped with an error if the workspace was not provided, even though a missing workspace binding is a valid state for an optional workspace. This issue is fixed by ensuring that the pipeline run reconciler does not fail to create a task run, even if an optional workspace is not provided.
The sorted order of step statuses matches the order of step containers.
Previously, the task run status was set to unknown when a pod encountered the CreateContainerConfigError reason, which meant that the task and the pipeline ran until the pod timed out. This issue is fixed by setting the task run status to false , so that the task is set as failed when the pod encounters the CreateContainerConfigError reason. Previously, pipeline results were resolved on the first reconciliation, after a pipeline run was completed. This could fail the resolution resulting in the Succeeded condition of the pipeline run being overwritten. As a result, the final status information was lost, potentially confusing any services watching the pipeline run conditions. This issue is fixed by moving the resolution of pipeline results to the end of a reconciliation, when the pipeline run is put into a Succeeded or True condition. Execution status variable is now validated. This avoids validating task results while validating context variables to access execution status. Previously, a pipeline result that contained an invalid variable would be added to the pipeline run with the literal expression of the variable intact. Therefore, it was difficult to assess whether the results were populated correctly. This issue is fixed by filtering out the pipeline run results that reference failed task runs. Now, a pipeline result that contains an invalid variable will not be emitted by the pipeline run at all. The tkn eventlistener describe command is fixed to avoid crashing without a template. It also displays the details about trigger references. Upgrades from Pipelines 1.3.x and earlier versions to Pipelines 1.4.0 breaks event listeners because of the unavailability of template.name . In Pipelines 1.4.1, the template.name has been restored to avoid breaking event listeners in triggers. In Pipelines 1.4.1, the ConsoleQuickStart custom resource has been updated to align with OpenShift Container Platform 4.7 capabilities and behavior. 4.1.10. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.3 4.1.10.1. New features Red Hat OpenShift Pipelines Technology Preview (TP) 1.3 is now available on OpenShift Container Platform 4.7. Red Hat OpenShift Pipelines TP 1.3 is updated to support: Tekton Pipelines 0.19.0 Tekton tkn CLI 0.15.0 Tekton Triggers 0.10.2 cluster tasks based on Tekton Catalog 0.19.0 IBM Power Systems on OpenShift Container Platform 4.7 IBM Z and LinuxONE on OpenShift Container Platform 4.7 In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.3. 4.1.10.1.1. Pipelines Tasks that build images, such as S2I and Buildah tasks, now emit a URL of the image built that includes the image SHA. Conditions in pipeline tasks that reference custom tasks are disallowed because the Condition custom resource definition (CRD) has been deprecated. Variable expansion is now added in the Task CRD for the following fields: spec.steps[].imagePullPolicy and spec.sidecar[].imagePullPolicy . You can disable the built-in credential mechanism in Tekton by setting the disable-creds-init feature-flag to true . Resolved when expressions are now listed in the Skipped Tasks and the Task Runs sections in the Status field of the PipelineRun configuration. The git init command can now clone recursive submodules. A Task CR author can now specify a timeout for a step in the Task spec. 
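As an illustration of the step-level timeout mentioned above, the following minimal Task sketch stops a single step after a fixed duration. The task name, image, and script are placeholders, and the sketch assumes the step-level timeout field takes a duration string, as in the upstream Tekton documentation:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: step-timeout-example        # placeholder name
spec:
  steps:
    - name: slow-step
      image: registry.access.redhat.com/ubi8/ubi-minimal   # any small shell image works
      script: |
        sleep 60                    # deliberately exceeds the timeout below
      timeout: 30s                  # the step is terminated if it runs longer than 30 seconds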
You can now base the entry point image on the distroless/static:nonroot image and give it a mode to copy itself to the destination, without relying on the cp command being present in the base image. You can now use the configuration flag require-git-ssh-secret-known-hosts to disallow omitting known hosts in the Git SSH secret. When the flag value is set to true , you must include the known_host field in the Git SSH secret. The default value for the flag is false . The concept of optional workspaces is now introduced. A task or pipeline might declare a workspace optional and conditionally change their behavior based on its presence. A task run or pipeline run might also omit that workspace, thereby modifying the task or pipeline behavior. The default task run workspaces are not added in place of an omitted optional workspace. Credentials initialization in Tekton now detects an SSH credential that is used with a non-SSH URL, and vice versa in Git pipeline resources, and logs a warning in the step containers. The task run controller emits a warning event if the affinity specified by the pod template is overwritten by the affinity assistant. The task run reconciler now records metrics for cloud events that are emitted once a task run is completed. This includes retries. 4.1.10.1.2. Pipelines CLI Support for --no-headers flag is now added to the following commands: tkn condition list , tkn triggerbinding list , tkn eventlistener list , tkn clustertask list , tkn clustertriggerbinding list . When used together, the --last or --use options override the --prefix-name and --timeout options. The tkn eventlistener logs command is now added to view the EventListener logs. The tekton hub commands are now integrated into the tkn CLI. The --nocolour option is now changed to --no-color . The --all-namespaces flag is added to the following commands: tkn triggertemplate list , tkn condition list , tkn triggerbinding list , tkn eventlistener list . 4.1.10.1.3. Triggers You can now specify your resource information in the EventListener template. It is now mandatory for EventListener service accounts to have the list and watch verbs, in addition to the get verb for all the triggers resources. This enables you to use Listers to fetch data from EventListener , Trigger , TriggerBinding , TriggerTemplate , and ClusterTriggerBinding resources. You can use this feature to create a Sink object rather than specifying multiple informers, and directly make calls to the API server. A new Interceptor interface is added to support immutable input event bodies. Interceptors can now add data or fields to a new extensions field, and cannot modify the input bodies making them immutable. The CEL interceptor uses this new Interceptor interface. A namespaceSelector field is added to the EventListener resource. Use it to specify the namespaces from where the EventListener resource can fetch the Trigger object for processing events. To use the namespaceSelector field, the service account for the EventListener resource must have a cluster role. The triggers EventListener resource now supports end-to-end secure connection to the eventlistener pod. The escaping parameters behavior in the TriggerTemplates resource by replacing " with \" is now removed. A new resources field, supporting Kubernetes resources, is introduced as part of the EventListener spec. A new functionality for the CEL interceptor, with support for upper and lower-casing of ASCII strings, is added. 
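To illustrate the namespaceSelector field and the new resources field described above, the following sketch shows how they might be combined in an EventListener. The names, namespaces, and resource requests are placeholders, and the available subfields can vary between Triggers versions:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: example-listener                    # placeholder name
spec:
  serviceAccountName: example-triggers-sa   # placeholder; needs a cluster role when namespaceSelector is used
  namespaceSelector:
    matchNames:
      - project-a                           # namespaces from which Trigger objects are fetched
      - project-b
  resources:
    kubernetesResource:
      serviceType: NodePort                 # expose the listener service as a NodePort service
      spec:
        template:
          spec:
            serviceAccountName: example-triggers-sa
            containers:
              - resources:
                  requests:
                    memory: "64Mi"
                    cpu: "250m"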
You can embed TriggerBinding resources by using the name and value fields in a trigger, or an event listener.
The PodSecurityPolicy configuration is updated to run in restricted environments. It ensures that containers must run as non-root. In addition, the role-based access control for using the pod security policy is moved from cluster-scoped to namespace-scoped. This ensures that the triggers cannot use other pod security policies that are unrelated to a namespace.
Support for embedded trigger templates is now added. You can either use the name field to refer to an embedded template or embed the template inside the spec field.
4.1.10.2. Deprecated features
Pipeline templates that use PipelineResources CRDs are now deprecated and will be removed in a future release.
The template.name field is deprecated in favor of the template.ref field and will be removed in a future release.
The -c shorthand for the --check command has been removed. In addition, global tkn flags are added to the version command.
4.1.10.3. Known issues
CEL overlays add fields to a new top-level extensions field, instead of modifying the incoming event body. TriggerBinding resources can access values within this new extensions field using the USD(extensions.<key>) syntax. Update your binding to use the USD(extensions.<key>) syntax instead of the USD(body.<overlay-key>) syntax.
The escaping parameters behavior by replacing " with \" is now removed. If you need to retain the old escaping parameters behavior, add the tekton.dev/old-escape-quotes: "true" annotation to your TriggerTemplate specification.
You can embed TriggerBinding resources by using the name and value fields inside a trigger or an event listener. However, you cannot specify both name and ref fields for a single binding. Use the ref field to refer to a TriggerBinding resource and the name field for embedded bindings.
An interceptor cannot attempt to reference a secret outside the namespace of an EventListener resource. You must include secrets in the namespace of the EventListener resource.
In Triggers 0.9.0 and later, if a body or header-based TriggerBinding parameter is missing or malformed in an event payload, the default values are used instead of displaying an error.
Tasks and pipelines created with WhenExpression objects using Tekton Pipelines 0.16.x must be reapplied to fix their JSON annotations.
When a pipeline accepts an optional workspace and gives it to a task, the pipeline run stalls if the workspace is not provided.
To use the Buildah cluster task in a disconnected environment, ensure that the Dockerfile uses an internal image stream as the base image, and then use it in the same manner as any S2I cluster task.
4.1.10.4. Fixed issues
Extensions added by a CEL Interceptor are passed on to webhook interceptors by adding the Extensions field within the event body.
The activity timeout for log readers is now configurable using the LogOptions field. However, the default timeout of 10 seconds is retained.
The log command ignores the --follow flag when a task run or pipeline run is complete, and reads available logs instead of live logs.
References to the following Tekton resources: EventListener , TriggerBinding , ClusterTriggerBinding , Condition , and TriggerTemplate are now standardized and made consistent across all user-facing messages in tkn commands.
Previously, if you started a canceled task run or pipeline run with the --use-taskrun <canceled-task-run-name> , --use-pipelinerun <canceled-pipeline-run-name> or --last flags, the new run would be canceled. This bug is now fixed. The tkn pr desc command is now enhanced to ensure that it does not fail in case of pipeline runs with conditions. When you delete a task run using the tkn tr delete command with the --task option, and a cluster task exists with the same name, the task runs for the cluster task also get deleted. As a workaround, filter the task runs by using the TaskRefKind field. The tkn triggertemplate describe command would display only part of the apiVersion value in the output. For example, only triggers.tekton.dev was displayed instead of triggers.tekton.dev/v1alpha1 . This bug is now fixed. The webhook, under certain conditions, would fail to acquire a lease and not function correctly. This bug is now fixed. Pipelines with when expressions created in v0.16.3 can now be run in v0.17.1 and later. After an upgrade, you do not need to reapply pipeline definitions created in versions because both the uppercase and lowercase first letters for the annotations are now supported. By default, the leader-election-ha field is now enabled for high availability. When the disable-ha controller flag is set to true , it disables high availability support. Issues with duplicate cloud events are now fixed. Cloud events are now sent only when a condition changes the state, reason, or message. When a service account name is missing from a PipelineRun or TaskRun spec, the controller uses the service account name from the config-defaults config map. If the service account name is also missing in the config-defaults config map, the controller now sets it to default in the spec. Validation for compatibility with the affinity assistant is now supported when the same persistent volume claim is used for multiple workspaces, but with different subpaths. 4.1.11. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.2 4.1.11.1. New features Red Hat OpenShift Pipelines Technology Preview (TP) 1.2 is now available on OpenShift Container Platform 4.6. Red Hat OpenShift Pipelines TP 1.2 is updated to support: Tekton Pipelines 0.16.3 Tekton tkn CLI 0.13.1 Tekton Triggers 0.8.1 cluster tasks based on Tekton Catalog 0.16 IBM Power Systems on OpenShift Container Platform 4.6 IBM Z and LinuxONE on OpenShift Container Platform 4.6 In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.2. 4.1.11.1.1. Pipelines This release of Red Hat OpenShift Pipelines adds support for a disconnected installation. Note Installations in restricted environments are currently not supported on IBM Power Systems, IBM Z, and LinuxONE. You can now use the when field, instead of conditions resource, to run a task only when certain criteria are met. The key components of WhenExpression resources are Input , Operator , and Values . If all the when expressions evaluate to True , then the task is run. If any of the when expressions evaluate to False , the task is skipped. Step statuses are now updated if a task run is canceled or times out. Support for Git Large File Storage (LFS) is now available to build the base image used by git-init . You can now use the taskSpec field to specify metadata, such as labels and annotations, when a task is embedded in a pipeline. Cloud events are now supported by pipeline runs. 
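The taskSpec metadata support mentioned above can be sketched as follows; the pipeline name, labels, and annotation are placeholders that only show where the metadata block sits in an embedded task:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: embedded-metadata-example                # placeholder name
spec:
  tasks:
    - name: print-message
      taskSpec:
        metadata:
          labels:
            app.kubernetes.io/name: example      # propagated to the pod created for the task run
          annotations:
            example.com/owner: "team-a"          # hypothetical annotation
        steps:
          - name: print
            image: ubuntu
            script: |
              echo "this step runs in a pod labeled by the embedded metadata"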
Retries with backoff are now enabled for cloud events sent by the cloud event pipeline resource. You can now set a default Workspace configuration for any workspace that a Task resource declares, but that a TaskRun resource does not explicitly provide. Support is available for namespace variable interpolation for the PipelineRun namespace and TaskRun namespace. Validation for TaskRun objects is now added to check that not more than one persistent volume claim workspace is used when a TaskRun resource is associated with an Affinity Assistant. If more than one persistent volume claim workspace is used, the task run fails with a TaskRunValidationFailed condition. Note that by default, the Affinity Assistant is disabled in Red Hat OpenShift Pipelines, so you will need to enable the assistant to use it. 4.1.11.1.2. Pipelines CLI The tkn task describe , tkn taskrun describe , tkn clustertask describe , tkn pipeline describe , and tkn pipelinerun describe commands now: Automatically select the Task , TaskRun , ClusterTask , Pipeline and PipelineRun resource, respectively, if only one of them is present. Display the results of the Task , TaskRun , ClusterTask , Pipeline and PipelineRun resource in their outputs, respectively. Display workspaces declared in the Task , TaskRun , ClusterTask , Pipeline and PipelineRun resource in their outputs, respectively. You can now use the --prefix-name option with the tkn clustertask start command to specify a prefix for the name of a task run. Interactive mode support has now been provided to the tkn clustertask start command. You can now specify PodTemplate properties supported by pipelines using local or remote file definitions for TaskRun and PipelineRun objects. You can now use the --use-params-defaults option with the tkn clustertask start command to use the default values set in the ClusterTask configuration and create the task run. The --use-param-defaults flag for the tkn pipeline start command now prompts the interactive mode if the default values have not been specified for some of the parameters. 4.1.11.1.3. Triggers The Common Expression Language (CEL) function named parseYAML has been added to parse a YAML string into a map of strings. Error messages for parsing CEL expressions have been improved to make them more granular while evaluating expressions and when parsing the hook body for creating the evaluation environment. Support is now available for marshaling boolean values and maps if they are used as the values of expressions in a CEL overlay mechanism. The following fields have been added to the EventListener object: The replicas field enables the event listener to run more than one pod by specifying the number of replicas in the YAML file. The NodeSelector field enables the EventListener object to schedule the event listener pod to a specific node. Webhook interceptors can now parse the EventListener-Request-URL header to extract parameters from the original request URL being handled by the event listener. Annotations from the event listener can now be propagated to the deployment, services, and other pods. Note that custom annotations on services or deployment are overwritten, and hence, must be added to the event listener annotations so that they are propagated. Proper validation for replicas in the EventListener specification is now available for cases when a user specifies the spec.replicas values as negative or zero . 
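As a sketch of the replicas and NodeSelector fields described above, and assuming the node selector is set through the event listener pod template as in the Triggers version shipped with this release, an EventListener might look like the following; all names and the node label are placeholders:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: scaled-listener                     # placeholder name
spec:
  serviceAccountName: example-triggers-sa   # placeholder service account
  replicas: 3                               # run three event listener pods
  podTemplate:
    nodeSelector:
      kubernetes.io/os: linux               # schedule the listener pods onto matching nodes
  triggers:
    - bindings:
        - ref: example-binding              # placeholder TriggerBinding
      template:
        ref: example-template               # placeholder; older Triggers versions use name instead of ref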
You can now specify the TriggerCRD object inside the EventListener spec as a reference using the TriggerRef field to create the TriggerCRD object separately and then bind it inside the EventListener spec. Validation and defaults for the TriggerCRD object are now available. 4.1.11.2. Deprecated features USD(params) parameters are now removed from the triggertemplate resource and replaced by USD(tt.params) to avoid confusion between the resourcetemplate and triggertemplate resource parameters. The ServiceAccount reference of the optional EventListenerTrigger -based authentication level has changed from an object reference to a ServiceAccountName string. This ensures that the ServiceAccount reference is in the same namespace as the EventListenerTrigger object. The Conditions custom resource definition (CRD) is now deprecated; use the WhenExpressions CRD instead. The PipelineRun.Spec.ServiceAccountNames object is being deprecated and replaced by the PipelineRun.Spec.TaskRunSpec[].ServiceAccountName object. 4.1.11.3. Known issues This release of Red Hat OpenShift Pipelines adds support for a disconnected installation. However, some images used by the cluster tasks must be mirrored for them to work in disconnected clusters. Pipelines in the openshift namespace are not deleted after you uninstall the Red Hat OpenShift Pipelines Operator. Use the oc delete pipelines -n openshift --all command to delete the pipelines. Uninstalling the Red Hat OpenShift Pipelines Operator does not remove the event listeners. As a workaround, to remove the EventListener and Pod CRDs: Edit the EventListener object with the foregroundDeletion finalizers: USD oc patch el/<eventlistener_name> -p '{"metadata":{"finalizers":["foregroundDeletion"]}}' --type=merge For example: USD oc patch el/github-listener-interceptor -p '{"metadata":{"finalizers":["foregroundDeletion"]}}' --type=merge Delete the EventListener CRD: USD oc patch crd/eventlisteners.triggers.tekton.dev -p '{"metadata":{"finalizers":[]}}' --type=merge When you run a multi-arch container image task without command specification on an IBM Power Systems (ppc64le) or IBM Z (s390x) cluster, the TaskRun resource fails with the following error: Error executing command: fork/exec /bin/bash: exec format error As a workaround, use an architecture specific container image or specify the sha256 digest to point to the correct architecture. To get the sha256 digest enter: USD skopeo inspect --raw <image_name>| jq '.manifests[] | select(.platform.architecture == "<architecture>") | .digest' 4.1.11.4. Fixed issues A simple syntax validation to check the CEL filter, overlays in the Webhook validator, and the expressions in the interceptor has now been added. Triggers no longer overwrite annotations set on the underlying deployment and service objects. Previously, an event listener would stop accepting events. This fix adds an idle timeout of 120 seconds for the EventListener sink to resolve this issue. Previously, canceling a pipeline run with a Failed(Canceled) state gave a success message. This has been fixed to display an error instead. The tkn eventlistener list command now provides the status of the listed event listeners, thus enabling you to easily identify the available ones. Consistent error messages are now displayed for the triggers list and triggers describe commands when triggers are not installed or when a resource cannot be found. Previously, a large number of idle connections would build up during cloud event delivery. 
The DisableKeepAlives: true parameter was added to the cloudeventclient config to fix this issue. Thus, a new connection is set up for every cloud event. Previously, the creds-init code would write empty files to the disk even if credentials of a given type were not provided. This fix modifies the creds-init code to write files for only those credentials that have actually been mounted from correctly annotated secrets. 4.1.12. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.1 4.1.12.1. New features Red Hat OpenShift Pipelines Technology Preview (TP) 1.1 is now available on OpenShift Container Platform 4.5. Red Hat OpenShift Pipelines TP 1.1 is updated to support: Tekton Pipelines 0.14.3 Tekton tkn CLI 0.11.0 Tekton Triggers 0.6.1 cluster tasks based on Tekton Catalog 0.14 In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.1. 4.1.12.1.1. Pipelines Workspaces can now be used instead of pipeline resources. It is recommended that you use workspaces in OpenShift Pipelines, as pipeline resources are difficult to debug, limited in scope, and make tasks less reusable. For more details on workspaces, see the Understanding OpenShift Pipelines section. Workspace support for volume claim templates has been added: The volume claim template for a pipeline run and task run can now be added as a volume source for workspaces. The tekton-controller then creates a persistent volume claim (PVC) using the template that is seen as a PVC for all task runs in the pipeline. Thus you do not need to define the PVC configuration every time it binds a workspace that spans multiple tasks. Support to find the name of the PVC when a volume claim template is used as a volume source is now available using variable substitution. Support for improving audits: The PipelineRun.Status field now contains the status of every task run in the pipeline and the pipeline specification used to instantiate a pipeline run to monitor the progress of the pipeline run. Pipeline results have been added to the pipeline specification and PipelineRun status. The TaskRun.Status field now contains the exact task specification used to instantiate the TaskRun resource. Support to apply the default parameter to conditions. A task run created by referencing a cluster task now adds the tekton.dev/clusterTask label instead of the tekton.dev/task label. The kube config writer now adds the ClientKeyData and the ClientCertificateData configurations in the resource structure to enable replacement of the pipeline resource type cluster with the kubeconfig-creator task. The names of the feature-flags and the config-defaults config maps are now customizable. Support for the host network in the pod template used by the task run is now available. An Affinity Assistant is now available to support node affinity in task runs that share workspace volume. By default, this is disabled on OpenShift Pipelines. The pod template has been updated to specify imagePullSecrets to identify secrets that the container runtime should use to authorize container image pulls when starting a pod. Support for emitting warning events from the task run controller if the controller fails to update the task run. Standard or recommended k8s labels have been added to all resources to identify resources belonging to an application or component. The Entrypoint process is now notified for signals and these signals are then propagated using a dedicated PID Group of the Entrypoint process. 
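The imagePullSecrets support in the pod template mentioned above can be sketched on a task run as follows; the task and secret names are placeholders:
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: private-image-run-
spec:
  taskRef:
    name: example-task                         # placeholder task that pulls from a private registry
  podTemplate:
    imagePullSecrets:
      - name: private-registry-credentials     # placeholder pull secret in the same namespace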
The pod template can now be set on a task level at runtime using task run specs. Support for emitting Kubernetes events: The controller now emits events for additional task run lifecycle events - taskrun started and taskrun running . The pipeline run controller now emits an event every time a pipeline starts. In addition to the default Kubernetes events, support for cloud events for task runs is now available. The controller can be configured to send any task run events, such as create, started, and failed, as cloud events. Support for using the USDcontext.<task|taskRun|pipeline|pipelineRun>.name variable to reference the appropriate name when in pipeline runs and task runs. Validation for pipeline run parameters is now available to ensure that all the parameters required by the pipeline are provided by the pipeline run. This also allows pipeline runs to provide extra parameters in addition to the required parameters. You can now specify tasks within a pipeline that will always execute before the pipeline exits, either after finishing all tasks successfully or after a task in the pipeline failed, using the finally field in the pipeline YAML file. The git-clone cluster task is now available. 4.1.12.1.2. Pipelines CLI Support for embedded trigger binding is now available to the tkn evenlistener describe command. Support to recommend subcommands and make suggestions if an incorrect subcommand is used. The tkn task describe command now auto selects the task if only one task is present in the pipeline. You can now start a task using default parameter values by specifying the --use-param-defaults flag in the tkn task start command. You can now specify a volume claim template for pipeline runs or task runs using the --workspace option with the tkn pipeline start or tkn task start commands. The tkn pipelinerun logs command now displays logs for the final tasks listed in the finally section. Interactive mode support has now been provided to the tkn task start command and the describe subcommand for the following tkn resources: pipeline , pipelinerun , task , taskrun , clustertask , and pipelineresource . The tkn version command now displays the version of the triggers installed in the cluster. The tkn pipeline describe command now displays parameter values and timeouts specified for tasks used in the pipeline. Support added for the --last option for the tkn pipelinerun describe and the tkn taskrun describe commands to describe the most recent pipeline run or task run, respectively. The tkn pipeline describe command now displays the conditions applicable to the tasks in the pipeline. You can now use the --no-headers and --all-namespaces flags with the tkn resource list command. 4.1.12.1.3. Triggers The following Common Expression Language (CEL) functions are now available: parseURL to parse and extract portions of a URL parseJSON to parse JSON value types embedded in a string in the payload field of the deployment webhook A new interceptor for webhooks from Bitbucket has been added. Event listeners now display the Address URL and the Available status as additional fields when listed with the kubectl get command. trigger template params now use the USD(tt.params.<paramName>) syntax instead of USD(params.<paramName>) to reduce the confusion between trigger template and resource templates params. You can now add tolerations in the EventListener CRD to ensure that event listeners are deployed with the same configuration even if all nodes are tainted due to security or management issues. 
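The $(tt.params.<paramName>) syntax noted above can be sketched in a minimal TriggerTemplate; the template, task, and parameter names are placeholders:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: example-template                 # placeholder name
spec:
  params:
    - name: git-revision
      description: The git revision to build
      default: main
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      metadata:
        generateName: example-run-
      spec:
        taskRef:
          name: example-task             # placeholder task
        params:
          - name: revision
            value: $(tt.params.git-revision)   # trigger template parameter, not a pipeline parameter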
You can now add a Readiness Probe for event listener Deployment at URL/live . Support for embedding TriggerBinding specifications in event listener triggers is now added. Trigger resources are now annotated with the recommended app.kubernetes.io labels. 4.1.12.2. Deprecated features The following items are deprecated in this release: The --namespace or -n flags for all cluster-wide commands, including the clustertask and clustertriggerbinding commands, are deprecated. It will be removed in a future release. The name field in triggers.bindings within an event listener has been deprecated in favor of the ref field and will be removed in a future release. Variable interpolation in trigger templates using USD(params) has been deprecated in favor of using USD(tt.params) to reduce confusion with the pipeline variable interpolation syntax. The USD(params.<paramName>) syntax will be removed in a future release. The tekton.dev/task label is deprecated on cluster tasks. The TaskRun.Status.ResourceResults.ResourceRef field is deprecated and will be removed. The tkn pipeline create , tkn task create , and tkn resource create -f subcommands have been removed. Namespace validation has been removed from tkn commands. The default timeout of 1h and the -t flag for the tkn ct start command have been removed. The s2i cluster task has been deprecated. 4.1.12.3. Known issues Conditions do not support workspaces. The --workspace option and the interactive mode is not supported for the tkn clustertask start command. Support of backward compatibility for USD(params.<paramName>) syntax forces you to use trigger templates with pipeline specific params as the trigger s webhook is unable to differentiate trigger params from pipelines params. Pipeline metrics report incorrect values when you run a promQL query for tekton_taskrun_count and tekton_taskrun_duration_seconds_count . pipeline runs and task runs continue to be in the Running and Running(Pending) states respectively even when a non existing PVC name is given to a workspace. 4.1.12.4. Fixed issues Previously, the tkn task delete <name> --trs command would delete both the task and cluster task if the name of the task and cluster task were the same. With this fix, the command deletes only the task runs that are created by the task <name> . Previously the tkn pr delete -p <name> --keep 2 command would disregard the -p flag when used with the --keep flag and would delete all the pipeline runs except the latest two. With this fix, the command deletes only the pipeline runs that are created by the pipeline <name> , except for the latest two. The tkn triggertemplate describe output now displays resource templates in a table format instead of YAML format. Previously the buildah cluster task failed when a new user was added to a container. With this fix, the issue has been resolved. 4.1.13. Release notes for Red Hat OpenShift Pipelines Technology Preview 1.0 4.1.13.1. New features Red Hat OpenShift Pipelines Technology Preview (TP) 1.0 is now available on OpenShift Container Platform 4.4. Red Hat OpenShift Pipelines TP 1.0 is updated to support: Tekton Pipelines 0.11.3 Tekton tkn CLI 0.9.0 Tekton Triggers 0.4.0 cluster tasks based on Tekton Catalog 0.11 In addition to the fixes and stability improvements, the following sections highlight what is new in Red Hat OpenShift Pipelines 1.0. 4.1.13.1.1. Pipelines Support for v1beta1 API Version. Support for an improved limit range. Previously, limit range was specified exclusively for the task run and the pipeline run. 
Now there is no need to explicitly specify the limit range. The minimum limit range across the namespace is used. Support for sharing data between tasks using task results and task params. Pipelines can now be configured to not overwrite the HOME environment variable and the working directory of steps. Similar to task steps, sidecars now support script mode. You can now specify a different scheduler name in task run podTemplate resource. Support for variable substitution using Star Array Notation. Tekton controller can now be configured to monitor an individual namespace. A new description field is now added to the specification of pipelines, tasks, cluster tasks, resources, and conditions. Addition of proxy parameters to Git pipeline resources. 4.1.13.1.2. Pipelines CLI The describe subcommand is now added for the following tkn resources: EventListener , Condition , TriggerTemplate , ClusterTask , and TriggerSBinding . Support added for v1beta1 to the following resources along with backward compatibility for v1alpha1 : ClusterTask , Task , Pipeline , PipelineRun , and TaskRun . The following commands can now list output from all namespaces using the --all-namespaces flag option: tkn task list , tkn pipeline list , tkn taskrun list , tkn pipelinerun list The output of these commands is also enhanced to display information without headers using the --no-headers flag option. You can now start a pipeline using default parameter values by specifying --use-param-defaults flag in the tkn pipelines start command. Support for workspace is now added to tkn pipeline start and tkn task start commands. A new clustertriggerbinding command is now added with the following subcommands: describe , delete , and list . You can now directly start a pipeline run using a local or remote yaml file. The describe subcommand now displays an enhanced and detailed output. With the addition of new fields, such as description , timeout , param description , and sidecar status , the command output now provides more detailed information about a specific tkn resource. The tkn task log command now displays logs directly if only one task is present in the namespace. 4.1.13.1.3. Triggers Triggers can now create both v1alpha1 and v1beta1 pipeline resources. Support for new Common Expression Language (CEL) interceptor function - compareSecret . This function securely compares strings to secrets in CEL expressions. Support for authentication and authorization at the event listener trigger level. 4.1.13.2. Deprecated features The following items are deprecated in this release: The environment variable USDHOME , and variable workingDir in the Steps specification are deprecated and might be changed in a future release. Currently in a Step container, the HOME and workingDir variables are overwritten to /tekton/home and /workspace variables, respectively. In a later release, these two fields will not be modified, and will be set to values defined in the container image and the Task YAML. For this release, use the disable-home-env-overwrite and disable-working-directory-overwrite flags to disable overwriting of the HOME and workingDir variables. The following commands are deprecated and might be removed in the future release: tkn pipeline create , tkn task create . The -f flag with the tkn resource create command is now deprecated. It might be removed in the future release. The -t flag and the --timeout flag (with seconds format) for the tkn clustertask create command are now deprecated. 
Only duration timeout format is now supported, for example 1h30s . These deprecated flags might be removed in the future release. 4.1.13.3. Known issues If you are upgrading from an older version of Red Hat OpenShift Pipelines, you must delete your existing deployments before upgrading to Red Hat OpenShift Pipelines version 1.0. To delete an existing deployment, you must first delete Custom Resources and then uninstall the Red Hat OpenShift Pipelines Operator. For more details, see the uninstalling Red Hat OpenShift Pipelines section. Submitting the same v1alpha1 tasks more than once results in an error. Use the oc replace command instead of oc apply when re-submitting a v1alpha1 task. The buildah cluster task does not work when a new user is added to a container. When the Operator is installed, the --storage-driver flag for the buildah cluster task is not specified, therefore the flag is set to its default value. In some cases, this causes the storage driver to be set incorrectly. When a new user is added, the incorrect storage-driver results in the failure of the buildah cluster task with the following error: As a workaround, manually set the --storage-driver flag value to overlay in the buildah-task.yaml file: Login to your cluster as a cluster-admin : Use the oc edit command to edit buildah cluster task: The current version of the buildah clustertask YAML file opens in the editor set by your EDITOR environment variable. Under the Steps field, locate the following command field: Replace the command field with the following: Save the file and exit. Alternatively, you can also modify the buildah cluster task YAML file directly on the web console by navigating to Pipelines Cluster Tasks buildah . Select Edit Cluster Task from the Actions menu and replace the command field as shown in the procedure. 4.1.13.4. Fixed issues Previously, the DeploymentConfig task triggered a new deployment build even when an image build was already in progress. This caused the deployment of the pipeline to fail. With this fix, the deploy task command is now replaced with the oc rollout status command which waits for the in-progress deployment to finish. Support for APP_NAME parameter is now added in pipeline templates. Previously, the pipeline template for Java S2I failed to look up the image in the registry. With this fix, the image is looked up using the existing image pipeline resources instead of the user provided IMAGE_NAME parameter. All the OpenShift Pipelines images are now based on the Red Hat Universal Base Images (UBI). Previously, when the pipeline was installed in a namespace other than tekton-pipelines , the tkn version command displayed the pipeline version as unknown . With this fix, the tkn version command now displays the correct pipeline version in any namespace. The -c flag is no longer supported for the tkn version command. Non-admin users can now list the cluster trigger bindings. The event listener CompareSecret function is now fixed for the CEL Interceptor. The list , describe , and start subcommands for tasks and cluster tasks now correctly display the output in case a task and cluster task have the same name. Previously, the OpenShift Pipelines Operator modified the privileged security context constraints (SCCs), which caused an error during cluster upgrade. This error is now fixed. In the tekton-pipelines namespace, the timeouts of all task runs and pipeline runs are now set to the value of default-timeout-minutes field using the config map. 
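The default-timeout-minutes setting mentioned above is read from the config-defaults config map. A minimal sketch, assuming the config map lives in the tekton-pipelines namespace used by this release:
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines            # adjust to the namespace where the pipelines controller runs
data:
  default-timeout-minutes: "60"          # applied to task runs and pipeline runs that do not set a timeout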
Previously, the pipelines section in the web console was not displayed for non-admin users. This issue is now resolved. 4.2. Understanding OpenShift Pipelines Red Hat OpenShift Pipelines is a cloud-native, continuous integration and continuous delivery (CI/CD) solution based on Kubernetes resources. It uses Tekton building blocks to automate deployments across multiple platforms by abstracting away the underlying implementation details. Tekton introduces a number of standard custom resource definitions (CRDs) for defining CI/CD pipelines that are portable across Kubernetes distributions. 4.2.1. Key features Red Hat OpenShift Pipelines is a serverless CI/CD system that runs pipelines with all the required dependencies in isolated containers. Red Hat OpenShift Pipelines are designed for decentralized teams that work on microservice-based architecture. Red Hat OpenShift Pipelines use standard CI/CD pipeline definitions that are easy to extend and integrate with the existing Kubernetes tools, enabling you to scale on-demand. You can use Red Hat OpenShift Pipelines to build images with Kubernetes tools such as Source-to-Image (S2I), Buildah, Buildpacks, and Kaniko that are portable across any Kubernetes platform. You can use the OpenShift Container Platform web console Developer perspective to create Tekton resources, view logs of pipeline runs, and manage pipelines in your OpenShift Container Platform namespaces. 4.2.2. OpenShift Pipeline Concepts This guide provides a detailed view of the various pipeline concepts. 4.2.2.1. Tasks Tasks are the building blocks of a pipeline and consists of sequentially executed steps. It is essentially a function of inputs and outputs. A task can run individually or as a part of the pipeline. Tasks are reusable and can be used in multiple Pipelines. Steps are a series of commands that are sequentially executed by the task and achieve a specific goal, such as building an image. Every task runs as a pod, and each step runs as a container within that pod. Because steps run within the same pod, they can access the same volumes for caching files, config maps, and secrets. The following example shows the apply-manifests task. apiVersion: tekton.dev/v1beta1 1 kind: Task 2 metadata: name: apply-manifests 3 spec: 4 workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: "k8s" steps: - name: apply image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest workingDir: /workspace/source command: ["/bin/bash", "-c"] args: - |- echo Applying manifests in USD(params.manifest_dir) directory oc apply -f USD(params.manifest_dir) echo ----------------------------------- 1 The task API version, v1beta1 . 2 The type of Kubernetes object, Task . 3 The unique name of this task. 4 The list of parameters and steps in the task and the workspace used by the task. This task starts the pod and runs a container inside that pod using the specified image to run the specified commands. Note Starting with Pipelines 1.6, the following defaults from the step YAML file are removed: The HOME environment variable does not default to the /tekton/home directory The workingDir field does not default to the /workspace directory Instead, the container for the step defines the HOME environment variable and the workingDir field. However, you can override the default values by specifying the custom values in the YAML file for the step. 
As a temporary measure, to maintain backward compatibility with the older Pipelines versions, you can set the following fields in the TektonConfig custom resource definition to false : 4.2.2.2. When expression When expressions guard task execution by setting criteria for the execution of tasks within a pipeline. They contain a list of components that allows a task to run only when certain criteria are met. When expressions are also supported in the final set of tasks that are specified using the finally field in the pipeline YAML file. The key components of a when expression are as follows: input : Specifies static inputs or variables such as a parameter, task result, and execution status. You must enter a valid input. If you do not enter a valid input, its value defaults to an empty string. operator : Specifies the relationship of an input to a set of values . Enter in or notin as your operator values. values : Specifies an array of string values. Enter a non-empty array of static values or variables such as parameters, results, and a bound state of a workspace. The declared when expressions are evaluated before the task is run. If the value of a when expression is True , the task is run. If the value of a when expression is False , the task is skipped. You can use the when expressions in various use cases. For example, whether: The result of a task is as expected. A file in a Git repository has changed in the commits. An image exists in the registry. An optional workspace is available. The following example shows the when expressions for a pipeline run. The pipeline run will execute the create-file task only if the following criteria are met: the path parameter is README.md , and the echo-file-exists task executed only if the exists result from the check-file task is yes . apiVersion: tekton.dev/v1beta1 kind: PipelineRun 1 metadata: generateName: guarded-pr- spec: serviceAccountName: 'pipeline' pipelineSpec: params: - name: path type: string description: The path of the file to be created workspaces: - name: source description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - name: create-file 2 when: - input: "USD(params.path)" operator: in values: ["README.md"] workspaces: - name: source workspace: source taskSpec: workspaces: - name: source description: The workspace to create the readme file in steps: - name: write-new-stuff image: ubuntu script: 'touch USD(workspaces.source.path)/README.md' - name: check-file params: - name: path value: "USD(params.path)" workspaces: - name: source workspace: source runAfter: - create-file taskSpec: params: - name: path workspaces: - name: source description: The workspace to check for the file results: - name: exists description: indicates whether the file exists or is missing steps: - name: check-file image: alpine script: | if test -f USD(workspaces.source.path)/USD(params.path); then printf yes | tee /tekton/results/exists else printf no | tee /tekton/results/exists fi - name: echo-file-exists when: 3 - input: "USD(tasks.check-file.results.exists)" operator: in values: ["yes"] taskSpec: steps: - name: echo image: ubuntu script: 'echo file exists' ... - name: task-should-be-skipped-1 when: 4 - input: "USD(params.path)" operator: notin values: ["README.md"] taskSpec: steps: - name: echo image: ubuntu script: exit 1 ... 
finally: - name: finally-task-should-be-executed when: 5 - input: "USD(tasks.echo-file-exists.status)" operator: in values: ["Succeeded"] - input: "USD(tasks.status)" operator: in values: ["Succeeded"] - input: "USD(tasks.check-file.results.exists)" operator: in values: ["yes"] - input: "USD(params.path)" operator: in values: ["README.md"] taskSpec: steps: - name: echo image: ubuntu script: 'echo finally done' params: - name: path value: README.md workspaces: - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Mi 1 Specifies the type of Kubernetes object. In this example, PipelineRun . 2 Task create-file used in the Pipeline. 3 when expression that specifies to execute the echo-file-exists task only if the exists result from the check-file task is yes . 4 when expression that specifies to skip the task-should-be-skipped-1 task only if the path parameter is README.md . 5 when expression that specifies to execute the finally-task-should-be-executed task only if the execution status of the echo-file-exists task and the task status is Succeeded , the exists result from the check-file task is yes , and the path parameter is README.md . The Pipeline Run details page of the OpenShift Container Platform web console shows the status of the tasks and when expressions as follows: All the criteria are met: Tasks and the when expression symbol, which is represented by a diamond shape are green. Any one of the criteria are not met: Task is skipped. Skipped tasks and the when expression symbol are grey. None of the criteria are met: Task is skipped. Skipped tasks and the when expression symbol are grey. Task run fails: Failed tasks and the when expression symbol are red. 4.2.2.3. Finally tasks The finally tasks are the final set of tasks specified using the finally field in the pipeline YAML file. A finally task always executes the tasks within the pipeline, irrespective of whether the pipeline runs are executed successfully. The finally tasks are executed in parallel after all the pipeline tasks are run, before the corresponding pipeline exits. You can configure a finally task to consume the results of any task within the same pipeline. This approach does not change the order in which this final task is run. It is executed in parallel with other final tasks after all the non-final tasks are executed. The following example shows a code snippet of the clone-cleanup-workspace pipeline. This code clones the repository into a shared workspace and cleans up the workspace. After executing the pipeline tasks, the cleanup task specified in the finally section of the pipeline YAML file cleans up the workspace. apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: clone-cleanup-workspace 1 spec: workspaces: - name: git-source 2 tasks: - name: clone-app-repo 3 taskRef: name: git-clone-from-catalog params: - name: url value: https://github.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - name: cleanup 4 taskRef: 5 name: cleanup-workspace workspaces: 6 - name: source workspace: git-source - name: check-git-commit params: 7 - name: commit value: USD(tasks.clone-app-repo.results.commit) taskSpec: 8 params: - name: commit steps: - name: check-commit-initialized image: alpine script: | if [[ ! USD(params.commit) ]]; then exit 1 fi 1 Unique name of the Pipeline. 2 The shared workspace where the git repository is cloned. 
3 The task to clone the application repository to the shared workspace. 4 The task to clean-up the shared workspace. 5 A reference to the task that is to be executed in the TaskRun. 6 A shared storage volume that a Task in a Pipeline needs at runtime to receive input or provide output. 7 A list of parameters required for a task. If a parameter does not have an implicit default value, you must explicitly set its value. 8 Embedded task definition. 4.2.2.4. TaskRun A TaskRun instantiates a Task for execution with specific inputs, outputs, and execution parameters on a cluster. It can be invoked on its own or as part of a PipelineRun for each Task in a pipeline. A Task consists of one or more Steps that execute container images, and each container image performs a specific piece of build work. A TaskRun executes the Steps in a Task in the specified order, until all Steps execute successfully or a failure occurs. A TaskRun is automatically created by a PipelineRun for each Task in a Pipeline. The following example shows a TaskRun that runs the apply-manifests Task with the relevant input parameters: apiVersion: tekton.dev/v1beta1 1 kind: TaskRun 2 metadata: name: apply-manifests-taskrun 3 spec: 4 serviceAccountName: pipeline taskRef: 5 kind: Task name: apply-manifests workspaces: 6 - name: source persistentVolumeClaim: claimName: source-pvc 1 TaskRun API version v1beta1 . 2 Specifies the type of Kubernetes object. In this example, TaskRun . 3 Unique name to identify this TaskRun. 4 Definition of the TaskRun. For this TaskRun, the Task and the required workspace are specified. 5 Name of the Task reference used for this TaskRun. This TaskRun executes the apply-manifests Task. 6 Workspace used by the TaskRun. 4.2.2.5. Pipelines A Pipeline is a collection of Task resources arranged in a specific order of execution. They are executed to construct complex workflows that automate the build, deployment and delivery of applications. You can define a CI/CD workflow for your application using pipelines containing one or more tasks. A Pipeline resource definition consists of a number of fields or attributes, which together enable the pipeline to accomplish a specific goal. Each Pipeline resource definition must contain at least one Task resource, which ingests specific inputs and produces specific outputs. The pipeline definition can also optionally include Conditions , Workspaces , Parameters , or Resources depending on the application requirements. 
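Before looking at a complete pipeline, it can help to see the smallest shape the resource can take. The following sketch is illustrative only and uses hypothetical names; it embeds its single task inline with taskSpec instead of referencing an existing Task or ClusterTask:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: minimal-pipeline
spec:
  tasks:
    - name: say-hello
      taskSpec:
        steps:
          - name: echo
            image: ubuntu
            script: 'echo hello from a minimal pipeline'

You can apply this manifest with oc apply -f and start it with tkn pipeline start minimal-pipeline; because it declares no parameters or workspaces, the run needs no additional input.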
The following example shows the build-and-deploy pipeline, which builds an application image from a Git repository using the buildah ClusterTask resource: apiVersion: tekton.dev/v1beta1 1 kind: Pipeline 2 metadata: name: build-and-deploy 3 spec: 4 workspaces: 5 - name: shared-workspace params: 6 - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: "pipelines-1.10" - name: IMAGE type: string description: image to be built from the code tasks: 7 - name: fetch-repository taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: USD(params.git-url) - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.git-revision) - name: build-image 8 taskRef: name: buildah kind: ClusterTask params: - name: TLSVERIFY value: "false" - name: IMAGE value: USD(params.IMAGE) workspaces: - name: source workspace: shared-workspace runAfter: - fetch-repository - name: apply-manifests 9 taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: 10 - build-image - name: update-deployment taskRef: name: update-deployment workspaces: - name: source workspace: shared-workspace params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests 1 Pipeline API version v1beta1 . 2 Specifies the type of Kubernetes object. In this example, Pipeline . 3 Unique name of this Pipeline. 4 Specifies the definition and structure of the Pipeline. 5 Workspaces used across all the Tasks in the Pipeline. 6 Parameters used across all the Tasks in the Pipeline. 7 Specifies the list of Tasks used in the Pipeline. 8 Task build-image , which uses the buildah ClusterTask to build application images from a given Git repository. 9 Task apply-manifests , which uses a user-defined Task with the same name. 10 Specifies the sequence in which Tasks are run in a Pipeline. In this example, the apply-manifests Task is run only after the build-image Task is completed. Note The Red Hat OpenShift Pipelines Operator installs the Buildah cluster task and creates the pipeline service account with sufficient permission to build and push an image. The Buildah cluster task can fail when associated with a different service account with insufficient permissions. 4.2.2.6. PipelineRun A PipelineRun is a type of resource that binds a pipeline, workspaces, credentials, and a set of parameter values specific to a scenario to run the CI/CD workflow. A pipeline run is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a task run for each task in the pipeline run. The pipeline runs the tasks sequentially until they are complete or a task fails. The status field tracks and the progress of each task run and stores it for monitoring and auditing purposes. 
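Because this status is recorded on the PipelineRun object itself, you can inspect it with the standard CLIs once a run exists. The commands below are a sketch only; the run name is a placeholder:

$ tkn pipelinerun describe <pipelinerun-name>
$ oc get pipelinerun <pipelinerun-name> -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].reason}'

The Succeeded condition reports reasons such as Running, Succeeded, or Failed, which is what monitoring and auditing tools typically read.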
The following example runs the build-and-deploy pipeline with relevant resources and parameters: apiVersion: tekton.dev/v1beta1 1 kind: PipelineRun 2 metadata: name: build-deploy-api-pipelinerun 3 spec: pipelineRef: name: build-and-deploy 4 params: 5 - name: deployment-name value: vote-api - name: git-url value: https://github.com/openshift-pipelines/vote-api.git - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api workspaces: 6 - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi 1 Pipeline run API version v1beta1 . 2 The type of Kubernetes object. In this example, PipelineRun . 3 Unique name to identify this pipeline run. 4 Name of the pipeline to be run. In this example, build-and-deploy . 5 The list of parameters required to run the pipeline. 6 Workspace used by the pipeline run. Additional resources Authenticating pipelines using git secret 4.2.2.7. Workspaces Note It is recommended that you use Workspaces instead of PipelineResources in OpenShift Pipelines, as PipelineResources are difficult to debug, limited in scope, and make Tasks less reusable. Workspaces declare shared storage volumes that a Task in a Pipeline needs at runtime to receive input or provide output. Instead of specifying the actual location of the volumes, Workspaces enable you to declare the filesystem or parts of the filesystem that would be required at runtime. A Task or Pipeline declares the Workspace and you must provide the specific location details of the volume. It is then mounted into that Workspace in a TaskRun or a PipelineRun. This separation of volume declaration from runtime storage volumes makes the Tasks reusable, flexible, and independent of the user environment. With Workspaces, you can: Store Task inputs and outputs Share data among Tasks Use it as a mount point for credentials held in Secrets Use it as a mount point for configurations held in ConfigMaps Use it as a mount point for common tools shared by an organization Create a cache of build artifacts that speed up jobs You can specify Workspaces in the TaskRun or PipelineRun using: A read-only ConfigMaps or Secret An existing PersistentVolumeClaim shared with other Tasks A PersistentVolumeClaim from a provided VolumeClaimTemplate An emptyDir that is discarded when the TaskRun completes The following example shows a code snippet of the build-and-deploy Pipeline, which declares a shared-workspace Workspace for the build-image and apply-manifests Tasks as defined in the Pipeline. apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: 1 - name: shared-workspace params: ... tasks: 2 - name: build-image taskRef: name: buildah kind: ClusterTask params: - name: TLSVERIFY value: "false" - name: IMAGE value: USD(params.IMAGE) workspaces: 3 - name: source 4 workspace: shared-workspace 5 runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: 6 - name: source workspace: shared-workspace runAfter: - build-image ... 1 List of Workspaces shared between the Tasks defined in the Pipeline. A Pipeline can define as many Workspaces as required. In this example, only one Workspace named shared-workspace is declared. 2 Definition of Tasks used in the Pipeline. This snippet defines two Tasks, build-image and apply-manifests , which share a common Workspace. 3 List of Workspaces used in the build-image Task. A Task definition can include as many Workspaces as it requires. 
However, it is recommended that a Task uses at most one writable Workspace. 4 Name that uniquely identifies the Workspace used in the Task. This Task uses one Workspace named source . 5 Name of the Pipeline Workspace used by the Task. Note that the Workspace source in turn uses the Pipeline Workspace named shared-workspace . 6 List of Workspaces used in the apply-manifests Task. Note that this Task shares the source Workspace with the build-image Task. Workspaces help tasks share data, and allow you to specify one or more volumes that each task in the pipeline requires during execution. You can create a persistent volume claim or provide a volume claim template that creates a persistent volume claim for you. The following code snippet of the build-deploy-api-pipelinerun PipelineRun uses a volume claim template to create a persistent volume claim for defining the storage volume for the shared-workspace Workspace used in the build-and-deploy Pipeline. apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: build-deploy-api-pipelinerun spec: pipelineRef: name: build-and-deploy params: ... workspaces: 1 - name: shared-workspace 2 volumeClaimTemplate: 3 spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi 1 Specifies the list of Pipeline Workspaces for which volume binding will be provided in the PipelineRun. 2 The name of the Workspace in the Pipeline for which the volume is being provided. 3 Specifies a volume claim template that creates a persistent volume claim to define the storage volume for the workspace. 4.2.2.8. Triggers Use Triggers in conjunction with pipelines to create a full-fledged CI/CD system where Kubernetes resources define the entire CI/CD execution. Triggers capture the external events, such as a Git pull request, and process them to extract key pieces of information. Mapping this event data to a set of predefined parameters triggers a series of tasks that can then create and deploy Kubernetes resources and instantiate the pipeline. For example, you define a CI/CD workflow using Red Hat OpenShift Pipelines for your application. The pipeline must start for any new changes to take effect in the application repository. Triggers automate this process by capturing and processing any change event and by triggering a pipeline run that deploys the new image with the latest changes. Triggers consist of the following main resources that work together to form a reusable, decoupled, and self-sustaining CI/CD system: The TriggerBinding resource extracts the fields from an event payload and stores them as parameters. The following example shows a code snippet of the TriggerBinding resource, which extracts the Git repository information from the received event payload: apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerBinding 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id) 1 The API version of the TriggerBinding resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, TriggerBinding . 3 Unique name to identify the TriggerBinding resource. 4 List of parameters which will be extracted from the received event payload and passed to the TriggerTemplate resource. In this example, the Git repository URL, name, and revision are extracted from the body of the event payload. The TriggerTemplate resource acts as a standard for the way resources must be created. 
It specifies the way parameterized data from the TriggerBinding resource should be used. A trigger template receives input from the trigger binding, and then performs a series of actions that results in creation of new pipeline resources, and initiation of a new pipeline run. The following example shows a code snippet of a TriggerTemplate resource, which creates a pipeline run using the Git repository information received from the TriggerBinding resource you just created: apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerTemplate 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.10 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: 5 - apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: build-deploy-USD(tt.params.git-repo-name)-USD(uid) spec: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi 1 The API version of the TriggerTemplate resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, TriggerTemplate . 3 Unique name to identify the TriggerTemplate resource. 4 Parameters supplied by the TriggerBinding resource. 5 List of templates that specify the way resources must be created using the parameters received through the TriggerBinding or EventListener resources. The Trigger resource combines the TriggerBinding and TriggerTemplate resources, and optionally, the interceptors event processor. Interceptors process all the events for a specific platform that runs before the TriggerBinding resource. You can use interceptors to filter the payload, verify events, define and test trigger conditions, and implement other useful processing. Interceptors use secret for event verification. Once the event data passes through an interceptor, it then goes to the trigger before you pass the payload data to the trigger binding. You can also use an interceptor to modify the behavior of the associated trigger referenced in the EventListener specification. The following example shows a code snippet of a Trigger resource, named vote-trigger that connects the TriggerBinding and TriggerTemplate resources, and the interceptors event processor. apiVersion: triggers.tekton.dev/v1beta1 1 kind: Trigger 2 metadata: name: vote-trigger 3 spec: serviceAccountName: pipeline 4 interceptors: - ref: name: "github" 5 params: 6 - name: "secretRef" value: secretName: github-secret secretKey: secretToken - name: "eventTypes" value: ["push"] bindings: - ref: vote-app 7 template: 8 ref: vote-app --- apiVersion: v1 kind: Secret 9 metadata: name: github-secret type: Opaque stringData: secretToken: "1234567" 1 The API version of the Trigger resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, Trigger . 3 Unique name to identify the Trigger resource. 4 Service account name to be used. 5 Interceptor name to be referenced. In this example, github . 6 Desired parameters to be specified. 
7 Name of the TriggerBinding resource to be connected to the TriggerTemplate resource. 8 Name of the TriggerTemplate resource to be connected to the TriggerBinding resource. 9 Secret to be used to verify events. The EventListener resource provides an endpoint, or an event sink, that listens for incoming HTTP-based events with a JSON payload. It extracts event parameters from each TriggerBinding resource, and then processes this data to create Kubernetes resources as specified by the corresponding TriggerTemplate resource. The EventListener resource also performs lightweight event processing or basic filtering on the payload using event interceptors , which identify the type of payload and optionally modify it. Currently, pipeline triggers support five types of interceptors: Webhook Interceptors , GitHub Interceptors , GitLab Interceptors , Bitbucket Interceptors , and Common Expression Language (CEL) Interceptors . The following example shows an EventListener resource, which references the Trigger resource named vote-trigger . apiVersion: triggers.tekton.dev/v1beta1 1 kind: EventListener 2 metadata: name: vote-app 3 spec: serviceAccountName: pipeline 4 triggers: - triggerRef: vote-trigger 5 1 The API version of the EventListener resource. In this example, v1beta1 . 2 Specifies the type of Kubernetes object. In this example, EventListener . 3 Unique name to identify the EventListener resource. 4 Service account name to be used. 5 Name of the Trigger resource referenced by the EventListener resource. 4.2.3. Additional resources For information on installing pipelines, see Installing OpenShift Pipelines . For more details on creating custom CI/CD solutions, see Creating applications with CI/CD Pipelines . For more details on re-encrypt TLS termination, see Re-encryption Termination . For more details on secured routes, see the Secured routes section. 4.3. Installing OpenShift Pipelines This guide walks cluster administrators through the process of installing the Red Hat OpenShift Pipelines Operator to an OpenShift Container Platform cluster. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed oc CLI. You have installed OpenShift Pipelines ( tkn ) CLI on your local system. 4.3.1. Installing the Red Hat OpenShift Pipelines Operator in web console You can install Red Hat OpenShift Pipelines using the Operator listed in the OpenShift Container Platform OperatorHub. When you install the Red Hat OpenShift Pipelines Operator, the custom resources (CRs) required for the pipelines configuration are automatically installed along with the Operator. The default Operator custom resource definition (CRD) config.operator.tekton.dev is now replaced by tektonconfigs.operator.tekton.dev . In addition, the Operator provides the following additional CRDs to individually manage OpenShift Pipelines components: tektonpipelines.operator.tekton.dev , tektontriggers.operator.tekton.dev and tektonaddons.operator.tekton.dev . If you have OpenShift Pipelines already installed on your cluster, the existing installation is seamlessly upgraded. The Operator will replace the instance of config.operator.tekton.dev on your cluster with an instance of tektonconfigs.operator.tekton.dev and additional objects of the other CRDs as necessary. 
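To see which of these CRDs are present on your cluster after an upgrade, you can run a quick, illustrative check with oc:

$ oc get crd -o name | grep tekton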
Warning If you manually changed your existing installation, such as changing the target namespace in the config.operator.tekton.dev CRD instance by making changes to the resource name - cluster field, then the upgrade path is not smooth. In such cases, the recommended workflow is to uninstall your installation and reinstall the Red Hat OpenShift Pipelines Operator. The Red Hat OpenShift Pipelines Operator now provides the option to choose the components that you want to install by specifying profiles as part of the TektonConfig CR. The TektonConfig CR is automatically installed when the Operator is installed. The supported profiles are: Lite: This installs only Tekton Pipelines. Basic: This installs Tekton Pipelines and Tekton Triggers. All: This is the default profile used when the TektonConfig CR is installed. This profile installs all of the Tekton components: Tekton Pipelines, Tekton Triggers, Tekton Addons (which include ClusterTasks, ClusterTriggerBindings, ConsoleCLIDownload, ConsoleQuickStart, and ConsoleYAMLSample resources). Procedure In the Administrator perspective of the web console, navigate to Operators → OperatorHub. Use the Filter by keyword box to search for Red Hat OpenShift Pipelines Operator in the catalog. Click the Red Hat OpenShift Pipelines Operator tile. Read the brief description about the Operator on the Red Hat OpenShift Pipelines Operator page. Click Install. On the Install Operator page: Select All namespaces on the cluster (default) for the Installation Mode. This mode installs the Operator in the default openshift-operators namespace, which enables the Operator to watch and be made available to all namespaces in the cluster. Select Automatic for the Approval Strategy. This ensures that future upgrades to the Operator are handled automatically by the Operator Lifecycle Manager (OLM). If you select the Manual approval strategy, OLM creates an update request. As a cluster administrator, you must then manually approve the OLM update request to update the Operator to the new version. Select an Update Channel. The pipelines-<version> channel is the default channel to install the Red Hat OpenShift Pipelines Operator. For example, the default channel to install the Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7. The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. Note The preview and stable channels will be deprecated and removed in a future release. Click Install. You will see the Operator listed on the Installed Operators page. Note The Operator is installed automatically into the openshift-operators namespace. Verify that the Status is set to Succeeded Up to date to confirm successful installation of the Red Hat OpenShift Pipelines Operator. Warning The success status may show as Succeeded Up to date even if installation of other components is in progress. Therefore, it is important to verify the installation manually in the terminal. Verify that all components of the Red Hat OpenShift Pipelines Operator were installed successfully. Log in to the cluster on the terminal, and run the following command: $ oc get tektonconfig config Example output If the READY condition is True, the Operator and its components have been installed successfully. Additionally, check the components' versions by running the following command: $ oc get tektonpipeline,tektontrigger,tektonaddon,pac Example output 4.3.2.
Installing the OpenShift Pipelines Operator using the CLI You can install Red Hat OpenShift Pipelines Operator from the OperatorHub using the CLI. Procedure Create a Subscription object YAML file to subscribe a namespace to the Red Hat OpenShift Pipelines Operator, for example, sub.yaml : Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-pipelines-operator namespace: openshift-operators spec: channel: <channel name> 1 name: openshift-pipelines-operator-rh 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4 1 The channel name of the Operator. The pipelines-<version> channel is the default channel. For example, the default channel for Red Hat OpenShift Pipelines Operator version 1.7 is pipelines-1.7 . The latest channel enables installation of the most recent stable version of the Red Hat OpenShift Pipelines Operator. 2 Name of the Operator to subscribe to. 3 Name of the CatalogSource that provides the Operator. 4 Namespace of the CatalogSource. Use openshift-marketplace for the default OperatorHub CatalogSources. Create the Subscription object: The Red Hat OpenShift Pipelines Operator is now installed in the default target namespace openshift-operators . 4.3.3. Red Hat OpenShift Pipelines Operator in a restricted environment The Red Hat OpenShift Pipelines Operator enables support for installation of pipelines in a restricted network environment. The Operator installs a proxy webhook that sets the proxy environment variables in the containers of the pod created by tekton-controllers based on the cluster proxy object. It also sets the proxy environment variables in the TektonPipelines , TektonTriggers , Controllers , Webhooks , and Operator Proxy Webhook resources. By default, the proxy webhook is disabled for the openshift-pipelines namespace. To disable it for any other namespace, you can add the operator.tekton.dev/disable-proxy: true label to the namespace object. 4.3.4. Additional resources You can learn more about installing Operators on OpenShift Container Platform in the adding Operators to a cluster section. To install Tekton Chains using the Red Hat OpenShift Pipelines Operator, see Using Tekton Chains for Red Hat OpenShift Pipelines supply chain security . To install and deploy in-cluster Tekton Hub, see Using Tekton Hub with Red Hat OpenShift Pipelines . For more information on using pipelines in a restricted environment, see: Mirroring images to run pipelines in a restricted environment Configuring Samples Operator for a restricted cluster Creating a cluster with a mirrored registry 4.4. Uninstalling OpenShift Pipelines Cluster administrators can uninstall the Red Hat OpenShift Pipelines Operator by performing the following steps: Delete the Custom Resources (CRs) that were added by default when you installed the Red Hat OpenShift Pipelines Operator. Delete the CRs of the optional components such as Tekton Hub that depend on the Operator. Caution If you uninstall the Operator without removing the CRs of optional components, you cannot remove them later. Uninstall the Red Hat OpenShift Pipelines Operator. Uninstalling only the Operator will not remove the Red Hat OpenShift Pipelines components created by default when the Operator is installed. 4.4.1. Deleting the Red Hat OpenShift Pipelines components and Custom Resources Delete the Custom Resources (CRs) created by default during installation of the Red Hat OpenShift Pipelines Operator. 
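Before you start deleting resources in the following procedure, it can be useful to see which Tekton custom resources currently exist on the cluster. The commands below are an illustrative sketch using standard oc queries; the tektonhub resource is present only if you installed the optional Tekton Hub component:

$ oc get tektonconfig,tektonpipeline,tektontrigger,tektonaddon
$ oc get tektonhub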
Procedure In the Administrator perspective of the web console, navigate to Administration Custom Resource Definition . Type config.operator.tekton.dev in the Filter by name box to search for the Red Hat OpenShift Pipelines Operator CRs. Click CRD Config to see the Custom Resource Definition Details page. Click the Actions drop-down menu and select Delete Custom Resource Definition . Note Deleting the CRs will delete the Red Hat OpenShift Pipelines components, and all the Tasks and Pipelines on the cluster will be lost. Click Delete to confirm the deletion of the CRs. Important Repeat the procedure to find and remove CRs of optional components such as Tekton Hub before uninstalling the Operator. If you uninstall the Operator without removing the CRs of optional components, you cannot remove them later. 4.4.2. Uninstalling the Red Hat OpenShift Pipelines Operator You can uninstall the Red Hat OpenShift Pipelines Operator by using the Administrator perspective in the web console. Procedure From the Operators OperatorHub page, use the Filter by keyword box to search for the Red Hat OpenShift Pipelines Operator. Click the Red Hat OpenShift Pipelines Operator tile. The Operator tile indicates that the Operator is installed. In the Red Hat OpenShift Pipelines Operator description page, click Uninstall . Additional resources You can learn more about uninstalling Operators on OpenShift Container Platform in the deleting Operators from a cluster section. 4.5. Creating CI/CD solutions for applications using OpenShift Pipelines With Red Hat OpenShift Pipelines, you can create a customized CI/CD solution to build, test, and deploy your application. To create a full-fledged, self-serving CI/CD pipeline for an application, perform the following tasks: Create custom tasks, or install existing reusable tasks. Create and define the delivery pipeline for your application. Provide a storage volume or filesystem that is attached to a workspace for the pipeline execution, using one of the following approaches: Specify a volume claim template that creates a persistent volume claim Specify a persistent volume claim Create a PipelineRun object to instantiate and invoke the pipeline. Add triggers to capture events in the source repository. This section uses the pipelines-tutorial example to demonstrate the preceding tasks. The example uses a simple application which consists of: A front-end interface, pipelines-vote-ui , with the source code in the pipelines-vote-ui Git repository. A back-end interface, pipelines-vote-api , with the source code in the pipelines-vote-api Git repository. The apply-manifests and update-deployment tasks in the pipelines-tutorial Git repository. 4.5.1. Prerequisites You have access to an OpenShift Container Platform cluster. You have installed OpenShift Pipelines using the Red Hat OpenShift Pipelines Operator listed in the OpenShift OperatorHub. After it is installed, it is applicable to the entire cluster. You have installed OpenShift Pipelines CLI . You have forked the front-end pipelines-vote-ui and back-end pipelines-vote-api Git repositories using your GitHub ID, and have administrator access to these repositories. Optional: You have cloned the pipelines-tutorial Git repository. 4.5.2. Creating a project and checking your pipeline service account Procedure Log in to your OpenShift Container Platform cluster: Create a project for the sample application. 
For this example workflow, create the pipelines-tutorial project: Note If you create a project with a different name, be sure to update the resource URLs used in the example with your project name. View the pipeline service account: Red Hat OpenShift Pipelines Operator adds and configures a service account named pipeline that has sufficient permissions to build and push an image. This service account is used by the PipelineRun object. 4.5.3. Creating pipeline tasks Procedure Install the apply-manifests and update-deployment task resources from the pipelines-tutorial repository, which contains a list of reusable tasks for pipelines: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/01_apply_manifest_task.yaml USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/02_update_deployment_task.yaml Use the tkn task list command to list the tasks you created: USD tkn task list The output verifies that the apply-manifests and update-deployment task resources were created: NAME DESCRIPTION AGE apply-manifests 1 minute ago update-deployment 48 seconds ago Use the tkn clustertasks list command to list the Operator-installed additional cluster tasks such as buildah and s2i-python : Note To use the buildah cluster task in a restricted environment, you must ensure that the Dockerfile uses an internal image stream as the base image. USD tkn clustertasks list The output lists the Operator-installed ClusterTask resources: NAME DESCRIPTION AGE buildah 1 day ago git-clone 1 day ago s2i-python 1 day ago tkn 1 day ago Additional resources Managing non-versioned and versioned cluster tasks 4.5.4. Assembling a pipeline A pipeline represents a CI/CD flow and is defined by the tasks to be executed. It is designed to be generic and reusable in multiple applications and environments. A pipeline specifies how the tasks interact with each other and their order of execution using the from and runAfter parameters. It uses the workspaces field to specify one or more volumes that each task in the pipeline requires during execution. In this section, you will create a pipeline that takes the source code of the application from GitHub, and then builds and deploys it on OpenShift Container Platform. The pipeline performs the following tasks for the back-end application pipelines-vote-api and front-end application pipelines-vote-ui : Clones the source code of the application from the Git repository by referring to the git-url and git-revision parameters. Builds the container image using the buildah cluster task. Pushes the image to the OpenShift image registry by referring to the image parameter. Deploys the new image on OpenShift Container Platform by using the apply-manifests and update-deployment tasks. 
Procedure Copy the contents of the following sample pipeline YAML file and save it: apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: - name: shared-workspace params: - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: "pipelines-1.10" - name: IMAGE type: string description: image to be built from the code tasks: - name: fetch-repository taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: USD(params.git-url) - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.git-revision) - name: build-image taskRef: name: buildah kind: ClusterTask params: - name: IMAGE value: USD(params.IMAGE) workspaces: - name: source workspace: shared-workspace runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: - build-image - name: update-deployment taskRef: name: update-deployment params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests The pipeline definition abstracts away the specifics of the Git source repository and image registries. These details are added as params when a pipeline is triggered and executed. Create the pipeline: Alternatively, you can also execute the YAML file directly from the Git repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/04_pipeline.yaml Use the tkn pipeline list command to verify that the pipeline is added to the application: The output verifies that the build-and-deploy pipeline was created: 4.5.5. Mirroring images to run pipelines in a restricted environment To run OpenShift Pipelines in a disconnected cluster or a cluster provisioned in a restricted environment, ensure that either the Samples Operator is configured for a restricted network, or a cluster administrator has created a cluster with a mirrored registry. The following procedure uses the pipelines-tutorial example to create a pipeline for an application in a restricted environment using a cluster with a mirrored registry. To ensure that the pipelines-tutorial example works in a restricted environment, you must mirror the respective builder images from the mirror registry for the front-end interface, pipelines-vote-ui ; back-end interface, pipelines-vote-api ; and the cli . Procedure Mirror the builder image from the mirror registry for the front-end interface, pipelines-vote-ui . Verify that the required images tag is not imported: USD oc describe imagestream python -n openshift Example output Name: python Namespace: openshift [...] 3.8-ubi8 (latest) tagged from registry.redhat.io/ubi8/python-38:latest prefer registry pullthrough when referencing this tag Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. Tags: builder, python Supports: python:3.8, python Example Repo: https://github.com/sclorg/django-ex.git [...] 
Mirror the supported image tag to the private registry: USD oc image mirror registry.redhat.io/ubi8/python-38:latest <mirror-registry>:<port>/ubi8/python-38 Import the image: USD oc tag <mirror-registry>:<port>/ubi8/python-38 python:latest --scheduled -n openshift You must periodically re-import the image. The --scheduled flag enables automatic re-import of the image. Verify that the images with the given tag have been imported: USD oc describe imagestream python -n openshift Example output Name: python Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/ubi8/python-38 * <mirror-registry>:<port>/ubi8/python-38@sha256:3ee3c2e70251e75bfeac25c0c33356add9cc4abcbc9c51d858f39e4dc29c5f58 [...] Mirror the builder image from the mirror registry for the back-end interface, pipelines-vote-api . Verify that the required images tag is not imported: USD oc describe imagestream golang -n openshift Example output Name: golang Namespace: openshift [...] 1.14.7-ubi8 (latest) tagged from registry.redhat.io/ubi8/go-toolset:1.14.7 prefer registry pullthrough when referencing this tag Build and run Go applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/golang-container/blob/master/README.md. Tags: builder, golang, go Supports: golang Example Repo: https://github.com/sclorg/golang-ex.git [...] Mirror the supported image tag to the private registry: USD oc image mirror registry.redhat.io/ubi8/go-toolset:1.14.7 <mirror-registry>:<port>/ubi8/go-toolset Import the image: USD oc tag <mirror-registry>:<port>/ubi8/go-toolset golang:latest --scheduled -n openshift You must periodically re-import the image. The --scheduled flag enables automatic re-import of the image. Verify that the images with the given tag have been imported: USD oc describe imagestream golang -n openshift Example output Name: golang Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/ubi8/go-toolset * <mirror-registry>:<port>/ubi8/go-toolset@sha256:59a74d581df3a2bd63ab55f7ac106677694bf612a1fe9e7e3e1487f55c421b37 [...] Mirror the builder image from the mirror registry for the cli . Verify that the required images tag is not imported: USD oc describe imagestream cli -n openshift Example output Name: cli Namespace: openshift [...] latest updates automatically from registry quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 [...] Mirror the supported image tag to the private registry: USD oc image mirror quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev:latest Import the image: USD oc tag <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev cli:latest --scheduled -n openshift You must periodically re-import the image. The --scheduled flag enables automatic re-import of the image. Verify that the images with the given tag have been imported: USD oc describe imagestream cli -n openshift Example output Name: cli Namespace: openshift [...] 
latest updates automatically from registry <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev * <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 [...] Additional resources Configuring Samples Operator for a restricted cluster Creating a cluster with a mirrored registry 4.5.6. Running a pipeline A PipelineRun resource starts a pipeline and ties it to the Git and image resources that should be used for the specific invocation. It automatically creates and starts the TaskRun resources for each task in the pipeline. Procedure Start the pipeline for the back-end application: USD tkn pipeline start build-and-deploy \ -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml \ -p deployment-name=pipelines-vote-api \ -p git-url=https://github.com/openshift/pipelines-vote-api.git \ -p IMAGE='image-registry.openshift-image-registry.svc:5000/USD(context.pipelineRun.namespace)/pipelines-vote-api' \ --use-param-defaults The command uses a volume claim template, which creates a persistent volume claim for the pipeline execution. To track the progress of the pipeline run, enter the following command:: USD tkn pipelinerun logs <pipelinerun_id> -f The <pipelinerun_id> in the above command is the ID for the PipelineRun that was returned in the output of the command. Start the pipeline for the front-end application: USD tkn pipeline start build-and-deploy \ -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml \ -p deployment-name=pipelines-vote-ui \ -p git-url=https://github.com/openshift/pipelines-vote-ui.git \ -p IMAGE='image-registry.openshift-image-registry.svc:5000/USD(context.pipelineRun.namespace)/pipelines-vote-ui' \ --use-param-defaults To track the progress of the pipeline run, enter the following command: USD tkn pipelinerun logs <pipelinerun_id> -f The <pipelinerun_id> in the above command is the ID for the PipelineRun that was returned in the output of the command. After a few minutes, use tkn pipelinerun list command to verify that the pipeline ran successfully by listing all the pipeline runs: USD tkn pipelinerun list The output lists the pipeline runs: NAME STARTED DURATION STATUS build-and-deploy-run-xy7rw 1 hour ago 2 minutes Succeeded build-and-deploy-run-z2rz8 1 hour ago 19 minutes Succeeded Get the application route: USD oc get route pipelines-vote-ui --template='http://{{.spec.host}}' Note the output of the command. You can access the application using this route. To rerun the last pipeline run, using the pipeline resources and service account of the pipeline, run: USD tkn pipeline start build-and-deploy --last Additional resources Authenticating pipelines using git secret 4.5.7. Adding triggers to a pipeline Triggers enable pipelines to respond to external GitHub events, such as push events and pull requests. After you assemble and start a pipeline for the application, add the TriggerBinding , TriggerTemplate , Trigger , and EventListener resources to capture the GitHub events. 
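After you create the trigger resources in the following procedure, you can confirm that they exist in your project with the tkn CLI; a brief sketch:

$ tkn triggerbinding list
$ tkn triggertemplate list
$ tkn eventlistener list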
Procedure Copy the content of the following sample TriggerBinding YAML file and save it: apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerBinding metadata: name: vote-app spec: params: - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id) Create the TriggerBinding resource: USD oc create -f <triggerbinding-yaml-file-name.yaml> Alternatively, you can create the TriggerBinding resource directly from the pipelines-tutorial Git repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/01_binding.yaml Copy the content of the following sample TriggerTemplate YAML file and save it: apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerTemplate metadata: name: vote-app spec: params: - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.10 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: - apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: build-deploy-USD(tt.params.git-repo-name)- spec: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/USD(context.pipelineRun.namespace)/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi The template specifies a volume claim template to create a persistent volume claim for defining the storage volume for the workspace. Therefore, you do not need to create a persistent volume claim to provide data storage. 
Create the TriggerTemplate resource: USD oc create -f <triggertemplate-yaml-file-name.yaml> Alternatively, you can create the TriggerTemplate resource directly from the pipelines-tutorial Git repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/02_template.yaml Copy the contents of the following sample Trigger YAML file and save it: apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: vote-trigger spec: serviceAccountName: pipeline bindings: - ref: vote-app template: ref: vote-app Create the Trigger resource: USD oc create -f <trigger-yaml-file-name.yaml> Alternatively, you can create the Trigger resource directly from the pipelines-tutorial Git repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/03_trigger.yaml Copy the contents of the following sample EventListener YAML file and save it: apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: vote-app spec: serviceAccountName: pipeline triggers: - triggerRef: vote-trigger Alternatively, if you have not defined a trigger custom resource, add the binding and template spec to the EventListener YAML file, instead of referring to the name of the trigger: apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: vote-app spec: serviceAccountName: pipeline triggers: - bindings: - ref: vote-app template: ref: vote-app Create the EventListener resource by performing the following steps: To create an EventListener resource using a secure HTTPS connection: Add a label to enable the secure HTTPS connection to the Eventlistener resource: USD oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled Create the EventListener resource: USD oc create -f <eventlistener-yaml-file-name.yaml> Alternatively, you can create the EvenListener resource directly from the pipelines-tutorial Git repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/04_event_listener.yaml Create a route with the re-encrypt TLS termination: USD oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname> Alternatively, you can create a re-encrypt TLS termination YAML file to create a secured route. Example Re-encrypt TLS Termination YAML of the Secured Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: <hostname> to: kind: Service name: frontend 2 tls: termination: reencrypt 3 key: [as in edge termination] certificate: [as in edge termination] caCertificate: [as in edge termination] destinationCACertificate: |- 4 -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- 1 2 The name of the object, which is limited to 63 characters. 3 The termination field is set to reencrypt . This is the only required tls field. 4 Required for re-encryption. destinationCACertificate specifies a CA certificate to validate the endpoint certificate, securing the connection from the router to the destination pods. If the service is using a service signing certificate, or the administrator has specified a default CA certificate for the router and the service has a certificate signed by that CA, this field can be omitted. See oc create route reencrypt --help for more options. To create an EventListener resource using an insecure HTTP connection: Create the EventListener resource. 
Expose the EventListener service as an OpenShift Container Platform route to make it publicly accessible: USD oc expose svc el-vote-app 4.5.8. Configuring event listeners to serve multiple namespaces Note You can skip this section if you want to create a basic CI/CD pipeline. However, if your deployment strategy involves multiple namespaces, you can configure event listeners to serve multiple namespaces. To increase reusability of EvenListener objects, cluster administrators can configure and deploy them as multi-tenant event listeners that serve multiple namespaces. Procedure Configure cluster-wide fetch permission for the event listener. Set a service account name to be used in the ClusterRoleBinding and EventListener objects. For example, el-sa . Example ServiceAccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: el-sa --- In the rules section of the ClusterRole.yaml file, set appropriate permissions for every event listener deployment to function cluster-wide. Example ClusterRole.yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: el-sel-clusterrole rules: - apiGroups: ["triggers.tekton.dev"] resources: ["eventlisteners", "clustertriggerbindings", "clusterinterceptors", "triggerbindings", "triggertemplates", "triggers"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["configmaps", "secrets"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["serviceaccounts"] verbs: ["impersonate"] ... Configure cluster role binding with the appropriate service account name and cluster role name. Example ClusterRoleBinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: el-mul-clusterrolebinding subjects: - kind: ServiceAccount name: el-sa namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: el-sel-clusterrole ... In the spec parameter of the event listener, add the service account name, for example el-sa . Fill the namespaceSelector parameter with names of namespaces where event listener is intended to serve. Example EventListener.yaml apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: namespace-selector-listener spec: serviceAccountName: el-sa namespaceSelector: matchNames: - default - foo ... Create a service account with the necessary permissions, for example foo-trigger-sa . Use it for role binding the triggers. Example ServiceAccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: foo-trigger-sa namespace: foo ... Example RoleBinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: triggercr-rolebinding namespace: foo subjects: - kind: ServiceAccount name: foo-trigger-sa namespace: foo roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tekton-triggers-eventlistener-roles ... Create a trigger with the appropriate trigger template, trigger binding, and service account name. Example Trigger.yaml apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: trigger namespace: foo spec: serviceAccountName: foo-trigger-sa interceptors: - ref: name: "github" params: - name: "secretRef" value: secretName: github-secret secretKey: secretToken - name: "eventTypes" value: ["push"] bindings: - ref: vote-app template: ref: vote-app ... 4.5.9. Creating webhooks Webhooks are HTTP POST messages that are received by the event listeners whenever a configured event occurs in your repository. The event payload is then mapped to trigger bindings, and processed by trigger templates. 
The trigger templates eventually start one or more pipeline runs, leading to the creation and deployment of Kubernetes resources. In this section, you will configure a webhook URL on your forked Git repositories pipelines-vote-ui and pipelines-vote-api . This URL points to the publicly accessible EventListener service route. Note Adding webhooks requires administrative privileges to the repository. If you do not have administrative access to your repository, contact your system administrator for adding webhooks. Procedure Get the webhook URL: For a secure HTTPS connection: For an HTTP (insecure) connection: Note the URL obtained in the output. Configure webhooks manually on the front-end repository: Open the front-end Git repository pipelines-vote-ui in your browser. Click Settings Webhooks Add Webhook On the Webhooks/Add Webhook page: Enter the webhook URL from step 1 in Payload URL field Select application/json for the Content type Specify the secret in the Secret field Ensure that the Just the push event is selected Select Active Click Add Webhook Repeat step 2 for the back-end repository pipelines-vote-api . 4.5.10. Triggering a pipeline run Whenever a push event occurs in the Git repository, the configured webhook sends an event payload to the publicly exposed EventListener service route. The EventListener service of the application processes the payload, and passes it to the relevant TriggerBinding and TriggerTemplate resource pairs. The TriggerBinding resource extracts the parameters, and the TriggerTemplate resource uses these parameters and specifies the way the resources must be created. This may rebuild and redeploy the application. In this section, you push an empty commit to the front-end pipelines-vote-ui repository, which then triggers the pipeline run. Procedure From the terminal, clone your forked Git repository pipelines-vote-ui : USD git clone [email protected]:<your GitHub ID>/pipelines-vote-ui.git -b pipelines-1.10 Push an empty commit: USD git commit -m "empty-commit" --allow-empty && git push origin pipelines-1.10 Check if the pipeline run was triggered: Notice that a new pipeline run was initiated. 4.5.11. Enabling monitoring of event listeners for Triggers for user-defined projects As a cluster administrator, to gather event listener metrics for the Triggers service in a user-defined project and display them in the OpenShift Container Platform web console, you can create a service monitor for each event listener. On receiving an HTTP request, event listeners for the Triggers service return three metrics - eventlistener_http_duration_seconds , eventlistener_event_count , and eventlistener_triggered_resources . Prerequisites You have logged in to the OpenShift Container Platform web console. You have installed the Red Hat OpenShift Pipelines Operator. You have enabled monitoring for user-defined projects. Procedure For each event listener, create a service monitor. 
For example, to view the metrics for the github-listener event listener in the test namespace, create the following service monitor: apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app.kubernetes.io/managed-by: EventListener app.kubernetes.io/part-of: Triggers eventlistener: github-listener annotations: networkoperator.openshift.io/ignore-errors: "" name: el-monitor namespace: test spec: endpoints: - interval: 10s port: http-metrics jobLabel: name namespaceSelector: matchNames: - test selector: matchLabels: app.kubernetes.io/managed-by: EventListener app.kubernetes.io/part-of: Triggers eventlistener: github-listener ... Test the service monitor by sending a request to the event listener. For example, push an empty commit: USD git commit -m "empty-commit" --allow-empty && git push origin main On the OpenShift Container Platform web console, navigate to Administrator Observe Metrics . To view a metric, search by its name. For example, to view the details of the eventlistener_http_resources metric for the github-listener event listener, search using the eventlistener_http_resources keyword. Additional resources Enabling monitoring for user-defined projects 4.5.12. Additional resources To include pipelines as code along with the application source code in the same repository, see Using Pipelines as code . For more details on pipelines in the Developer perspective, see the working with pipelines in the web console section. To learn more about Security Context Constraints (SCCs), see the Managing Security Context Constraints section. For more examples of reusable tasks, see the OpenShift Catalog repository. Additionally, you can also see the Tekton Catalog in the Tekton project. To install and deploy a custom instance of Tekton Hub for reusable tasks and pipelines, see Using Tekton Hub with Red Hat OpenShift Pipelines . For more details on re-encrypt TLS termination, see Re-encryption Termination . For more details on secured routes, see the Secured routes section. 4.6. Managing non-versioned and versioned cluster tasks As a cluster administrator, installing the Red Hat OpenShift Pipelines Operator creates variants of each default cluster task known as versioned cluster tasks (VCT) and non-versioned cluster tasks (NVCT). For example, installing the Red Hat OpenShift Pipelines Operator v1.7 creates a buildah-1-7-0 VCT and a buildah NVCT. Both NVCT and VCT have the same metadata, behavior, and specifications, including params , workspaces , and steps . However, they behave differently when you disable them or upgrade the Operator. 4.6.1. Differences between non-versioned and versioned cluster tasks Non-versioned and versioned cluster tasks have different naming conventions. And, the Red Hat OpenShift Pipelines Operator upgrades them differently. Table 4.5. Differences between non-versioned and versioned cluster tasks Non-versioned cluster task Versioned cluster task Nomenclature The NVCT only contains the name of the cluster task. For example, the name of the NVCT of Buildah installed with Operator v1.7 is buildah . The VCT contains the name of the cluster task, followed by the version as a suffix. For example, the name of the VCT of Buildah installed with Operator v1.7 is buildah-1-7-0 . Upgrade When you upgrade the Operator, it updates the non-versioned cluster task with the latest changes. The name of the NVCT remains unchanged. Upgrading the Operator installs the latest version of the VCT and retains the earlier version. 
The latest version of a VCT corresponds to the upgraded Operator. For example, installing Operator 1.7 installs buildah-1-7-0 and retains buildah-1-6-0 . 4.6.2. Advantages and disadvantages of non-versioned and versioned cluster tasks Before adopting non-versioned or versioned cluster tasks as a standard in production environments, cluster administrators might consider their advantages and disadvantages. Table 4.6. Advantages and disadvantages of non-versioned and versioned cluster tasks Cluster task Advantages Disadvantages Non-versioned cluster task (NVCT) If you prefer deploying pipelines with the latest updates and bug fixes, use the NVCT. Upgrading the Operator upgrades the non-versioned cluster tasks, which consume fewer resources than multiple versioned cluster tasks. If you deploy pipelines that use NVCT, they might break after an Operator upgrade if the automatically upgraded cluster tasks are not backward-compatible. Versioned cluster task (VCT) If you prefer stable pipelines in production, use the VCT. The earlier version is retained on the cluster even after the later version of a cluster task is installed. You can continue using the earlier cluster tasks. If you continue using an earlier version of a cluster task, you might miss the latest features and critical security updates. The earlier versions of cluster tasks that are not operational consume cluster resources. * After it is upgraded, the Operator cannot manage the earlier VCT. You can delete the earlier VCT manually by using the oc delete clustertask command, but you cannot restore it. 4.6.3. Disabling non-versioned and versioned cluster tasks As a cluster administrator, you can disable cluster tasks that the Pipelines Operator installed. Procedure To delete all non-versioned cluster tasks and latest versioned cluster tasks, edit the TektonConfig custom resource definition (CRD) and set the clusterTasks parameter in spec.addon.params to false . Example TektonConfig CR apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: params: - name: createRbacResource value: "false" profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: "false" ... When you disable cluster tasks, the Operator removes all the non-versioned cluster tasks and only the latest version of the versioned cluster tasks from the cluster. Note Re-enabling cluster tasks installs the non-versioned cluster tasks. Optional: To delete earlier versions of the versioned cluster tasks, use any one of the following methods: To delete individual earlier versioned cluster tasks, use the oc delete clustertask command followed by the versioned cluster task name. For example: USD oc delete clustertask buildah-1-6-0 To delete all versioned cluster tasks created by an old version of the Operator, you can delete the corresponding installer set. For example: USD oc delete tektoninstallerset versioned-clustertask-1-6-k98as Caution If you delete an old versioned cluster task, you cannot restore it. You can only restore versioned and non-versioned cluster tasks that the current version of the Operator has created. 4.7. Using Tekton Hub with OpenShift Pipelines Important Tekton Hub is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Tekton Hub helps you discover, search, and share reusable tasks and pipelines for your CI/CD workflows. A public instance of Tekton Hub is available at hub.tekton.dev . Cluster administrators can also install and deploy a custom instance of Tekton Hub for enterprise use. 4.7.1. Installing and deploying Tekton Hub on a OpenShift Container Platform cluster Tekton Hub is an optional component; cluster administrators cannot install it using the TektonConfig custom resource (CR). To install and manage Tekton Hub, use the TektonHub CR. Note If you are using Github Enterprise or Gitlab Enterprise, install and deploy Tekton Hub in the same network as the enterprise server. For example, if the enterprise server is running behind a VPN, deploy Tekton Hub on a cluster that is also behind the VPN. Prerequisites Ensure that the Red Hat OpenShift Pipelines Operator is installed in the default openshift-pipelines namespace on the cluster. Procedure Create a fork of the Tekton Hub repository. Clone the forked repository. Update the config.yaml file to include at least one user with the following scopes: A user with agent:create scope who can set up a cron job that refreshes the Tekton Hub database after an interval, if there are any changes in the catalog. A user with the catalog:refresh scope who can refresh the catalog and all resources in the database of the Tekton Hub. A user with the config:refresh scope who can get additional scopes. ... scopes: - name: agent:create users: <username_registered_with_the_Git_repository_hosting_service_provider> - name: catalog:refresh users: <username_registered_with_the_Git_repository_hosting_service_provider> - name: config:refresh users: <username_registered_with_the_Git_repository_hosting_service_provider> ... The supported service providers are GitHub, GitLab, and BitBucket. Create an OAuth application with your Git repository hosting provider, and note the Client ID and Client Secret. For a GitHub OAuth application, set the Homepage URL and the Authorization callback URL as <auth-route> . For a GitLab OAuth application, set the REDIRECT_URI as <auth-route>/auth/gitlab/callback . For a BitBucket OAuth application, set the Callback URL as <auth-route> . Edit the following fields in the <tekton_hub_repository>/config/02-api/20-api-secret.yaml file for the Tekton Hub API secret: GH_CLIENT_ID : The Client ID from the OAuth application created with the Git repository hosting service provider. GH_CLIENT_SECRET : The Client Secret from the OAuth application created with the Git repository hosting service provider. GHE_URL : GitHub Enterprise URL, if you are authenticating using GitHub Enterprise. Do not provide the URL to the catalog as a value for this field. GL_CLIENT_ID : The Client ID from the GitLab OAuth application. GL_CLIENT_SECRET : The Client Secret from the GitLab OAuth application. GLE_URL : GitLab Enterprise URL, if you are authenticating using GitLab Enterprise. Do not provide the URL to the catalog as a value for this field. BB_CLIENT_ID : The Client ID from the BitBucket OAuth application. BB_CLIENT_SECRET : The Client Secret from the BitBucket OAuth application. JWT_SIGNING_KEY : A long, random string used to sign the JSON Web Token (JWT) created for users. 
ACCESS_JWT_EXPIRES_IN : Add the time limit after which the access token expires. For example, 1m , where m denotes minutes. The supported units of time are seconds ( s ), minutes ( m ), hours ( h ), days ( d ), and weeks ( w ). REFRESH_JWT_EXPIRES_IN : Add the time limit after which the refresh token expires. For example, 1m , where m denotes minutes. The supported units of time are seconds ( s ), minutes ( m ), hours ( h ), days ( d ), and weeks ( w ). Ensure that the expiry time set for token refresh is greater than the expiry time set for token access. AUTH_BASE_URL : Route URL for the OAuth application. Note Use the fields related to Client ID and Client Secret for any one of the supported Git repository hosting service providers. The account credentials registered with the Git repository hosting service provider enables the users with catalog: refresh scope to authenticate and load all catalog resources to the database. Commit and push the changes to your forked repository. Ensure that the TektonHub CR is similar to the following example: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonHub metadata: name: hub spec: targetNamespace: openshift-pipelines 1 api: hubConfigUrl: https://raw.githubusercontent.com/tektoncd/hub/main/config.yaml 2 1 The namespace in which Tekton Hub must be installed; default is openshift-pipelines . 2 Substitute with the URL of the config.yaml file of your forked repository. Install the Tekton Hub. USD oc apply -f TektonHub.yaml 1 1 The file name or path of the TektonConfig CR. Check the status of the installation. USD oc get tektonhub.operator.tekton.dev NAME VERSION READY REASON APIURL UIURL hub v1.7.2 True https://api.route.url/ https://ui.route.url/ 4.7.1.1. Manually refreshing the catalog in Tekton Hub When you install and deploy Tekton Hub on a OpenShift Container Platform cluster, a Postgres database is also installed. Initially, the database is empty. To add the tasks and pipelines available in the catalog to the database, cluster administrators must refresh the catalog. Prerequisites Ensure that you are in the <tekton_hub_repository>/config/ directory. Procedure In the Tekton Hub UI, click Login --> Sign In With GitHub . Note GitHub is used as an example from the publicly available Tekton Hub UI. For custom installation on your cluster, all Git repository hosting service providers for which you have provided Client ID and Client Secret are listed. On the home page, click the user profile and copy the token. Call the Catalog Refresh API. To refresh a catalog with a specific name, run the following command: USD curl -X POST -H "Authorization: <jwt-token>" \ 1 <api-url>/catalog/<catalog_name>/refresh 2 1 The Tekton Hub token copied from UI. 2 The API pod URL and name of the catalog. Sample output: [{"id":1,"catalogName":"tekton","status":"queued"}] To refresh all catalogs, run the following command: USD curl -X POST -H "Authorization: <jwt-token>" \ 1 <api-url>/catalog/refresh 2 1 The Tekton Hub token copied from UI 2 The API pod URL. Refresh the page in the browser. 4.7.1.2. Optional: Setting a cron job for refreshing catalog in Tekton Hub Cluster administrators can optionally set up a cron job to refresh the database after a fixed interval, so that changes in the catalog appear in the Tekton Hub web console. Note If resources are added to the catalog or updated, refreshing the catalog displays these changes in the Tekton Hub UI. 
However, if a resource is deleted from the catalog, refreshing the catalog does not remove the resource from the database. The Tekton Hub UI continues displaying the deleted resource. Prerequisites Ensure that you are in the <project_root>/config/ directory, where <project_root> is the top level directory of the cloned Tekton Hub repository. Ensure that you have a JSON Web Token (JWT) with a scope for refreshing the catalog. Procedure Create an agent-based JWT token for long-term use. USD curl -X PUT --header "Content-Type: application/json" \ -H "Authorization: <access-token>" \ 1 --data '{"name":"catalog-refresh-agent","scopes": ["catalog:refresh"]}' \ <api-route>/system/user/agent 1 The JWT token. The agent token with the necessary scopes is returned in the {"token":"<agent_jwt_token>"} format. Note the returned token and preserve it for the catalog refresh cron job. Edit the 05-catalog-refresh-cj/50-catalog-refresh-secret.yaml file to set the HUB_TOKEN parameter to the <agent_jwt_token> returned in the previous step. apiVersion: v1 kind: Secret metadata: name: catalog-refresh type: Opaque stringData: HUB_TOKEN: <hub_token> 1 1 The <agent_jwt_token> returned in the previous step. Apply the modified YAML files. USD oc apply -f 05-catalog-refresh-cj/ -n openshift-pipelines Optional: By default, the cron job is configured to run every 30 minutes. To change the interval, modify the value of the schedule parameter in the 05-catalog-refresh-cj/51-catalog-refresh-cronjob.yaml file. apiVersion: batch/v1 kind: CronJob metadata: name: catalog-refresh labels: app: tekton-hub-api spec: schedule: "*/30 * * * *" ... 4.7.1.3. Optional: Adding new users in Tekton Hub configuration Procedure Depending on the intended scope, cluster administrators can add new users in the config.yaml file. ... scopes: - name: agent:create users: [<username_1>, <username_2>] 1 - name: catalog:refresh users: [<username_3>, <username_4>] - name: config:refresh users: [<username_5>, <username_6>] default: scopes: - rating:read - rating:write ... 1 The usernames registered with the Git repository hosting service provider. Note When any user logs in for the first time, they will have only the default scope even if they are added in the config.yaml . To activate additional scopes, ensure the user has logged in at least once. Ensure that in the config.yaml file, you have the config:refresh scope. Refresh the configuration. USD curl -X POST -H "Authorization: <access-token>" \ 1 --header "Content-Type: application/json" \ --data '{"force": true}' \ <api-route>/system/config/refresh 1 The JWT token. 4.7.2. Opting out of Tekton Hub in the Developer perspective Cluster administrators can opt out of displaying Tekton Hub resources, such as tasks and pipelines, in the Pipeline builder page of the Developer perspective of an OpenShift Container Platform cluster. Prerequisite Ensure that the Red Hat OpenShift Pipelines Operator is installed on the cluster, and the oc command line tool is available. Procedure To opt out of displaying Tekton Hub resources in the Developer perspective, set the value of the enable-devconsole-integration field in the TektonConfig custom resource (CR) to false . apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: targetNamespace: openshift-pipelines ... hub: params: - name: enable-devconsole-integration value: "false" ... By default, the TektonConfig CR does not include the enable-devconsole-integration field, and the Red Hat OpenShift Pipelines Operator assumes that the value is true.
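Optionally, instead of editing the full CR, you can apply the same setting with a single command. The following is a sketch that assumes the default TektonConfig CR named config and the spec.hub.params path shown in the preceding example; verify the field path for your Operator version before applying it:
USD oc patch tektonconfig config --type="merge" -p '{"spec": {"hub": {"params": [{"name": "enable-devconsole-integration", "value": "false"}]}}}'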
4.7.3. Additional resources GitHub repository of Tekton Hub . Installing OpenShift Pipelines Red Hat OpenShift Pipelines release notes 4.8. Using Pipelines as Code With Pipelines as Code, cluster administrators and users with the required privileges can define pipeline templates as part of source code Git repositories. When triggered by a source code push or a pull request for the configured Git repository, the feature runs the pipeline and reports the status. 4.8.1. Key features Pipelines as Code supports the following features: Pull request status and control on the platform hosting the Git repository. GitHub Checks API to set the status of a pipeline run, including rechecks. GitHub pull request and commit events. Pull request actions in comments, such as /retest . Git events filtering and a separate pipeline for each event. Automatic task resolution in Pipelines, including local tasks, Tekton Hub, and remote URLs. Retrieval of configurations using GitHub blobs and objects API. Access Control List (ACL) over a GitHub organization, or using a Prow style OWNER file. The tkn pac CLI plugin for managing bootstrapping and Pipelines as Code repositories. Support for GitHub App, GitHub Webhook, Bitbucket Server, and Bitbucket Cloud. 4.8.2. Installing Pipelines as Code on an OpenShift Container Platform Pipelines as Code is installed by default when you install the Red Hat OpenShift Pipelines Operator. If you are using Pipelines 1.7 or later versions, skip the procedure for manual installation of Pipelines as Code. To disable the default installation of Pipelines as Code with the Operator, set the value of the enable parameter to false in the TektonConfig custom resource. ... spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: "false" bitbucket-cloud-check-source-ip: "true" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: "true" secret-auto-create: "true" ... Optionally, you can run the following command: USD oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}' To enable the default installation of Pipelines as Code with the Red Hat OpenShift Pipelines Operator, set the value of the enable parameter to true in the TektonConfig custom resource: ... spec: addon: enablePipelinesAsCode: false ... Optionally, you can run the following command: USD oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": true}}}}}' 4.8.3. Installing Pipelines as Code CLI Cluster administrators can use the tkn pac and opc CLI tools on local machines or as containers for testing. The tkn pac and opc CLI tools are installed automatically when you install the tkn CLI for Red Hat OpenShift Pipelines. You can install the tkn pac and opc version 1.9.1 binaries for the supported platforms: Linux (x86_64, amd64) Linux on IBM Z and LinuxONE (s390x) Linux on IBM Power Systems (ppc64le) Mac Windows Note The binaries are compatible with tkn version 0.23.1 . 4.8.4. Using Pipelines as Code with a Git repository hosting service provider After installing Pipelines as Code, cluster administrators can configure a Git repository hosting service provider. Currently, the following services are supported: GitHub App GitHub Webhook GitLab Bitbucket Server Bitbucket Cloud Note GitHub App is the recommended service for using with Pipelines as Code. 4.8.5. 
Using Pipelines as Code with a GitHub App GitHub Apps act as a point of integration with Red Hat OpenShift Pipelines and bring the advantage of Git-based workflows to OpenShift Pipelines. Cluster administrators can configure a single GitHub App for all cluster users. For GitHub Apps to work with Pipelines as Code, ensure that the webhook of the GitHub App points to the Pipelines as Code event listener route (or ingress endpoint) that listens for GitHub events. 4.8.5.1. Configuring a GitHub App Cluster administrators can create a GitHub App by running the following command: USD tkn pac bootstrap github-app If the tkn pac CLI plugin is not installed, you can create the GitHub App manually. Procedure To create and configure a GitHub App manually for Pipelines as Code, perform the following steps: Sign in to your GitHub account. Go to Settings Developer settings GitHub Apps , and click New GitHub App . Provide the following information in the GitHub App form: GitHub Application Name : OpenShift Pipelines Homepage URL : OpenShift Console URL Webhook URL : The Pipelines as Code route or ingress URL. You can find it by running the command echo https://USD(oc get route -n openshift-pipelines pipelines-as-code-controller -o jsonpath='{.spec.host}') . Webhook secret : An arbitrary secret. You can generate a secret by executing the command openssl rand -hex 20 . Select the following Repository permissions : Checks : Read & Write Contents : Read & Write Issues : Read & Write Metadata : Read-only Pull request : Read & Write Select the following Organization permissions : Members : Readonly Plan : Readonly Select the following User permissions : Commit comment Issue comment Pull request Pull request review Pull request review comment Push Click Create GitHub App . On the Details page of the newly created GitHub App, note the App ID displayed at the top. In the Private keys section, click Generate Private key to automatically generate and download a private key for the GitHub app. Securely store the private key for future reference and usage. 4.8.5.2. Configuring Pipelines as Code to access a GitHub App To configure Pipelines as Code to access the newly created GitHub App, execute the following command: + USD oc -n openshift-pipelines create secret generic pipelines-as-code-secret \ --from-literal github-private-key="USD(cat <PATH_PRIVATE_KEY>)" \ 1 --from-literal github-application-id="<APP_ID>" \ 2 --from-literal webhook.secret="<WEBHOOK_SECRET>" 3 1 The path to the private key you downloaded while configuring the GitHub App. 2 The App ID of the GitHub App. 3 The webhook secret provided when you created the GitHub App. Note Pipelines as Code works automatically with GitHub Enterprise by detecting the header set from GitHub Enterprise and using it for the GitHub Enterprise API authorization URL. 4.8.5.3. Creating a GitHub App in administrator perspective As a cluster administrator, you can configure your GitHub App with the OpenShift Container Platform cluster to use Pipelines as Code. This configuration allows you to execute a set of tasks required for build deployment. Prerequisites You have installed the Red Hat OpenShift Pipelines pipelines-1.10 operator from the Operator Hub. Procedure In the administrator perspective, navigate to Pipelines using the navigation pane. Click Setup GitHub App on the Pipelines page. Enter your GitHub App name. For example, pipelines-ci-clustername-testui . Click Setup . Enter your Git password when prompted in the browser. 
Click Create GitHub App for <username> , where <username> is your GitHub user name. Verification After successful creation of the GitHub App, the OpenShift Container Platform web console opens and displays the details about the application. The details of the GitHub App are saved as a secret in the openshift-pipelines namespace. To view details such as the name, link, and secret associated with the GitHub App, navigate to Pipelines and click View GitHub App . 4.8.6. Using Pipelines as Code with GitHub Webhook Use Pipelines as Code with GitHub Webhook on your repository if you cannot create a GitHub App. However, using Pipelines as Code with GitHub Webhook does not give you access to the GitHub Check Runs API. The status of the tasks is added as comments on the pull request and is unavailable under the Checks tab. Note Pipelines as Code with GitHub Webhook does not support GitOps comments such as /retest and /ok-to-test . To restart the continuous integration (CI), push a new commit to the repository. For example, to retrigger the CI without changing any content, you can amend the latest commit and force-push it: USD git commit --amend -a --no-edit && git push --force-with-lease <origin> <branchname> Prerequisites Ensure that Pipelines as Code is installed on the cluster. For authorization, create a personal access token on GitHub. To generate a secure and fine-grained token, restrict its scope to a specific repository and grant the following permissions: Table 4.7. Permissions for fine-grained tokens Name Access Administration Read-only Metadata Read-only Content Read-only Commit statuses Read and Write Pull request Read and Write Webhooks Read and Write To use classic tokens, set the scope as public_repo for public repositories and repo for private repositories. In addition, provide a short token expiration period and note the token in an alternate location. Note If you want to configure the webhook using the tkn pac CLI, add the admin:repo_hook scope. Procedure Configure the webhook and create a Repository custom resource (CR). To configure a webhook and create a Repository CR automatically using the tkn pac CLI tool, use the following command: USD tkn pac create repo Sample interactive output ? Enter the Git repository url (default: https://github.com/owner/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes [β] Repository owner-repo has been created in repo-pipelines namespace [β] Setting up GitHub Webhook for Repository https://github.com/owner/repo π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: sJNwdmTifHTs): sJNwdmTifHTs i You now need to create a GitHub personal access token, please checkout the docs at https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token for the required scopes ? Please enter the GitHub access token: **************************************** [β] Webhook has been created on repository owner/repo π Webhook Secret owner-repo has been created in the repo-pipelines namespace. π Repository CR owner-repo has been updated with webhook secret in the repo-pipelines namespace i Directory .tekton has been created. [β] We have detected your repository using the programming language Go.
[β] A basic template has been created in /home/Go/src/github.com/owner/repo/.tekton/pipelinerun.yaml, feel free to customize it. To configure a webhook and create a Repository CR manually , perform the following steps: On your OpenShift cluster, extract the public URL of the Pipelines as Code controller. USD echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}') On your GitHub repository or organization, perform the following steps: Go to Settings -> Webhooks and click Add webhook . Set the Payload URL to the Pipelines as Code controller public URL. Select the content type as application/json . Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. USD openssl rand -hex 20 Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Add webhook . On your OpenShift cluster, create a Secret object with the personal access token and webhook secret. USD oc -n target-namespace create secret generic github-webhook-config \ --from-literal provider.token="<GITHUB_PERSONAL_ACCESS_TOKEN>" \ --from-literal webhook.secret="<WEBHOOK_SECRET>" Create a Repository CR. Example: Repository CR apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://github.com/owner/repo" git_provider: secret: name: "github-webhook-config" key: "provider.token" # Set this if you have a different key in your secret webhook_secret: name: "github-webhook-config" key: "webhook.secret" # Set this if you have a different key for your secret Note Pipelines as Code assumes that the OpenShift Secret object and the Repository CR are in the same namespace. Optional: For an existing Repository CR, add multiple GitHub Webhook secrets or provide a substitute for a deleted secret. Add a webhook using the tkn pac CLI tool. Example: Additional webhook using the tkn pac CLI USD tkn pac webhook add -n repo-pipelines Sample interactive output [β] Setting up GitHub Webhook for Repository https://github.com/owner/repo π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: AeHdHTJVfAeH): AeHdHTJVfAeH [β] Webhook has been created on repository owner/repo π Secret owner-repo has been updated with webhook secert in the repo-pipelines namespace. Update the webhook.secret key in the existing OpenShift Secret object. Optional: For an existing Repository CR, update the personal access token. Update the personal access token using the tkn pac CLI tool. Example: Updating personal access token using the tkn pac CLI USD tkn pac webhook update-token -n repo-pipelines Sample interactive output ? Please enter your personal access token: **************************************** π Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace. Alternatively, update the personal access token by modifying the Repository CR. Find the name of the secret in the Repository CR. ... spec: git_provider: secret: name: "github-webhook-config" ... Use the oc patch command to update the values of the USDNEW_TOKEN in the USDtarget_namespace namespace. 
USD oc -n USDtarget_namespace patch secret github-webhook-config -p "{\"data\": {\"provider.token\": \"USD(echo -n USDNEW_TOKEN|base64 -w0)\"}}" Additional resources GitHub Webhook documentation on GitHub GitHub Check Runs documentation on GitHub Creating a personal access token on GitHub Classic tokens with pre-filled permissions 4.8.7. Using Pipelines as Code with GitLab If your organization or project uses GitLab as the preferred platform, you can use Pipelines as Code for your repository with a webhook on GitLab. Prerequisites Ensure that Pipelines as Code is installed on the cluster. For authorization, generate a personal access token as the manager of the project or organization on GitLab. Note If you want to configure the webhook using the tkn pac CLI, add the admin:repo_hook scope to the token. Using a token scoped for a specific project cannot provide API access to a merge request (MR) sent from a forked repository. In such cases, Pipelines as Code displays the result of a pipeline as a comment on the MR. Procedure Configure the webhook and create a Repository custom resource (CR). To configure a webhook and create a Repository CR automatically using the tkn pac CLI tool, use the following command: USD tkn pac create repo Sample interactive output ? Enter the Git repository url (default: https://gitlab.com/owner/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes [β] Repository repositories-project has been created in repo-pipelines namespace [β] Setting up GitLab Webhook for Repository https://gitlab.com/owner/repo ? Please enter the project ID for the repository you want to be configured, project ID refers to an unique ID (e.g. 34405323) shown at the top of your GitLab project : 17103 π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: lFjHIEcaGFlF): lFjHIEcaGFlF i \ufe0fYou now need to create a GitLab personal access token with `api` scope i \ufe0fGo to this URL to generate one https://gitlab.com/-/profile/personal_access_tokens, see https://is.gd/rOEo9B for documentation ? Please enter the GitLab access token: ************************** ? Please enter your GitLab API URL:: https://gitlab.com [β] Webhook has been created on your repository π Webhook Secret repositories-project has been created in the repo-pipelines namespace. π Repository CR repositories-project has been updated with webhook secret in the repo-pipelines namespace i Directory .tekton has been created. [β] A basic template has been created in /home/Go/src/gitlab.com/repositories/project/.tekton/pipelinerun.yaml, feel free to customize it. To configure a webhook and create a Repository CR manually , perform the following steps: On your OpenShift cluster, extract the public URL of the Pipelines as Code controller. USD echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}') On your GitLab project, perform the following steps: Use the left sidebar to go to Settings -> Webhooks . Set the URL to the Pipelines as Code controller public URL. Add a webhook secret and note it in an alternate location. With openssl installed on your local machine, generate a random secret. 
USD openssl rand -hex 20 Click Let me select individual events and select these events: Commit comments , Issue comments , Pull request , and Pushes . Click Save changes . On your OpenShift cluster, create a Secret object with the personal access token and webhook secret. USD oc -n target-namespace create secret generic gitlab-webhook-config \ --from-literal provider.token="<GITLAB_PERSONAL_ACCESS_TOKEN>" \ --from-literal webhook.secret="<WEBHOOK_SECRET>" Create a Repository CR. Example: Repository CR apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://gitlab.com/owner/repo" 1 git_provider: secret: name: "gitlab-webhook-config" key: "provider.token" # Set this if you have a different key in your secret webhook_secret: name: "gitlab-webhook-config" key: "webhook.secret" # Set this if you have a different key for your secret 1 Currently, Pipelines as Code does not automatically detects private instances for GitLab. In such cases, specify the API URL under the git_provider.url spec. In general, you can use the git_provider.url spec to manually override the API URL. Note Pipelines as Code assumes that the OpenShift Secret object and the Repository CR are in the same namespace. Optional: For an existing Repository CR, add multiple GitLab Webhook secrets or provide a substitute for a deleted secret. Add a webhook using the tkn pac CLI tool. Example: Adding additional webhook using the tkn pac CLI USD tkn pac webhook add -n repo-pipelines Sample interactive output [β] Setting up GitLab Webhook for Repository https://gitlab.com/owner/repo π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: AeHdHTJVfAeH): AeHdHTJVfAeH [β] Webhook has been created on repository owner/repo π Secret owner-repo has been updated with webhook secert in the repo-pipelines namespace. Update the webhook.secret key in the existing OpenShift Secret object. Optional: For an existing Repository CR, update the personal access token. Update the personal access token using the tkn pac CLI tool. Example: Updating personal access token using the tkn pac CLI USD tkn pac webhook update-token -n repo-pipelines Sample interactive output ? Please enter your personal access token: **************************************** π Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace. Alternatively, update the personal access token by modifying the Repository CR. Find the name of the secret in the Repository CR. ... spec: git_provider: secret: name: "gitlab-webhook-config" ... Use the oc patch command to update the values of the USDNEW_TOKEN in the USDtarget_namespace namespace. USD oc -n USDtarget_namespace patch secret gitlab-webhook-config -p "{\"data\": {\"provider.token\": \"USD(echo -n USDNEW_TOKEN|base64 -w0)\"}}" Additional resources GitLab Webhook documentation on GitLab 4.8.8. Using Pipelines as Code with Bitbucket Cloud If your organization or project uses Bitbucket Cloud as the preferred platform, you can use Pipelines as Code for your repository with a webhook on Bitbucket Cloud. Prerequisites Ensure that Pipelines as Code is installed on the cluster. Create an app password on Bitbucket Cloud. 
Check the following boxes to add appropriate permissions to the token: Account: Email , Read Workspace membership: Read , Write Projects: Read , Write Issues: Read , Write Pull requests: Read , Write Note If you want to configure the webhook using the tkn pac CLI, add the Webhooks : Read and Write permission to the token. Once generated, save a copy of the password or token in an alternate location. Procedure Configure the webhook and create a Repository CR. To configure a webhook and create a Repository CR automatically using the tkn pac CLI tool, use the following command: USD tkn pac create repo Sample interactive output ? Enter the Git repository url (default: https://bitbucket.org/workspace/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes [β] Repository workspace-repo has been created in repo-pipelines namespace [β] Setting up Bitbucket Webhook for Repository https://bitbucket.org/workspace/repo ? Please enter your bitbucket cloud username: <username> i \ufe0fYou now need to create a Bitbucket Cloud app password, please checkout the docs at https://is.gd/fqMHiJ for the required permissions ? Please enter the Bitbucket Cloud app password: ************************************ π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes [β] Webhook has been created on repository workspace/repo π Webhook Secret workspace-repo has been created in the repo-pipelines namespace. π Repository CR workspace-repo has been updated with webhook secret in the repo-pipelines namespace i Directory .tekton has been created. [β] A basic template has been created in /home/Go/src/bitbucket/repo/.tekton/pipelinerun.yaml, feel free to customize it. To configure a webhook and create a Repository CR manually , perform the following steps: On your OpenShift cluster, extract the public URL of the Pipelines as Code controller. USD echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}') On Bitbucket Cloud, perform the following steps: Use the left navigation pane of your Bitbucket Cloud repository to go to Repository settings -> Webhooks and click Add webhook . Set a Title . For example, "Pipelines as Code". Set the URL to the Pipelines as Code controller public URL. Select these events: Repository: Push , Pull Request: Created , Pull Request: Updated , and Pull Request: Comment created . Click Save . On your OpenShift cluster, create a Secret object with the app password in the target namespace. USD oc -n target-namespace create secret generic bitbucket-cloud-token \ --from-literal provider.token="<BITBUCKET_APP_PASSWORD>" Create a Repository CR. Example: Repository CR apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://bitbucket.com/workspace/repo" branch: "main" git_provider: user: "<BITBUCKET_USERNAME>" 1 secret: name: "bitbucket-cloud-token" 2 key: "provider.token" # Set this if you have a different key in your secret 1 You can only reference a user by the ACCOUNT_ID in an owner file. 2 Pipelines as Code assumes that the secret referred in the git_provider.secret spec and the Repository CR is in the same namespace. Note The tkn pac create and tkn pac bootstrap commands are not supported on Bitbucket Cloud. 
Bitbucket Cloud does not support webhook secrets. To secure the payload and prevent hijacking of the CI, Pipelines as Code fetches the list of Bitbucket Cloud IP addresses and ensures that the webhook receptions come only from those IP addresses. To disable the default behavior, set the bitbucket-cloud-check-source-ip key to false in the Pipelines as Code config map for the pipelines-as-code namespace. To allow additional safe IP addresses or networks, add them as comma separated values to the bitbucket-cloud-additional-source-ip key in the Pipelines as Code config map for the pipelines-as-code namespace. Optional: For an existing Repository CR, add multiple Bitbucket Cloud Webhook secrets or provide a substitute for a deleted secret. Add a webhook using the tkn pac CLI tool. Example: Adding additional webhook using the tkn pac CLI USD tkn pac webhook add -n repo-pipelines Sample interactive output [β] Setting up Bitbucket Webhook for Repository https://bitbucket.org/workspace/repo ? Please enter your bitbucket cloud username: <username> π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes [β] Webhook has been created on repository workspace/repo π Secret workspace-repo has been updated with webhook secret in the repo-pipelines namespace. Note Use the [-n <namespace>] option with the tkn pac webhook add command only when the Repository CR exists in a namespace other than the default namespace. Update the webhook.secret key in the existing OpenShift Secret object. Optional: For an existing Repository CR, update the personal access token. Update the personal access token using the tkn pac CLI tool. Example: Updating personal access token using the tkn pac CLI USD tkn pac webhook update-token -n repo-pipelines Sample interactive output ? Please enter your personal access token: **************************************** π Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace. Note Use the [-n <namespace>] option with the tkn pac webhook update-token command only when the Repository CR exists in a namespace other than the default namespace. Alternatively, update the personal access token by modifying the Repository CR. Find the name of the secret in the Repository CR. ... spec: git_provider: user: "<BITBUCKET_USERNAME>" secret: name: "bitbucket-cloud-token" key: "provider.token" ... Use the oc patch command to update the values of the USDpassword in the USDtarget_namespace namespace. USD oc -n USDtarget_namespace patch secret bitbucket-cloud-token -p "{\"data\": {\"provider.token\": \"USD(echo -n USDNEW_TOKEN|base64 -w0)\"}}" Additional resources Creating app password on Bitbucket Cloud Introducing Altassian Account ID and Nicknames 4.8.9. Using Pipelines as Code with Bitbucket Server If your organization or project uses Bitbucket Server as the preferred platform, you can use Pipelines as Code for your repository with a webhook on Bitbucket Server. Prerequisites Ensure that Pipelines as Code is installed on the cluster. Generate a personal access token as the manager of the project on Bitbucket Server, and save a copy of it in an alternate location. Note The token must have the PROJECT_ADMIN and REPOSITORY_ADMIN permissions. The token must have access to forked repositories in pull requests. Procedure On your OpenShift cluster, extract the public URL of the Pipelines as Code controller. 
USD echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}') On Bitbucket Server, perform the following steps: Use the left navigation pane of your Bitbucket Data Center repository to go to Repository settings -> Webhooks and click Add webhook . Set a Title . For example, "Pipelines as Code". Set the URL to the Pipelines as Code controller public URL. Add a webhook secret and save a copy of it in an alternate location. If you have openssl installed on your local machine, generate a random secret using the following command: USD openssl rand -hex 20 Select the following events: Repository: Push Repository: Modified Pull Request: Opened Pull Request: Source branch updated Pull Request: Comment added Click Save . On your OpenShift cluster, create a Secret object with the app password in the target namespace. USD oc -n target-namespace create secret generic bitbucket-server-webhook-config \ --from-literal provider.token="<PERSONAL_TOKEN>" \ --from-literal webhook.secret="<WEBHOOK_SECRET>" Create a Repository CR. Example: Repository CR --- apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: "https://bitbucket.com/workspace/repo" git_provider: url: "https://bitbucket.server.api.url/rest" 1 user: "<BITBUCKET_USERNAME>" 2 secret: 3 name: "bitbucket-server-webhook-config" key: "provider.token" # Set this if you have a different key in your secret webhook_secret: name: "bitbucket-server-webhook-config" key: "webhook.secret" # Set this if you have a different key for your secret 1 Ensure that you have the right Bitbucket Server API URL without the /api/v1.0 suffix. Usually, the default install has a /rest suffix. 2 You can only reference a user by the ACCOUNT_ID in an owner file. 3 Pipelines as Code assumes that the secret referred in the git_provider.secret spec and the Repository CR is in the same namespace. Note The tkn pac create and tkn pac bootstrap commands are not supported on Bitbucket Server. Additional resources Creating personal tokens on Bitbucket Server Creating webhooks on Bitbucket server 4.8.10. Interfacing Pipelines as Code with custom certificates To configure Pipelines as Code with a Git repository that is accessible with a privately signed or custom certificate, you can expose the certificate to Pipelines as Code. Procedure If you have installed Pipelines as Code using the Red Hat OpenShift Pipelines Operator, you can add your custom certificate to the cluster using the Proxy object. The Operator exposes the certificate in all Red Hat OpenShift Pipelines components and workloads, including Pipelines as Code. Additional resources Enabling the cluster-wide proxy 4.8.11. Using the Repository CRD with Pipelines as Code The Repository custom resource (CR) has the following primary functions: Inform Pipelines as Code about processing an event from a URL. Inform Pipelines as Code about the namespace for the pipeline runs. Reference an API secret, username, or an API URL necessary for Git provider platforms when using webhook methods. Provide the last pipeline run status for a repository. You can use the tkn pac CLI or other alternative methods to create a Repository CR inside the target namespace. For example: cat <<EOF|kubectl create -n my-pipeline-ci -f- 1 apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: project-repository spec: url: "https://github.com/<repository>/<project>" EOF 1 my-pipeline-ci is the target namespace. 
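After you create the CR, you can verify that it exists in the target namespace before sending any events. A minimal check, reusing the my-pipeline-ci namespace from the example above and the repo short name for the Repository resource that is also used later in this section:
USD oc get repo -n my-pipeline-ci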
Whenever there is an event coming from the URL such as https://github.com/<repository>/<project> , Pipelines as Code matches it and starts checking out the content of the <repository>/<project> repository for pipeline run to match the content in the .tekton/ directory. Note You must create the Repository CRD in the same namespace where pipelines associated with the source code repository will be executed; it cannot target a different namespace. If multiple Repository CRDs match the same event, Pipelines as Code will process only the oldest one. If you need to match a specific namespace, add the pipelinesascode.tekton.dev/target-namespace: "<mynamespace>" annotation. Such explicit targeting prevents a malicious actor from executing a pipeline run in a namespace to which they do not have access. 4.8.11.1. Setting concurrency limits in the Repository CRD You can use the concurrency_limit spec in the Repository CRD to define the maximum number of pipeline runs running simultaneously for a repository. ... spec: concurrency_limit: <number> ... If there are multiple pipeline runs matching an event, the pipeline runs that match the event start in an alphabetical order. For example, if you have three pipeline runs in the .tekton directory and you create a pull request with a concurrency_limit of 1 in the repository configuration, then all the pipeline runs are executed in an alphabetical order. At any given time, only one pipeline run is in the running state while the rest are queued. 4.8.12. Using Pipelines as Code resolver The Pipelines as Code resolver ensures that a running pipeline run does not conflict with others. To split your pipeline and pipeline run, store the files in the .tekton/ directory or its subdirectories. If Pipelines as Code observes a pipeline run with a reference to a task or a pipeline in any YAML file located in the .tekton/ directory, Pipelines as Code automatically resolves the referenced task to provide a single pipeline run with an embedded spec in a PipelineRun object. If Pipelines as Code cannot resolve the referenced tasks in the Pipeline or PipelineSpec definition, the run fails before applying any changes to the cluster. You can see the issue on your Git provider platform and inside the events of the target namespace where the Repository CR is located. The resolver skips resolving if it observes the following type of tasks: A reference to a cluster task. A task or pipeline bundle. A custom task with an API version that does not have a tekton.dev/ prefix. The resolver uses such tasks literally, without any transformation. To test your pipeline run locally before sending it in a pull request, use the tkn pac resolve command. You can also reference remote pipelines and tasks. 4.8.12.1. Using remote task annotations with Pipelines as Code Pipelines as Code supports fetching remote tasks or pipelines by using annotations in a pipeline run. If you reference a remote task in a pipeline run, or a pipeline in a PipelineRun or a PipelineSpec object, the Pipelines as Code resolver automatically includes it. If there is any error while fetching the remote tasks or parsing them, Pipelines as Code stops processing the tasks. To include remote tasks, refer to the following examples of annotation: Reference remote tasks in Tekton Hub Reference a single remote task in Tekton Hub. ... pipelinesascode.tekton.dev/task: "git-clone" 1 ... 1 Pipelines as Code includes the latest version of the task from the Tekton Hub. Reference multiple remote tasks from Tekton Hub ... 
pipelinesascode.tekton.dev/task: "[git-clone, golang-test, tkn]" ... Reference multiple remote tasks from Tekton Hub using the -<NUMBER> suffix. ... pipelinesascode.tekton.dev/task: "git-clone" pipelinesascode.tekton.dev/task-1: "golang-test" pipelinesascode.tekton.dev/task-2: "tkn" 1 ... 1 By default, Pipelines as Code interprets the string as the latest task to fetch from Tekton Hub. Reference a specific version of a remote task from Tekton Hub. ... pipelinesascode.tekton.dev/task: "[git-clone:0.1]" 1 ... 1 Refers to the 0.1 version of the git-clone remote task from Tekton Hub. Remote tasks using URLs ... pipelinesascode.tekton.dev/task: "<https://remote.url/task.yaml>" 1 ... 1 The public URL to the remote task. Note If you use GitHub and the remote task URL uses the same host as the Repository CRD, Pipelines as Code uses the GitHub token and fetches the URL using the GitHub API. For example, if you have a repository URL similar to https://github.com/<organization>/<repository> and the remote HTTP URL references a GitHub blob similar to https://github.com/<organization>/<repository>/blob/<mainbranch>/<path>/<file> , Pipelines as Code fetches the task definition files from that private repository with the GitHub App token. When you work on a public GitHub repository, Pipelines as Code acts similarly for a GitHub raw URL such as https://raw.githubusercontent.com/<organization>/<repository>/<mainbranch>/<path>/<file> . GitHub App tokens are scoped to the owner or organization where the repository is located. When you use the GitHub webhook method, you can fetch any private or public repository on any organization where the personal token is allowed. Reference a task from a YAML file inside your repository ... pipelinesascode.tekton.dev/task: "<share/tasks/git-clone.yaml>" 1 ... 1 Relative path to the local file containing the task definition. 4.8.12.2. Using remote pipeline annotations with Pipelines as Code You can share a pipeline definition across multiple repositories by using the remote pipeline annotation. ... pipelinesascode.tekton.dev/pipeline: "<https://git.provider/raw/pipeline.yaml>" 1 ... 1 URL to the remote pipeline definition. You can also provide locations for files inside the same repository. Note You can reference only one pipeline definition using the annotation. 4.8.13. Creating a pipeline run using Pipelines as Code To run pipelines using Pipelines as Code, you can create pipelines definitions or templates as YAML files in the .tekton/ directory of the repository. You can reference YAML files in other repositories using remote URLs, but pipeline runs are only triggered by events in the repository containing the .tekton/ directory. The Pipelines as Code resolver bundles the pipeline runs with all tasks as a single pipeline run without external dependencies. Note For pipelines, use at least one pipeline run with a spec, or a separated Pipeline object. For tasks, embed task spec inside a pipeline, or define it separately as a Task object. Parameterizing commits and URLs You can specify the parameters of your commit and URL by using dynamic, expandable variables with the {{<var>}} format. Currently, you can use the following variables: {{repo_owner}} : The repository owner. {{repo_name}} : The repository name. {{repo_url}} : The repository full URL. {{revision}} : Full SHA revision of a commit. {{sender}} : The username or account id of the sender of the commit. {{source_branch}} : The branch name where the event originated. 
{{target_branch}} : The branch name that the event targets. For push events, it is the same as the source_branch . {{pull_request_number}} : The pull or merge request number, defined only for a pull_request event type. {{git_auth_secret}} : The secret name that is generated automatically with the Git provider's token for checking out private repositories. Matching an event to a pipeline run You can match different Git provider events with each pipeline by using special annotations on the pipeline run. If there are multiple pipeline runs matching an event, Pipelines as Code runs them in parallel and posts the results to the Git provider as soon as a pipeline run finishes. Matching a pull event to a pipeline run You can use the following example to match the pipeline-pr-main pipeline with a pull_request event that targets the main branch: ... metadata: name: pipeline-pr-main annotations: pipelinesascode.tekton.dev/on-target-branch: "[main]" 1 pipelinesascode.tekton.dev/on-event: "[pull_request]" ... 1 You can specify multiple branches by adding comma-separated entries. For example, "[main, release-nightly]" . In addition, you can specify the following: Full references to branches such as "refs/heads/main" Globs with pattern matching such as "refs/heads/\*" Tags such as "refs/tags/1.\*" Matching a push event to a pipeline run You can use the following example to match the pipeline-push-on-main pipeline with a push event targeting the refs/heads/main branch: ... metadata: name: pipeline-push-on-main annotations: pipelinesascode.tekton.dev/on-target-branch: "[refs/heads/main]" 1 pipelinesascode.tekton.dev/on-event: "[push]" ... 1 You can specify multiple branches by adding comma-separated entries. For example, "[main, release-nightly]" . In addition, you can specify the following: Full references to branches such as "refs/heads/main" Globs with pattern matching such as "refs/heads/\*" Tags such as "refs/tags/1.\*" Advanced event matching Pipelines as Code supports using Common Expression Language (CEL) based filtering for advanced event matching. If you have the pipelinesascode.tekton.dev/on-cel-expression annotation in your pipeline run, Pipelines as Code uses the CEL expression and skips the on-target-branch annotation. Compared to the simple on-target-branch annotation matching, the CEL expressions allow complex filtering and negation. To use CEL-based filtering with Pipelines as Code, consider the following examples of annotations: To match a pull_request event targeting the main branch and coming from the wip branch: ... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch == "main" && source_branch == "wip" ... To run a pipeline only if a path has changed, you can use the .pathChanged suffix function with a glob pattern: ... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && "docs/\*.md".pathChanged() 1 ... 1 Matches all markdown files in the docs directory. To match all pull requests starting with the title [DOWNSTREAM] : ... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && event_title.startsWith("[DOWNSTREAM]") ... To run a pipeline on a pull_request event, but skip the experimental branch: ... pipelinesascode.tekton.dev/on-cel-expression: | event == "pull_request" && target_branch != "experimental" ... For advanced CEL-based filtering while using Pipelines as Code, you can use the following fields and suffix functions: event : A push or pull_request event. target_branch : The target branch.
source_branch : The branch of origin of a pull_request event. For push events, it is same as the target_branch . event_title : Matches the title of the event, such as the commit title for a push event, and the title of a pull or merge request for a pull_request event. Currently, only GitHub, Gitlab, and Bitbucket Cloud are the supported providers. .pathChanged : A suffix function to a string. The string can be a glob of a path to check if the path has changed. Currently, only GitHub and Gitlab are supported as providers. Using the temporary GitHub App token for Github API operations You can use the temporary installation token generated by Pipelines as Code from GitHub App to access the GitHub API. The token value is stored in the temporary {{git_auth_secret}} dynamic variable generated for private repositories in the git-provider-token key. For example, to add a comment to a pull request, you can use the github-add-comment task from Tekton Hub using a Pipelines as Code annotation: ... pipelinesascode.tekton.dev/task: "github-add-comment" ... You can then add a task to the tasks section or finally tasks in the pipeline run definition: [...] tasks: - name: taskRef: name: github-add-comment params: - name: REQUEST_URL value: "{{ repo_url }}/pull/{{ pull_request_number }}" 1 - name: COMMENT_OR_FILE value: "Pipelines as Code IS GREAT!" - name: GITHUB_TOKEN_SECRET_NAME value: "{{ git_auth_secret }}" - name: GITHUB_TOKEN_SECRET_KEY value: "git-provider-token" ... 1 By using the dynamic variables, you can reuse this snippet template for any pull request from any repository. Note On GitHub Apps, the generated installation token is available for 8 hours and scoped to the repository from where the events originate unless configured differently on the cluster. Additional resources CEL language specification 4.8.14. Running a pipeline run using Pipelines as Code With default configuration, Pipelines as Code runs any pipeline run in the .tekton/ directory of the default branch of repository, when specified events such as pull request or push occurs on the repository. For example, if a pipeline run on the default branch has the annotation pipelinesascode.tekton.dev/on-event: "[pull_request]" , it will run whenever a pull request event occurs. In the event of a pull request or a merge request, Pipelines as Code also runs pipelines from branches other than the default branch, if the following conditions are met by the author of the pull request: The author is the owner of the repository. The author is a collaborator on the repository. The author is a public member on the organization of the repository. The pull request author is listed in an OWNER file located in the repository root of the main branch as defined in the GitHub configuration for the repository. Also, the pull request author is added to either approvers or reviewers section. For example, if an author is listed in the approvers section, then a pull request raised by that author starts the pipeline run. ... approvers: - approved ... If the pull request author does not meet the requirements, another user who meets the requirements can comment /ok-to-test on the pull request, and start the pipeline run. Pipeline run execution A pipeline run always runs in the namespace of the Repository CRD associated with the repository that generated the event. You can observe the execution of your pipeline runs using the tkn pac CLI tool. 
To follow the execution of the last pipeline run, use the following example: USD tkn pac logs -n <my-pipeline-ci> -L 1 1 my-pipeline-ci is the namespace for the Repository CRD. To follow the execution of any pipeline run interactively, use the following example: USD tkn pac logs -n <my-pipeline-ci> 1 1 my-pipeline-ci is the namespace for the Repository CRD. If you need to view a pipeline run other than the last one, you can use the tkn pac logs command to select a PipelineRun attached to the repository: If you have configured Pipelines as Code with a GitHub App, Pipelines as Code posts a URL in the Checks tab of the GitHub App. You can click the URL and follow the pipeline execution. Restarting a pipeline run You can restart a pipeline run with no events, such as sending a new commit to your branch or raising a pull request. On a GitHub App, go to the Checks tab and click Re-run . If you target a pull or merge request, use the following comments inside your pull request to restart all or specific pipeline runs: The /retest comment restarts all pipeline runs. The /retest <pipelinerun-name> comment restarts a specific pipeline run. The /cancel comment cancels all pipeline runs. The /cancel <pipelinerun-name> comment cancels a specific pipeline run. The results of the comments are visible under the Checks tab of a GitHub App. 4.8.15. Monitoring pipeline run status using Pipelines as Code Depending on the context and supported tools, you can monitor the status of a pipeline run in different ways. Status on GitHub Apps When a pipeline run finishes, the status is added in the Check tabs with limited information on how long each task of your pipeline took, and the output of the tkn pipelinerun describe command. Log error snippet When Pipelines as Code detects an error in one of the tasks of a pipeline, a small snippet consisting of the last 3 lines in the task breakdown of the first failed task is displayed. Note Pipelines as Code avoids leaking secrets by looking into the pipeline run and replacing secret values with hidden characters. However, Pipelines as Code cannot hide secrets coming from workspaces and envFrom source. Annotations for log error snippets In the Pipelines as Code config map, if you set the error-detection-from-container-logs parameter to true , Pipelines as Code detects the errors from the container logs and adds them as annotations on the pull request where the error occurred. Important This feature is in Technology Preview. Currently, Pipelines as Code supports only the simple cases where the error looks like makefile or grep output of the following format: <filename>:<line>:<column>: <error message> You can customize the regular expression used to detect the errors with the error-detection-simple-regexp field. The regular expression uses named groups to give flexibility on how to specify the matching. The groups needed to match are filename, line, and error. You can view the Pipelines as Code config map for the default regular expression. Note By default, Pipelines as Code scans only the last 50 lines of the container logs. You can increase this value in the error-detection-max-number-of-lines field or set -1 for an unlimited number of lines. However, such configurations may increase the memory usage of the watcher. Status for webhook For webhook, when the event is a pull request, the status is added as a comment on the pull or merge request. 
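The error-detection behavior described above is driven by fields in the Pipelines as Code config map, which is covered in more detail in "Customizing Pipelines as Code configuration". The following excerpt is a sketch only, assuming the config map is named pipelines-as-code in the pipelines-as-code namespace; the regular expression is an illustrative value that follows the <filename>:<line>:<column>: <error message> format, not the shipped default.

Example: Error-detection fields in the Pipelines as Code config map (illustrative)

apiVersion: v1
kind: ConfigMap
metadata:
  name: pipelines-as-code
  namespace: pipelines-as-code
data:
  # Annotate pull requests with errors found in container logs (Technology Preview)
  error-detection-from-container-logs: "true"
  # Scan the last 100 lines of the container logs; set to "-1" for no limit
  error-detection-max-number-of-lines: "100"
  # The named groups filename, line, and error are required for matching
  error-detection-simple-regexp: '^(?P<filename>[^:]+):(?P<line>[0-9]+):(?P<column>[0-9]+): (?P<error>.*)'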
Failures
If a namespace is matched to a Repository CRD, Pipelines as Code emits its failure log messages in the Kubernetes events inside the namespace.

Status associated with Repository CRD
The last 5 status messages for a pipeline run are stored inside the Repository custom resource.

USD oc get repo -n <pipelines-as-code-ci>

NAME                   URL                                                         NAMESPACE              SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
pipelines-as-code-ci   https://github.com/openshift-pipelines/pipelines-as-code    pipelines-as-code-ci   True        Succeeded   59m         56m

Using the tkn pac describe command, you can extract the status of the runs associated with your repository and its metadata.

Notifications
Pipelines as Code does not manage notifications. If you need notifications, use the finally feature of pipelines.

Additional resources
An example task to send Slack messages on success or failure
An example of a pipeline run with finally tasks triggered on push events

4.8.16. Using private repositories with Pipelines as Code
Pipelines as Code supports private repositories by creating or updating a secret in the target namespace with the user token. The git-clone task from Tekton Hub uses the user token to clone private repositories.
Whenever Pipelines as Code creates a new pipeline run in the target namespace, it creates or updates a secret named in the pac-gitauth-<REPOSITORY_OWNER>-<REPOSITORY_NAME>-<RANDOM_STRING> format.
You must reference the secret with the basic-auth workspace in your pipeline run and pipeline definitions, which is then passed on to the git-clone task.

...
workspaces:
  - name: basic-auth
    secret:
      secretName: "{{ git_auth_secret }}"
...

In the pipeline, reference the basic-auth workspace so that the git-clone task can reuse it:

...
workspaces:
  - name: basic-auth
params:
  - name: repo_url
  - name: revision
...
tasks:
  - name: git-clone-from-catalog
    taskRef:
      name: git-clone 1
    workspaces:
      - name: basic-auth
        workspace: basic-auth
    params:
      - name: url
        value: USD(params.repo_url)
      - name: revision
        value: USD(params.revision)
...

1 The git-clone task picks up the basic-auth workspace and uses it to clone the private repository.
You can modify this configuration by setting the secret-auto-create flag to either a false or true value, as required in the Pipelines as Code config map.

Additional resources
An example of the git-clone task used for cloning private repositories

4.8.17. Cleaning up pipeline run using Pipelines as Code
There can be many pipeline runs in a user namespace. By setting the max-keep-runs annotation, you can configure Pipelines as Code to retain a limited number of pipeline runs that match an event. For example:

...
pipelinesascode.tekton.dev/max-keep-runs: "<max_number>" 1
...

1 Pipelines as Code starts cleaning up right after it finishes a successful execution, retaining only the maximum number of pipeline runs configured using the annotation.

Note
Pipelines as Code skips cleaning the running pipelines but cleans up the pipeline runs with an unknown status.
Pipelines as Code skips cleaning a failed pull request.

4.8.18. Using incoming webhook with Pipelines as Code
Using an incoming webhook URL and a shared secret, you can start a pipeline run in a repository.
To use incoming webhooks, specify the following within the spec section of the Repository CRD:
The incoming webhook URL that Pipelines as Code matches.
The Git provider and the user token. Currently, Pipelines as Code supports github , gitlab , and bitbucket-cloud .
Note When using incoming webhook URLs in the context of GitHub app, you must specify the token. The target branches and a secret for the incoming webhook URL. Example: Repository CRD with incoming webhook apiVersion: "pipelinesascode.tekton.dev/v1alpha1" kind: Repository metadata: name: repo namespace: ns spec: url: "https://github.com/owner/repo" git_provider: type: github secret: name: "owner-token" incoming: - targets: - main secret: name: repo-incoming-secret type: webhook-url Example: The repo-incoming-secret secret for incoming webhook apiVersion: v1 kind: Secret metadata: name: repo-incoming-secret namespace: ns type: Opaque stringData: secret: <very-secure-shared-secret> To trigger a pipeline run located in the .tekton directory of a Git repository, use the following command: USD curl -X POST 'https://control.pac.url/incoming?secret=very-secure-shared-secret&repository=repo&branch=main&pipelinerun=target_pipelinerun' Pipelines as Code matches the incoming URL and treats it as a push event. However, Pipelines as Code does not report status of the pipeline runs triggered by this command. To get a report or a notification, add it directly with a finally task to your pipeline. Alternatively, you can inspect the Repository CRD with the tkn pac CLI tool. 4.8.19. Customizing Pipelines as Code configuration To customize Pipelines as Code, cluster administrators can configure the following parameters using the pipelines-as-code config map in the pipelines-as-code namespace: Table 4.8. Customizing Pipelines as Code configuration Parameter Description Default application-name The name of the application. For example, the name displayed in the GitHub Checks labels. "Pipelines as Code CI" max-keep-days The number of the days for which the executed pipeline runs are kept in the pipelines-as-code namespace . Note that this configmap setting does not affect the cleanups of a user's pipeline runs, which are controlled by the annotations on the pipeline run definition in the user's GitHub repository. secret-auto-create Indicates whether or not a secret should be automatically created using the token generated in the GitHub application. This secret can then be used with private repositories. enabled remote-tasks When enabled, allows remote tasks from pipeline run annotations. enabled hub-url The base URL for the Tekton Hub API . https://hub.tekton.dev/ hub-catalog-name The Tekton Hub catalog name. tekton tekton-dashboard-url The URL of the Tekton Hub dashboard. Pipelines as Code uses this URL to generate a PipelineRun URL on the Tekton Hub dashboard. NA bitbucket-cloud-check-source-ip Indicates whether to secure the service requests by querying IP ranges for a public Bitbucket. Changing the parameter's default value might result into a security issue. enabled bitbucket-cloud-additional-source-ip Indicates whether to provide an additional set of IP ranges or networks, which are separated by commas. NA max-keep-run-upper-limit A maximum limit for the max-keep-run value for a pipeline run. NA default-max-keep-runs A default limit for the max-keep-run value for a pipeline run. If defined, the value is applied to all pipeline runs that do not have a max-keep-run annotation. NA auto-configure-new-github-repo Configures new GitHub repositories automatically. Pipelines as Code sets up a namespace and creates a custom resource for your repository. This parameter is only supported with GitHub applications. 
disabled auto-configure-repo-namespace-template Configures a template to automatically generate the namespace for your new repository, if auto-configure-new-github-repo is enabled. {repo_name}-pipelines error-log-snippet Enables or disables the view of a log snippet for the failed tasks, with an error in a pipeline. You can disable this parameter in the case of data leakage from your pipeline. enabled 4.8.20. Pipelines as Code command reference The tkn pac CLI tool offers the following capabilities: Bootstrap Pipelines as Code installation and configuration. Create a new Pipelines as Code repository. List all Pipelines as Code repositories. Describe a Pipelines as Code repository and the associated runs. Generate a simple pipeline run to get started. Resolve a pipeline run as if it was executed by Pipelines as Code. Tip You can use the commands corresponding to the capabilities for testing and experimentation, so that you don't have to make changes to the Git repository containing the application source code. 4.8.20.1. Basic syntax USD tkn pac [command or options] [arguments] 4.8.20.2. Global options USD tkn pac --help 4.8.20.3. Utility commands 4.8.20.3.1. bootstrap Table 4.9. Bootstrapping Pipelines as Code installation and configuration Command Description tkn pac bootstrap Installs and configures Pipelines as Code for Git repository hosting service providers, such as GitHub and GitHub Enterprise. tkn pac bootstrap --nightly Installs the nightly build of Pipelines as Code. tkn pac bootstrap --route-url <public_url_to_ingress_spec> Overrides the OpenShift route URL. By default, tkn pac bootstrap detects the OpenShift route, which is automatically associated with the Pipelines as Code controller service. If you do not have an OpenShift Container Platform cluster, it asks you for the public URL that points to the ingress endpoint. tkn pac bootstrap github-app Create a GitHub application and secrets in the pipelines-as-code namespace. 4.8.20.3.2. repository Table 4.10. Managing Pipelines as Code repositories Command Description tkn pac repo create Creates a new Pipelines as Code repository and a namespace based on the pipeline run template. tkn pac repo list Lists all the Pipelines as Code repositories and displays the last status of the associated runs. tkn pac repo describe Describes a Pipelines as Code repository and the associated runs. 4.8.20.3.3. generate Table 4.11. Generating pipeline runs using Pipelines as Code Command Description tkn pac generate Generates a simple pipeline run. When executed from the directory containing the source code, it automatically detects current Git information. In addition, it uses basic language detection capability and adds extra tasks depending on the language. For example, if it detects a setup.py file at the repository root, the pylint task is automatically added to the generated pipeline run. 4.8.20.3.4. resolve Table 4.12. Resolving and executing pipeline runs using Pipelines as Code Command Description tkn pac resolve Executes a pipeline run as if it is owned by the Pipelines as Code on service. tkn pac resolve -f .tekton/pull-request.yaml | oc apply -f - Displays the status of a live pipeline run that uses the template in .tekton/pull-request.yaml . Combined with a Kubernetes installation running on your local machine, you can observe the pipeline run without generating a new commit. 
If you run the command from a source code repository, it attempts to detect the current Git information and automatically resolve parameters such as current revision or branch. tkn pac resolve -f .tekton/pr.yaml -p revision=main -p repo_name=<repository_name> Executes a pipeline run by overriding default parameter values derived from the Git repository. The -f option can also accept a directory path and apply the tkn pac resolve command on all .yaml or .yml files in that directory. You can also use the -f flag multiple times in the same command. You can override the default information gathered from the Git repository by specifying parameter values using the -p option. For example, you can use a Git branch as a revision and a different repository name. 4.8.21. Additional resources An example of the .tekton/ directory in the Pipelines as Code repository Installing OpenShift Pipelines Installing tkn Red Hat OpenShift Pipelines release notes 4.9. Working with Red Hat OpenShift Pipelines in the web console You can use the Administrator or Developer perspective to create and modify Pipeline , PipelineRun , and Repository objects from the Pipelines page in the OpenShift Container Platform web console. You can also use the +Add page in the Developer perspective of the web console to create CI/CD pipelines for your software delivery process. 4.9.1. Working with Red Hat OpenShift Pipelines in the Developer perspective In the Developer perspective, you can access the following options for creating pipelines from the +Add page: Use the +Add Pipelines Pipeline builder option to create customized pipelines for your application. Use the +Add From Git option to create pipelines using pipeline templates and resources while creating an application. After you create the pipelines for your application, you can view and visually interact with the deployed pipelines in the Pipelines view. You can also use the Topology view to interact with the pipelines created using the From Git option. You must apply custom labels to pipelines created using the Pipeline builder to see them in the Topology view. Prerequisites You have access to an OpenShift Container Platform cluster, and have switched to the Developer perspective . You have the Pipelines Operator installed in your cluster. You are a cluster administrator or a user with create and edit permissions. You have created a project. 4.9.2. Constructing Pipelines using the Pipeline builder In the Developer perspective of the console, you can use the +Add Pipeline Pipeline builder option to: Configure pipelines using either the Pipeline builder or the YAML view . Construct a pipeline flow using existing tasks and cluster tasks. When you install the OpenShift Pipelines Operator, it adds reusable pipeline cluster tasks to your cluster. Specify the type of resources required for the pipeline run, and if required, add additional parameters to the pipeline. Reference these pipeline resources in each of the tasks in the pipeline as input and output resources. If required, reference any additional parameters added to the pipeline in the task. The parameters for a task are prepopulated based on the specifications of the task. Use the Operator-installed, reusable snippets and samples to create detailed pipelines. Procedure In the +Add view of the Developer perspective, click the Pipeline tile to see the Pipeline builder page. Configure the pipeline using either the Pipeline builder view or the YAML view . 
Note The Pipeline builder view supports a limited number of fields whereas the YAML view supports all available fields. Optionally, you can also use the Operator-installed, reusable snippets and samples to create detailed Pipelines. Figure 4.1. YAML view Configure your pipeline by using Pipeline builder : In the Name field, enter a unique name for the pipeline. In the Tasks section: Click Add task . Search for a task using the quick search field and select the required task from the displayed list. Click Add or Install and add . In this example, use the s2i-nodejs task. Note The search list contains all the Tekton Hub tasks and tasks available in the cluster. Also, if a task is already installed it will show Add to add the task whereas it will show Install and add to install and add the task. It will show Update and add when you add the same task with an updated version. To add sequential tasks to the pipeline: Click the plus icon to the right or left of the task click Add task . Search for a task using the quick search field and select the required task from the displayed list. Click Add or Install and add . Figure 4.2. Pipeline builder To add a final task: Click the Add finally task Click Add task . Search for a task using the quick search field and select the required task from the displayed list. Click Add or Install and add . In the Resources section, click Add Resources to specify the name and type of resources for the pipeline run. These resources are then used by the tasks in the pipeline as inputs and outputs. For this example: Add an input resource. In the Name field, enter Source , and then from the Resource Type drop-down list, select Git . Add an output resource. In the Name field, enter Img , and then from the Resource Type drop-down list, select Image . Note A red icon appears to the task if a resource is missing. Optional: The Parameters for a task are pre-populated based on the specifications of the task. If required, use the Add Parameters link in the Parameters section to add additional parameters. In the Workspaces section, click Add workspace and enter a unique workspace name in the Name field. You can add multiple workspaces to the pipeline. In the Tasks section, click the s2i-nodejs task to see the side panel with details for the task. In the task side panel, specify the resources and parameters for the s2i-nodejs task: If required, in the Parameters section, add more parameters to the default ones, by using the USD(params.<param-name>) syntax. In the Image section, enter Img as specified in the Resources section. Select a workspace from the source drop-down under Workspaces section. Add resources, parameters, and workspaces to the openshift-client task. Click Create to create and view the pipeline in the Pipeline Details page. Click the Actions drop-down menu then click Start , to see the Start Pipeline page. The Workspaces section lists the workspaces you created earlier. Use the respective drop-down to specify the volume source for your workspace. You have the following options: Empty Directory , Config Map , Secret , PersistentVolumeClaim , or VolumeClaimTemplate . 4.9.3. Creating OpenShift Pipelines along with applications To create pipelines along with applications, use the From Git option in the Add+ view of the Developer perspective. You can view all of your available pipelines and select the pipelines you want to use to create applications while importing your code or deploying an image. 
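If you prefer to review what the builder produces, the YAML view shows the underlying Pipeline object. The following is a rough sketch of the kind of definition you might see for the example above. It is an assumption for illustration only: the exact parameter, resource, and workspace names expected by the s2i-nodejs and openshift-client cluster tasks can vary between versions, so treat it as the general shape of the object rather than as a copy-paste definition.

Example: Sketch of a Pipeline definition corresponding to the builder example (illustrative)

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-app-pipeline
spec:
  workspaces:
    - name: my-workspace
  resources:
    - name: Source
      type: git
    - name: Img
      type: image
  tasks:
    - name: s2i-nodejs
      taskRef:
        name: s2i-nodejs
        kind: ClusterTask
      workspaces:
        - name: source
          workspace: my-workspace
      # Resource and parameter bindings for the task go here; the builder
      # prepopulates them from the task specification.
    - name: openshift-client
      taskRef:
        name: openshift-client
        kind: ClusterTask
      runAfter:
        - s2i-nodejs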
The Tekton Hub integration is enabled by default, and you can see tasks from the Tekton Hub that are supported by your cluster. Administrators can opt out of the Tekton Hub integration, and the Tekton Hub tasks are then no longer displayed.
You can also check whether a webhook URL exists for a generated pipeline. Default webhooks are added for the pipelines that are created using the +Add flow, and the URL is visible in the side panel of the selected resources in the Topology view.
For more information, see Creating applications using the Developer perspective .

4.9.4. Interacting with pipelines using the Developer perspective
The Pipelines view in the Developer perspective lists all the pipelines in a project, along with the following details:
The namespace in which the pipeline was created
The last pipeline run
The status of the tasks in the pipeline run
The status of the pipeline run
The creation time of the last pipeline run

Procedure
In the Pipelines view of the Developer perspective, select a project from the Project drop-down list to see the pipelines in that project.
Click the required pipeline to see the Pipeline details page. By default, the Details tab displays a visual representation of all the serial tasks, parallel tasks, finally tasks, and when expressions in the pipeline. The tasks and the finally tasks are listed in the lower right portion of the page. Click the listed Tasks and Finally tasks to view the task details.
Figure 4.3. Pipeline details
Optional: On the Pipeline details page, click the Metrics tab to see the following information about pipelines:
Pipeline Success Ratio
Number of Pipeline Runs
Pipeline Run Duration
Task Run Duration
You can use this information to improve the pipeline workflow and eliminate issues early in the pipeline lifecycle.
Optional: Click the YAML tab to edit the YAML file for the pipeline.
Optional: Click the Pipeline Runs tab to see the completed, running, or failed runs for the pipeline. The Pipeline Runs tab provides details about the pipeline run, the status of the task, and a link to debug failed pipeline runs. Use the Options menu to stop a running pipeline, to rerun a pipeline using the same parameters and resources as those of the previous pipeline execution, or to delete a pipeline run.
Click the required pipeline run to see the Pipeline Run details page. By default, the Details tab displays a visual representation of all the serial tasks, parallel tasks, finally tasks, and when expressions in the pipeline run. The results for successful runs are displayed under the Pipeline Run results pane at the bottom of the page. Additionally, you can see only those tasks from Tekton Hub that are supported by the cluster. While looking at a task, you can click the link beside it to jump to the task documentation.
Note
The Details section of the Pipeline Run Details page displays a Log Snippet of the failed pipeline run. Log Snippet provides a general error message and a snippet of the log. A link to the Logs section provides quick access to the details about the failed run.
On the Pipeline Run details page, click the Task Runs tab to see the completed, running, and failed runs for the task. The Task Runs tab provides information about the task run along with the links to its task and pod, and also the status and duration of the task run. Use the Options menu to delete a task run.
Click the required task run to see the Task Run details page.
The results for successful runs are displayed under the Task Run results pane at the bottom of the page. Note The Details section of the Task Run details page displays a Log Snippet of the failed task run. Log Snippet provides a general error message and a snippet of the log. A link to the Logs section provides quick access to the details about the failed task run. Click the Parameters tab to see the parameters defined in the pipeline. You can also add or edit additional parameters, as required. Click the Resources tab to see the resources defined in the pipeline. You can also add or edit additional resources, as required. 4.9.5. Starting pipelines from Pipelines view After you create a pipeline, you need to start it to execute the included tasks in the defined sequence. You can start a pipeline from the Pipelines view, the Pipeline Details page, or the Topology view. Procedure To start a pipeline using the Pipelines view: In the Pipelines view of the Developer perspective, click the Options menu adjoining a pipeline, and select Start . The Start Pipeline dialog box displays the Git Resources and the Image Resources based on the pipeline definition. Note For pipelines created using the From Git option, the Start Pipeline dialog box also displays an APP_NAME field in the Parameters section, and all the fields in the dialog box are prepopulated by the pipeline template. If you have resources in your namespace, the Git Resources and the Image Resources fields are prepopulated with those resources. If required, use the drop-downs to select or create the required resources and customize the pipeline run instance. Optional: Modify the Advanced Options to add the credentials that authenticate the specified private Git server or the image registry. Under Advanced Options , click Show Credentials Options and select Add Secret . In the Create Source Secret section, specify the following: A unique Secret Name for the secret. In the Designated provider to be authenticated section, specify the provider to be authenticated in the Access to field, and the base Server URL . Select the Authentication Type and provide the credentials: For the Authentication Type Image Registry Credentials , specify the Registry Server Address that you want to authenticate, and provide your credentials in the Username , Password , and Email fields. Select Add Credentials if you want to specify an additional Registry Server Address . For the Authentication Type Basic Authentication , specify the values for the UserName and Password or Token fields. For the Authentication Type SSH Keys , specify the value of the SSH Private Key field. Note For basic authentication and SSH authentication, you can use annotations such as: tekton.dev/git-0: https://github.com tekton.dev/git-1: https://gitlab.com . Select the check mark to add the secret. You can add multiple secrets based upon the number of resources in your pipeline. Click Start to start the pipeline. The Pipeline Run Details page displays the pipeline being executed. After the pipeline starts, the tasks and steps within each task are executed. You can: Hover over the tasks to see the time taken to execute each step. Click on a task to see the logs for each step in the task. Click the Logs tab to see the logs relating to the execution sequence of the tasks. You can also expand the pane and download the logs individually or in bulk, by using the relevant button. Click the Events tab to see the stream of events generated by a pipeline run. 
You can use the Task Runs , Logs , and Events tabs to assist in debugging a failed pipeline run or a failed task run. Figure 4.4. Pipeline run details 4.9.6. Starting pipelines from Topology view For pipelines created using the From Git option, you can use the Topology view to interact with pipelines after you start them: Note To see the pipelines created using Pipeline builder in the Topology view, customize the pipeline labels to link the pipeline with the application workload. Procedure Click Topology in the left navigation panel. Click the application to display Pipeline Runs in the side panel. In Pipeline Runs , click Start Last Run to start a new pipeline run with the same parameters and resources as the one. This option is disabled if a pipeline run has not been initiated. You can also start a pipeline run when you create it. Figure 4.5. Pipelines in Topology view In the Topology page, hover to the left of the application to see the status of its pipeline run. After a pipeline is added, a bottom left icon indicates that there is an associated pipeline. 4.9.7. Interacting with pipelines from Topology view The side panel of the application node in the Topology page displays the status of a pipeline run and you can interact with it. If a pipeline run does not start automatically, the side panel displays a message that the pipeline cannot be automatically started, hence it would need to be started manually. If a pipeline is created but the user has not started the pipeline, its status is not started. When the user clicks the Not started status icon, the start dialog box opens in the Topology view. If the pipeline has no build or build config, the Builds section is not visible. If there is a pipeline and build config, the Builds section is visible. The side panel displays a Log Snippet when a pipeline run fails on a specific task run. You can view the Log Snippet in the Pipeline Runs section, under the Resources tab. It provides a general error message and a snippet of the log. A link to the Logs section provides quick access to the details about the failed run. 4.9.8. Editing Pipelines You can edit the Pipelines in your cluster using the Developer perspective of the web console: Procedure In the Pipelines view of the Developer perspective, select the Pipeline you want to edit to see the details of the Pipeline. In the Pipeline Details page, click Actions and select Edit Pipeline . On the Pipeline builder page, you can perform the following tasks: Add additional Tasks, parameters, or resources to the Pipeline. Click the Task you want to modify to see the Task details in the side panel and modify the required Task details, such as the display name, parameters, and resources. Alternatively, to delete the Task, click the Task, and in the side panel, click Actions and select Remove Task . Click Save to save the modified Pipeline. 4.9.9. Deleting Pipelines You can delete the Pipelines in your cluster using the Developer perspective of the web console. Procedure In the Pipelines view of the Developer perspective, click the Options menu adjoining a Pipeline, and select Delete Pipeline . In the Delete Pipeline confirmation prompt, click Delete to confirm the deletion. 4.9.9.1. Additional resources Using Tekton Hub with Pipelines 4.9.10. Creating pipeline templates in the Administrator perspective As a cluster administrator, you can create pipeline templates that developers can reuse when they create a pipeline on the cluster. 
Prerequisites You have access to an OpenShift Container Platform cluster with cluster administrator permissions, and have switched to the Administrator perspective. You have installed the Pipelines Operator in your cluster. Procedure Navigate to the Pipelines page to view existing pipeline templates. Click the icon to go to the Import YAML page. Add the YAML for your pipeline template. The template must include the following information: apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: # ... namespace: openshift 1 labels: pipeline.openshift.io/runtime: <runtime> 2 pipeline.openshift.io/type: <pipeline-type> 3 # ... 1 The template must be created in the openshift namespace. 2 The template must contain the pipeline.openshift.io/runtime label. The accepted runtime values for this label are nodejs , golang , dotnet , java , php , ruby , perl , python , nginx , and httpd . 3 The template must contain the pipeline.openshift.io/type: label. The accepted type values for this label are openshift , knative , and kubernetes . Click Create . After the pipeline has been created, you are taken to the Pipeline details page, where you can view information about or edit your pipeline. 4.10. Customizing configurations in the TektonConfig custom resource In Red Hat OpenShift Pipelines, you can customize the following configurations by using the TektonConfig custom resource (CR): Configuring the Red Hat OpenShift Pipelines control plane Changing the default service account Disabling the service monitor Disabling cluster tasks and pipeline templates Disabling the integration of Tekton Hub Disabling the automatic creation of RBAC resources Pruning of task runs and pipeline runs 4.10.1. Prerequisites You have installed the Red Hat OpenShift Pipelines Operator. 4.10.2. Configuring the Red Hat OpenShift Pipelines control plane You can customize the Pipelines control plane by editing the configuration fields in the TektonConfig custom resource (CR). The Red Hat OpenShift Pipelines Operator automatically adds the configuration fields with their default values so that you can use the Pipelines control plane. Procedure In the Administrator perspective of the web console, navigate to Administration CustomResourceDefinitions . Use the Search by name box to search for the tektonconfigs.operator.tekton.dev custom resource definition (CRD). Click TektonConfig to see the CRD details page. Click the Instances tab. Click the config instance to see the TektonConfig CR details. Click the YAML tab. Edit the TektonConfig YAML file based on your requirements. Example of TektonConfig CR with default values apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: running-in-environment-with-injected-sidecars: true metrics.taskrun.duration-type: histogram metrics.pipelinerun.duration-type: histogram await-sidecar-readiness: true params: - name: enableMetrics value: 'true' default-service-account: pipeline require-git-ssh-secret-known-hosts: false enable-tekton-oci-bundles: false metrics.taskrun.level: task metrics.pipelinerun.level: pipeline embedded-status: both enable-api-fields: stable enable-provenance-in-status: false enable-custom-tasks: true disable-creds-init: false disable-affinity-assistant: true 4.10.2.1. 
Modifiable fields with default values The following list includes all modifiable fields with their default values in the TektonConfig CR: running-in-environment-with-injected-sidecars (default: true ): Set this field to false if pipelines run in a cluster that does not use injected sidecars, such as Istio. Setting it to false decreases the time a pipeline takes for a task run to start. Note For clusters that use injected sidecars, setting this field to false can lead to an unexpected behavior. await-sidecar-readiness (default: true ): Set this field to false to stop Pipelines from waiting for TaskRun sidecar containers to run before it begins to operate. This allows tasks to be run in environments that do not support the downwardAPI volume type. default-service-account (default: pipeline ): This field contains the default service account name to use for the TaskRun and PipelineRun resources, if none is specified. require-git-ssh-secret-known-hosts (default: false ): Setting this field to true requires that any Git SSH secret must include the known_hosts field. For more information about configuring Git SSH secrets, see Configuring SSH authentication for Git in the Additional resources section. enable-tekton-oci-bundles (default: false ): Set this field to true to enable the use of an experimental alpha feature named Tekton OCI bundle. embedded-status (default: both ): This field has three acceptable values: full : Enables full embedding of Run and TaskRun statuses in the PipelineRun status minimal : Populates the ChildReferences field with information, such as name, kind, and API version for each run and task run in the`PipelineRun` status both : Applies both, full and minimal values Note The embedded-status field is deprecated and will be removed in a future release. In addition, the pipeline default embedded status will be changed to minimal . enable-api-fields (default: stable ): Setting this field determines which features are enabled. Acceptable value is stable , beta , or alpha . Note Red Hat OpenShift Pipelines does not support the alpha value. enable-provenance-in-status (default: false ): Set this field to true to enable populating the provenance field in TaskRun and PipelineRun statuses. The provenance field contains metadata about resources used in the task run and pipeline run, such as the source from where a remote task or pipeline definition was fetched. enable-custom-tasks (default: true ): Set this field to false to disable the use of custom tasks in pipelines. disable-creds-init (default: false ): Set this field to true to prevent Pipelines from scanning attached service accounts and injecting any credentials into your steps. disable-affinity-assistant (default: true ): Set this field to false to enable affinity assistant for each TaskRun resource sharing a persistent volume claim workspace. Metrics options You can modify the default values of the following metrics fields in the TektonConfig CR: metrics.taskrun.duration-type and metrics.pipelinerun.duration-type (default: histogram ): Setting these fields determines the duration type for a task or pipeline run. Acceptable value is gauge or histogram . metrics.taskrun.level (default: task ): This field determines the level of the task run metrics. Acceptable value is taskrun , task , or namespace . metrics.pipelinerun.level (default: pipeline ): This field determines the level of the pipeline run metrics. Acceptable value is pipelinerun , pipeline , or namespace . 4.10.2.2. 
Optional configuration fields The following fields do not have a default value, and are considered only if you configure them. By default, the Operator does not add and configure these fields in the TektonConfig custom resource (CR). default-timeout-minutes : This field sets the default timeout for the TaskRun and PipelineRun resources, if none is specified when creating them. If a task run or pipeline run takes more time than the set number of minutes for its execution, then the task run or pipeline run is timed out and cancelled. For example, default-timeout-minutes: 60 sets 60 minutes as default. default-managed-by-label-value : This field contains the default value given to the app.kubernetes.io/managed-by label that is applied to all TaskRun pods, if none is specified. For example, default-managed-by-label-value: tekton-pipelines . default-pod-template : This field sets the default TaskRun and PipelineRun pod templates, if none is specified. default-cloud-events-sink : This field sets the default CloudEvents sink that is used for the TaskRun and PipelineRun resources, if none is specified. default-task-run-workspace-binding : This field contains the default workspace configuration for the workspaces that a Task resource declares, but a TaskRun resource does not explicitly declare. default-affinity-assistant-pod-template : This field sets the default PipelineRun pod template that is used for affinity assistant pods, if none is specified. default-max-matrix-combinations-count : This field contains the default maximum number of combinations generated from a matrix, if none is specified. 4.10.3. Changing the default service account for Pipelines You can change the default service account for Pipelines by editing the default-service-account field in the .spec.pipeline and .spec.trigger specifications. The default service account name is pipeline . Example apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: default-service-account: pipeline trigger: default-service-account: pipeline enable-api-fields: stable 4.10.4. Disabling the service monitor You can disable the service monitor, which is part of Pipelines, to expose the telemetry data. To disable the service monitor, set the enableMetrics parameter to false in the .spec.pipeline specification of the TektonConfig custom resource (CR): Example apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: params: - name: enableMetrics value: 'false' 4.10.5. Disabling cluster tasks and pipeline templates By default, the TektonAddon custom resource (CR) installs clusterTasks and pipelineTemplates resources along with Pipelines on the cluster. You can disable installation of the clusterTasks and pipelineTemplates resources by setting the parameter value to false in the .spec.addon specification. In addition, you can disable the communityClusterTasks parameter. Example apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: params: - name: clusterTasks value: 'false' - name: pipelineTemplates value: 'false' - name: communityClusterTasks value: 'true' 4.10.6. Disabling the integration of Tekton Hub You can disable the integration of Tekton Hub in the web console Developer perspective by setting the enable-devconsole-integration parameter to false in the TektonConfig custom resource (CR). 
Example of disabling Tekton Hub apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: hub: params: - name: enable-devconsole-integration value: false 4.10.7. Disabling the automatic creation of RBAC resources The default installation of the Red Hat OpenShift Pipelines Operator creates multiple role-based access control (RBAC) resources for all namespaces in the cluster, except the namespaces matching the ^(openshift|kube)-* regular expression pattern. Among these RBAC resources, the pipelines-scc-rolebinding security context constraint (SCC) role binding resource is a potential security issue, because the associated pipelines-scc SCC has the RunAsAny privilege. To disable the automatic creation of cluster-wide RBAC resources after the Red Hat OpenShift Pipelines Operator is installed, cluster administrators can set the createRbacResource parameter to false in the cluster-level TektonConfig custom resource (CR). Example TektonConfig CR apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: params: - name: createRbacResource value: "false" ... Warning As a cluster administrator or an user with appropriate privileges, when you disable the automatic creation of RBAC resources for all namespaces, the default ClusterTask resource does not work. For the ClusterTask resource to function, you must create the RBAC resources manually for each intended namespace. 4.10.8. Automatic pruning of task runs and pipeline runs Stale TaskRun and PipelineRun objects and their executed instances occupy physical resources that can be used for active runs. For optimal utilization of these resources, Red Hat OpenShift Pipelines provides annotations that cluster administrators can use to automatically prune the unused objects and their instances in various namespaces. Note Configuring automatic pruning by specifying annotations affects the entire namespace. You cannot selectively auto-prune an individual task run or pipeline run in a namespace. 4.10.8.1. Annotations for automatically pruning task runs and pipeline runs To automatically prune task runs and pipeline runs in a namespace, you can set the following annotations in the namespace: operator.tekton.dev/prune.schedule : If the value of this annotation is different from the value specified in the TektonConfig custom resource definition, a new cron job in that namespace is created. operator.tekton.dev/prune.skip : When set to true , the namespace for which it is configured is not pruned. operator.tekton.dev/prune.resources : This annotation accepts a comma-separated list of resources. To prune a single resource such as a pipeline run, set this annotation to "pipelinerun" . To prune multiple resources, such as task run and pipeline run, set this annotation to "taskrun, pipelinerun" . operator.tekton.dev/prune.keep : Use this annotation to retain a resource without pruning. operator.tekton.dev/prune.keep-since : Use this annotation to retain resources based on their age. The value for this annotation must be equal to the age of the resource in minutes. For example, to retain resources which were created not more than five days ago, set keep-since to 7200 . Note The keep and keep-since annotations are mutually exclusive. For any resource, you must configure only one of them. operator.tekton.dev/prune.strategy : Set the value of this annotation to either keep or keep-since . 
For example, consider the following annotations that retain all task runs and pipeline runs created in the last five days, and deletes the older resources: Example of auto-pruning annotations ... annotations: operator.tekton.dev/prune.resources: "taskrun, pipelinerun" operator.tekton.dev/prune.keep-since: 7200 ... 4.10.9. Additional resources Configuring SSH authentication for Git Managing non-versioned and versioned cluster tasks Pruning objects to reclaim resources Creating pipeline templates in the Administrator perspective 4.11. Reducing resource consumption of OpenShift Pipelines If you use clusters in multi-tenant environments you must control the consumption of CPU, memory, and storage resources for each project and Kubernetes object. This helps prevent any one application from consuming too many resources and affecting other applications. To define the final resource limits that are set on the resulting pods, Red Hat OpenShift Pipelines use resource quota limits and limit ranges of the project in which they are executed. To restrict resource consumption in your project, you can: Set and manage resource quotas to limit the aggregate resource consumption. Use limit ranges to restrict resource consumption for specific objects, such as pods, images, image streams, and persistent volume claims. 4.11.1. Understanding resource consumption in pipelines Each task consists of a number of required steps to be executed in a particular order defined in the steps field of the Task resource. Every task runs as a pod, and each step runs as a container within that pod. Steps are executed one at a time. The pod that executes the task only requests enough resources to run a single container image (step) in the task at a time, and thus does not store resources for all the steps in the task. The Resources field in the steps spec specifies the limits for resource consumption. By default, the resource requests for the CPU, memory, and ephemeral storage are set to BestEffort (zero) values or to the minimums set through limit ranges in that project. Example configuration of resource requests and limits for a step spec: steps: - name: <step_name> resources: requests: memory: 2Gi cpu: 600m limits: memory: 4Gi cpu: 900m When the LimitRange parameter and the minimum values for container resource requests are specified in the project in which the pipeline and task runs are executed, Red Hat OpenShift Pipelines looks at all the LimitRange values in the project and uses the minimum values instead of zero. Example configuration of limit range parameters at a project level apiVersion: v1 kind: LimitRange metadata: name: <limit_container_resource> spec: limits: - max: cpu: "600m" memory: "2Gi" min: cpu: "200m" memory: "100Mi" default: cpu: "500m" memory: "800Mi" defaultRequest: cpu: "100m" memory: "100Mi" type: Container ... 4.11.2. Mitigating extra resource consumption in pipelines When you have resource limits set on the containers in your pod, OpenShift Container Platform sums up the resource limits requested as all containers run simultaneously. To consume the minimum amount of resources needed to execute one step at a time in the invoked task, Red Hat OpenShift Pipelines requests the maximum CPU, memory, and ephemeral storage as specified in the step that requires the most amount of resources. This ensures that the resource requirements of all the steps are met. Requests other than the maximum values are set to zero. However, this behavior can lead to higher resource usage than required. 
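To make the interaction between step-level requests and the project limit range concrete, consider the following sketch of a two-step task in which only the first step declares resources. Under the limit range shown above, the second step does not fall back to zero (BestEffort) values; instead, the minimum and default request values of the limit range apply. The task name, step names, and image are hypothetical.

Example: Task with one step that declares resources and one that does not (illustrative)

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: resource-demo
spec:
  steps:
    - name: build
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "building"
      resources:
        requests:
          memory: 2Gi
          cpu: 600m
        limits:
          memory: 4Gi
          cpu: 900m
    - name: report
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "reporting"
      # No resources are declared here, so the request values come from the
      # limit range of the project instead of being set to zero.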
If you use resource quotas, this could also lead to unschedulable pods. For example, consider a task with two steps that uses scripts, and that does not define any resource limits and requests. The resulting pod has two init containers (one for entrypoint copy, the other for writing scripts) and two containers, one for each step. OpenShift Container Platform uses the limit range set up for the project to compute required resource requests and limits. For this example, set the following limit range in the project: apiVersion: v1 kind: LimitRange metadata: name: mem-min-max-demo-lr spec: limits: - max: memory: 1Gi min: memory: 500Mi type: Container In this scenario, each init container uses a request memory of 1Gi (the max limit of the limit range), and each container uses a request memory of 500Mi. Thus, the total memory request for the pod is 2Gi. If the same limit range is used with a task of ten steps, the final memory request is 5Gi, which is higher than what each step actually needs, that is 500Mi (since each step runs after the other). Thus, to reduce resource consumption of resources, you can: Reduce the number of steps in a given task by grouping different steps into one bigger step, using the script feature, and the same image. This reduces the minimum requested resource. Distribute steps that are relatively independent of each other and can run on their own to multiple tasks instead of a single task. This lowers the number of steps in each task, making the request for each task smaller, and the scheduler can then run them when the resources are available. 4.11.3. Additional resources Setting compute resource quota for OpenShift Pipelines Resource quotas per project Restricting resource consumption using limit ranges Resource requests and limits in Kubernetes 4.12. Setting compute resource quota for OpenShift Pipelines A ResourceQuota object in Red Hat OpenShift Pipelines controls the total resource consumption per namespace. You can use it to limit the quantity of objects created in a namespace, based on the type of the object. In addition, you can specify a compute resource quota to restrict the total amount of compute resources consumed in a namespace. However, you might want to limit the amount of compute resources consumed by pods resulting from a pipeline run, rather than setting quotas for the entire namespace. Currently, Red Hat OpenShift Pipelines does not enable you to directly specify the compute resource quota for a pipeline. 4.12.1. Alternative approaches for limiting compute resource consumption in OpenShift Pipelines To attain some degree of control over the usage of compute resources by a pipeline, consider the following alternative approaches: Set resource requests and limits for each step in a task. Example: Set resource requests and limits for each step in a task. ... spec: steps: - name: step-with-limts resources: requests: memory: 1Gi cpu: 500m limits: memory: 2Gi cpu: 800m ... Set resource limits by specifying values for the LimitRange object. For more information on LimitRange , refer to Restrict resource consumption with limit ranges . Reduce pipeline resource consumption . Set and manage resource quotas per project . Ideally, the compute resource quota for a pipeline should be same as the total amount of compute resources consumed by the concurrently running pods in a pipeline run. However, the pods running the tasks consume compute resources based on the use case. 
For example, a Maven build task might require different compute resources for different applications that it builds. As a result, you cannot predetermine the compute resource quotas for tasks in a generic pipeline. For greater predictability and control over usage of compute resources, use customized pipelines for different applications. Note When using Red Hat OpenShift Pipelines in a namespace configured with a ResourceQuota object, the pods resulting from task runs and pipeline runs might fail with an error, such as: failed quota: <quota name> must specify cpu, memory . To avoid this error, do any one of the following: (Recommended) Specify a limit range for the namespace. Explicitly define requests and limits for all containers. For more information, refer to the issue and the resolution . If your use case is not addressed by these approaches, you can implement a workaround by using a resource quota for a priority class. 4.12.2. Specifying pipelines resource quota using priority class A PriorityClass object maps priority class names to the integer values that indicates their relative priorities. Higher values increase the priority of a class. After you create a priority class, you can create pods that specify the priority class name in their specifications. In addition, you can control a pod's consumption of system resources based on the pod's priority. Specifying resource quota for a pipeline is similar to setting a resource quota for the subset of pods created by a pipeline run. The following steps provide an example of the workaround by specifying resource quota based on priority class. Procedure Create a priority class for a pipeline. Example: Priority class for a pipeline apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: pipeline1-pc value: 1000000 description: "Priority class for pipeline1" Create a resource quota for a pipeline. Example: Resource quota for a pipeline apiVersion: v1 kind: ResourceQuota metadata: name: pipeline1-rq spec: hard: cpu: "1000" memory: 200Gi pods: "10" scopeSelector: matchExpressions: - operator : In scopeName: PriorityClass values: ["pipeline1-pc"] Verify the resource quota usage for the pipeline. Example: Verify resource quota usage for the pipeline USD oc describe quota Sample output Because pods are not running, the quota is unused. Create the pipelines and tasks. 
Example: YAML for the pipeline apiVersion: tekton.dev/v1alpha1 kind: Pipeline metadata: name: maven-build spec: workspaces: - name: local-maven-repo resources: - name: app-git type: git tasks: - name: build taskRef: name: mvn resources: inputs: - name: source resource: app-git params: - name: GOALS value: ["package"] workspaces: - name: maven-repo workspace: local-maven-repo - name: int-test taskRef: name: mvn runAfter: ["build"] resources: inputs: - name: source resource: app-git params: - name: GOALS value: ["verify"] workspaces: - name: maven-repo workspace: local-maven-repo - name: gen-report taskRef: name: mvn runAfter: ["build"] resources: inputs: - name: source resource: app-git params: - name: GOALS value: ["site"] workspaces: - name: maven-repo workspace: local-maven-repo Example: YAML for a task in the pipeline apiVersion: tekton.dev/v1alpha1 kind: Task metadata: name: mvn spec: workspaces: - name: maven-repo inputs: params: - name: GOALS description: The Maven goals to run type: array default: ["package"] resources: - name: source type: git steps: - name: mvn image: gcr.io/cloud-builders/mvn workingDir: /workspace/source command: ["/usr/bin/mvn"] args: - -Dmaven.repo.local=USD(workspaces.maven-repo.path) - "USD(inputs.params.GOALS)" priorityClassName: pipeline1-pc Note Ensure that all tasks in the pipeline belongs to the same priority class. Create and start the pipeline run. Example: YAML for a pipeline run apiVersion: tekton.dev/v1alpha1 kind: PipelineRun metadata: generateName: petclinic-run- spec: pipelineRef: name: maven-build resources: - name: app-git resourceSpec: type: git params: - name: url value: https://github.com/spring-projects/spring-petclinic After the pods are created, verify the resource quota usage for the pipeline run. Example: Verify resource quota usage for the pipeline USD oc describe quota Sample output The output indicates that you can manage the combined resource quota for all concurrent running pods belonging to a priority class, by specifying the resource quota per priority class. 4.12.3. Additional resources Resource quotas in Kubernetes Limit ranges in Kubernetes Resource requests and limits in Kubernetes 4.13. Using pods in a privileged security context The default configuration of OpenShift Pipelines 1.3.x and later versions does not allow you to run pods with privileged security context, if the pods result from pipeline run or task run. For such pods, the default service account is pipeline , and the security context constraint (SCC) associated with the pipeline service account is pipelines-scc . The pipelines-scc SCC is similar to the anyuid SCC, but with minor differences as defined in the YAML file for the SCC of pipelines: Example pipelines-scc.yaml snippet apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints ... allowedCapabilities: - SETFCAP ... fsGroup: type: MustRunAs ... In addition, the Buildah cluster task, shipped as part of the OpenShift Pipelines, uses vfs as the default storage driver. 4.13.1. Running pipeline run and task run pods with privileged security context Procedure To run a pod (resulting from pipeline run or task run) with the privileged security context, do the following modifications: Configure the associated user account or service account to have an explicit SCC. 
You can perform the configuration using any of the following methods: Run the following command: USD oc adm policy add-scc-to-user <scc-name> -z <service-account-name> Alternatively, modify the YAML files for RoleBinding , and Role or ClusterRole : Example RoleBinding object apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: service-account-name 1 namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-clusterrole 2 subjects: - kind: ServiceAccount name: pipeline namespace: default 1 Substitute with an appropriate service account name. 2 Substitute with an appropriate cluster role based on the role binding you use. Example ClusterRole object apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-clusterrole 1 rules: - apiGroups: - security.openshift.io resourceNames: - nonroot resources: - securitycontextconstraints verbs: - use 1 Substitute with an appropriate cluster role based on the role binding you use. Note As a best practice, create a copy of the default YAML files and make changes in the duplicate file. If you do not use the vfs storage driver, configure the service account associated with the task run or the pipeline run to have a privileged SCC, and set the security context as privileged: true . 4.13.2. Running pipeline run and task run by using a custom SCC and a custom service account When using the pipelines-scc security context constraint (SCC) associated with the default pipelines service account, the pipeline run and task run pods may face timeouts. This happens because in the default pipelines-scc SCC, the fsGroup.type parameter is set to MustRunAs . Note For more information about pod timeouts, see BZ#1995779 . To avoid pod timeouts, you can create a custom SCC with the fsGroup.type parameter set to RunAsAny , and associate it with a custom service account. Note As a best practice, use a custom SCC and a custom service account for pipeline runs and task runs. This approach allows greater flexibility and does not break the runs when the defaults are modified during an upgrade. Procedure Define a custom SCC with the fsGroup.type parameter set to RunAsAny : Example: Custom SCC apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: my-scc is a close replica of anyuid scc. pipelines-scc has fsGroup - RunAsAny. 
name: my-scc allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null defaultAddCapabilities: null fsGroup: type: RunAsAny groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD runAsUser: type: RunAsAny seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret Create the custom SCC: Example: Create the my-scc SCC USD oc create -f my-scc.yaml Create a custom service account: Example: Create a fsgroup-runasany service account USD oc create serviceaccount fsgroup-runasany Associate the custom SCC with the custom service account: Example: Associate the my-scc SCC with the fsgroup-runasany service account USD oc adm policy add-scc-to-user my-scc -z fsgroup-runasany If you want to use the custom service account for privileged tasks, you can associate the privileged SCC with the custom service account by running the following command: Example: Associate the privileged SCC with the fsgroup-runasany service account USD oc adm policy add-scc-to-user privileged -z fsgroup-runasany Use the custom service account in the pipeline run and task run: Example: Pipeline run YAML with fsgroup-runasany custom service account apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: <pipeline-run-name> spec: pipelineRef: name: <pipeline-cluster-task-name> serviceAccountName: 'fsgroup-runasany' Example: Task run YAML with fsgroup-runasany custom service account apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: <task-run-name> spec: taskRef: name: <cluster-task-name> serviceAccountName: 'fsgroup-runasany' 4.13.3. Additional resources For information on managing SCCs, refer to Managing security context constraints . 4.14. Securing webhooks with event listeners As an administrator, you can secure webhooks with event listeners. After creating a namespace, you enable HTTPS for the Eventlistener resource by adding the operator.tekton.dev/enable-annotation=enabled label to the namespace. Then, you create a Trigger resource and a secured route using the re-encrypted TLS termination. Triggers in Red Hat OpenShift Pipelines support insecure HTTP and secure HTTPS connections to the Eventlistener resource. HTTPS secures connections within and outside the cluster. Red Hat OpenShift Pipelines runs a tekton-operator-proxy-webhook pod that watches for the labels in the namespace. When you add the label to the namespace, the webhook sets the service.beta.openshift.io/serving-cert-secret-name=<secret_name> annotation on the EventListener object. This, in turn, creates secrets and the required certificates. service.beta.openshift.io/serving-cert-secret-name=<secret_name> In addition, you can mount the created secret into the Eventlistener pod to secure the request. 4.14.1. Providing secure connection with OpenShift routes To create a route with the re-encrypted TLS termination, run: USD oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname> Alternatively, you can create a re-encrypted TLS termination YAML file to create a secure route. 
Example re-encrypt TLS termination YAML to create a secure route apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: <hostname> to: kind: Service name: frontend 2 tls: termination: reencrypt 3 key: [as in edge termination] certificate: [as in edge termination] caCertificate: [as in edge termination] destinationCACertificate: |- 4 -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- 1 2 The name of the object, which is limited to only 63 characters. 3 The termination field is set to reencrypt . This is the only required TLS field. 4 This is required for re-encryption. The destinationCACertificate field specifies a CA certificate to validate the endpoint certificate, thus securing the connection from the router to the destination pods. You can omit this field in either of the following scenarios: The service uses a service signing certificate. The administrator specifies a default CA certificate for the router, and the service has a certificate signed by that CA. You can run the oc create route reencrypt --help command to display more options. 4.14.2. Creating a sample EventListener resource using a secure HTTPS connection This section uses the pipelines-tutorial example to demonstrate creation of a sample EventListener resource using a secure HTTPS connection. Procedure Create the TriggerBinding resource from the YAML file available in the pipelines-tutorial repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml Create the TriggerTemplate resource from the YAML file available in the pipelines-tutorial repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml Create the Trigger resource directly from the pipelines-tutorial repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml Create an EventListener resource using a secure HTTPS connection: Add a label to enable the secure HTTPS connection to the Eventlistener resource: USD oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled Create the EventListener resource from the YAML file available in the pipelines-tutorial repository: USD oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml Create a route with the re-encrypted TLS termination: USD oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname> 4.15. Authenticating pipelines using git secret A Git secret consists of credentials to securely interact with a Git repository, and is often used to automate authentication. In Red Hat OpenShift Pipelines, you can use Git secrets to authenticate pipeline runs and task runs that interact with a Git repository during execution. A pipeline run or a task run gains access to the secrets through the associated service account. Pipelines support the use of Git secrets as annotations (key-value pairs) for basic authentication and SSH-based authentication. 4.15.1. Credential selection A pipeline run or task run might require multiple authentications to access different Git repositories. Annotate each secret with the domains where Pipelines can use its credentials. A credential annotation key for Git secrets must begin with tekton.dev/git- , and its value is the URL of the host for which you want Pipelines to use that credential. 
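If you prefer the command line to writing the secret YAML by hand, you can create and annotate such a secret directly. This is a minimal sketch, and the secret name my-git-basic-auth is only illustrative:
USD oc create secret generic my-git-basic-auth --type=kubernetes.io/basic-auth --from-literal=username=<username> --from-literal=password=<personal_access_token>
USD oc annotate secret my-git-basic-auth tekton.dev/git-0=https://github.com
The result is equivalent to the YAML-based examples that follow.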
In the following example, Pipelines uses a basic-auth secret, which relies on a username and password, to access repositories at github.com and gitlab.com . Example: Multiple credentials for basic authentication apiVersion: v1 kind: Secret metadata: annotations: tekton.dev/git-0: github.com tekton.dev/git-1: gitlab.com type: kubernetes.io/basic-auth stringData: username: <username> 1 password: <password> 2 1 Username for the repository 2 Password or personal access token for the repository You can also use an ssh-auth secret (private key) to access a Git repository. Example: Private key for SSH based authentication apiVersion: v1 kind: Secret metadata: annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 1 1 The content of the SSH private key file. 4.15.2. Configuring basic authentication for Git For a pipeline to retrieve resources from password-protected repositories, you must configure the basic authentication for that pipeline. To configure basic authentication for a pipeline, update the secret.yaml , serviceaccount.yaml , and run.yaml files with the credentials from the Git secret for the specified repository. When you complete this process, Pipelines can use that information to retrieve the specified pipeline resources. Note For GitHub, authentication using plain password is deprecated. Instead, use a personal access token . Procedure In the secret.yaml file, specify the username and password or GitHub personal access token to access the target Git repository. apiVersion: v1 kind: Secret metadata: name: basic-user-pass 1 annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/basic-auth stringData: username: <username> 2 password: <password> 3 1 Name of the secret. In this example, basic-user-pass . 2 Username for the Git repository. 3 Password for the Git repository. In the serviceaccount.yaml file, associate the secret with the appropriate service account. apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: basic-user-pass 2 1 Name of the service account. In this example, build-bot . 2 Name of the secret. In this example, basic-user-pass . In the run.yaml file, associate the service account with a task run or a pipeline run. Associate the service account with a task run: apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-task-run-2 1 spec: serviceAccountName: build-bot 2 taskRef: name: build-push 3 1 Name of the task run. In this example, build-push-task-run-2 . 2 Name of the service account. In this example, build-bot . 3 Name of the task. In this example, build-push . Associate the service account with a PipelineRun resource: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3 1 Name of the pipeline run. In this example, demo-pipeline . 2 Name of the service account. In this example, build-bot . 3 Name of the pipeline. In this example, demo-pipeline . Apply the changes. USD oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml 4.15.3. Configuring SSH authentication for Git For a pipeline to retrieve resources from repositories configured with SSH keys, you must configure the SSH-based authentication for that pipeline. To configure SSH-based authentication for a pipeline, update the secret.yaml , serviceaccount.yaml , and run.yaml files with the credentials from the SSH private key for the specified repository. 
When you complete this process, Pipelines can use that information to retrieve the specified pipeline resources. Note Consider using SSH-based authentication rather than basic authentication. Procedure Generate an SSH private key , or copy an existing private key, which is usually available in the ~/.ssh/id_rsa file. In the secret.yaml file, set the value of ssh-privatekey to the content of the SSH private key file, and set the value of known_hosts to the content of the known hosts file. apiVersion: v1 kind: Secret metadata: name: ssh-key 1 annotations: tekton.dev/git-0: github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 2 known_hosts: 3 1 Name of the secret containing the SSH private key. In this example, ssh-key . 2 The content of the SSH private key file. 3 The content of the known hosts file. Caution If you omit the known_hosts value, Pipelines accepts the public key of any server. Optional: To specify a custom SSH port, add :<port number> to the end of the annotation value. For example, tekton.dev/git-0: github.com:2222 . In the serviceaccount.yaml file, associate the ssh-key secret with the build-bot service account. apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: ssh-key 2 1 Name of the service account. In this example, build-bot . 2 Name of the secret containing the SSH private key. In this example, ssh-key . In the run.yaml file, associate the service account with a task run or a pipeline run. Associate the service account with a task run: apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-task-run-2 1 spec: serviceAccountName: build-bot 2 taskRef: name: build-push 3 1 Name of the task run. In this example, build-push-task-run-2 . 2 Name of the service account. In this example, build-bot . 3 Name of the task. In this example, build-push . Associate the service account with a pipeline run: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3 1 Name of the pipeline run. In this example, demo-pipeline . 2 Name of the service account. In this example, build-bot . 3 Name of the pipeline. In this example, demo-pipeline . Apply the changes. USD oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml 4.15.4. Using SSH authentication in git type tasks When invoking Git commands, you can use SSH authentication directly in the steps of a task. SSH authentication ignores the USDHOME variable and only uses the user's home directory specified in the /etc/passwd file. So each step in a task must symlink the /tekton/home/.ssh directory to the home directory of the associated user. However, explicit symlinks are not necessary when you use a pipeline resource of the git type, or the git-clone task available in the Tekton catalog. As an example of using SSH authentication in git type tasks, refer to authenticating-git-commands.yaml . 4.15.5. Using secrets as a non-root user You might need to use secrets as a non-root user in certain scenarios, such as: The users and groups that the containers use to execute runs are randomized by the platform. The steps in a task define a non-root security context. A task specifies a global non-root security context, which applies to all steps in a task. In such scenarios, consider the following aspects of executing task runs and pipeline runs as a non-root user: SSH authentication for Git requires the user to have a valid home directory configured in the /etc/passwd file.
Specifying a UID that has no valid home directory results in authentication failure. SSH authentication ignores the USDHOME environment variable. So you must copy or symlink the appropriate secret files from the USDHOME directory defined by Pipelines ( /tekton/home ), to the non-root user's valid home directory. In addition, to configure SSH authentication in a non-root security context, refer to the example for authenticating git commands . 4.15.6. Limiting secret access to specific steps By default, the secrets for Pipelines are stored in the USDHOME/tekton/home directory, and are available for all the steps in a task. To limit a secret to specific steps, use the secret definition to specify a volume, and mount the volume in specific steps. 4.16. Using Tekton Chains for OpenShift Pipelines supply chain security Important Tekton Chains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Tekton Chains is a Kubernetes Custom Resource Definition (CRD) controller. You can use it to manage the supply chain security of the tasks and pipelines created using Red Hat OpenShift Pipelines. By default, Tekton Chains observes all task run executions in your OpenShift Container Platform cluster. When the task runs complete, Tekton Chains takes a snapshot of the task runs. It then converts the snapshot to one or more standard payload formats, and finally signs and stores all artifacts. To capture information about task runs, Tekton Chains uses the Result and PipelineResource objects. When the objects are unavailable, Tekton Chains uses the URLs and qualified digests of the OCI images. Note The PipelineResource object is deprecated and will be removed in a future release; for manual use, the Results object is recommended. 4.16.1. Key features You can sign task runs, task run results, and OCI registry images with cryptographic key types and services such as cosign . You can use attestation formats such as in-toto . You can securely store signatures and signed artifacts using an OCI repository as a storage backend. 4.16.2. Installing Tekton Chains using the Red Hat OpenShift Pipelines Operator Cluster administrators can use the TektonChain custom resource (CR) to install and manage Tekton Chains. Note Tekton Chains is an optional component of Red Hat OpenShift Pipelines. Currently, you cannot install it using the TektonConfig CR. Prerequisites Ensure that the Red Hat OpenShift Pipelines Operator is installed in the openshift-pipelines namespace on your cluster. Procedure Create the TektonChain CR for your OpenShift Container Platform cluster. apiVersion: operator.tekton.dev/v1alpha1 kind: TektonChain metadata: name: chain spec: targetNamespace: openshift-pipelines Apply the TektonChain CR. USD oc apply -f TektonChain.yaml 1 1 Substitute with the file name of the TektonChain CR. Check the status of the installation. USD oc get tektonchains.operator.tekton.dev 4.16.3. Configuring Tekton Chains Tekton Chains uses a ConfigMap object named chains-config in the openshift-pipelines namespace for configuration.
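Before you change any of these settings, you can review the current configuration. This is a simple read-only check using a standard oc query:
USD oc get configmap chains-config -n openshift-pipelines -o yaml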
To configure Tekton Chains, use the following example: Example: Configuring Tekton Chains USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.oci.storage": "", "artifacts.taskrun.format":"tekton", "artifacts.taskrun.storage": "tekton"}}' 1 1 Use a combination of supported key-value pairs in the JSON payload. 4.16.3.1. Supported keys for Tekton Chains configuration Cluster administrators can use various supported keys and values to configure specifications about task runs, OCI images, and storage. 4.16.3.1.1. Supported keys for task run Table 4.13. Chains configuration: Supported keys for task run Supported keys Description Supported values Default values artifacts.taskrun.format The format to store task run payloads. tekton , in-toto tekton artifacts.taskrun.storage The storage backend for task run signatures. You can specify multiple backends as a comma-separated list, such as "tekton,oci" . To disable this artifact, provide an empty string "" . tekton , oci tekton artifacts.taskrun.signer The signature backend to sign task run payloads. x509 x509 4.16.3.1.2. Supported keys for OCI Table 4.14. Chains configuration: Supported keys for OCI Supported keys Description Supported values Default values artifacts.oci.format The format to store OCI payloads. simplesigning simplesigning artifacts.oci.storage The storage backend for OCI signatures. You can specify multiple backends as a comma-separated list, such as "oci,tekton" . To disable the OCI artifact, provide an empty string "" . tekton , oci oci artifacts.oci.signer The signature backend to sign OCI payloads. x509 , cosign x509 4.16.3.1.3. Supported keys for storage Table 4.15. Chains configuration: Supported keys for storage Supported keys Description Supported values Default values artifacts.oci.repository The OCI repository to store OCI signatures. Currently, Chains supports only the internal OpenShift OCI registry; other popular options such as Quay are not supported. 4.16.4. Signing secrets in Tekton Chains Cluster administrators can generate a key pair and use Tekton Chains to sign artifacts using a Kubernetes secret. For Tekton Chains to work, a private key and a password for encrypted keys must exist as part of the signing-secrets Kubernetes secret, in the openshift-pipelines namespace. Currently, Tekton Chains supports the x509 and cosign signature schemes. Note Use only one of the supported signature schemes. 4.16.4.1. Signing using x509 To use the x509 signing scheme with Tekton Chains, store the x509.pem private key of the ed25519 or ecdsa type in the signing-secrets Kubernetes secret. Ensure that the key is stored as an unencrypted PKCS8 PEM file ( BEGIN PRIVATE KEY ). 4.16.4.2. Signing using cosign To use the cosign signing scheme with Tekton Chains: Install cosign . Generate the cosign.key and cosign.pub key pair. USD cosign generate-key-pair k8s://openshift-pipelines/signing-secrets Cosign prompts you for a password, and creates a Kubernetes secret. Store the encrypted cosign.key private key and the cosign.password decryption password in the signing-secrets Kubernetes secret. Ensure that the private key is stored as an encrypted PEM file of the ENCRYPTED COSIGN PRIVATE KEY type. 4.16.4.3.
Troubleshooting signing If the signing secrets are already populated, you might get the following error: Error from server (AlreadyExists): secrets "signing-secrets" already exists To resolve the error: Delete the secrets: USD oc delete secret signing-secrets -n openshift-pipelines Recreate the key pairs and store them in the secrets using your preferred signing scheme. 4.16.5. Authenticating to an OCI registry Before pushing signatures to an OCI registry, cluster administrators must configure Tekton Chains to authenticate with the registry. The Tekton Chains controller uses the same service account under which the task runs execute. To set up a service account with the necessary credentials for pushing signatures to an OCI registry, perform the following steps: Procedure Set the namespace and name of the Kubernetes service account. USD export NAMESPACE=<namespace> 1 USD export SERVICE_ACCOUNT_NAME=<service_account> 2 1 The namespace associated with the service account. 2 The name of the service account. Create a Kubernetes secret. USD oc create secret generic registry-credentials \ --from-file=.dockerconfigjson \ 1 --type=kubernetes.io/dockerconfigjson \ -n USDNAMESPACE 1 Substitute with the path to your Docker config file. Default path is ~/.docker/config.json . Give the service account access to the secret. USD oc patch serviceaccount USDSERVICE_ACCOUNT_NAME \ -p "{\"imagePullSecrets\": [{\"name\": \"registry-credentials\"}]}" -n USDNAMESPACE If you patch the default pipeline service account that Red Hat OpenShift Pipelines assigns to all task runs, the Red Hat OpenShift Pipelines Operator will override the service account. As a best practice, you can perform the following steps: Create a separate service account to assign to a user's task runs. USD oc create serviceaccount <service_account_name> Associate the service account with the task runs by setting the value of the serviceAccountName field in the task run template. apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-task-run-2 spec: serviceAccountName: build-bot 1 taskRef: name: build-push ... 1 Substitute with the name of the newly created service account. 4.16.5.1. Creating and verifying task run signatures without any additional authentication To verify signatures of task runs using Tekton Chains without any additional authentication, perform the following tasks: Create an encrypted x509 key pair and save it as a Kubernetes secret. Configure the Tekton Chains backend storage. Create a task run, sign it, and store the signature and the payload as annotations on the task run itself. Retrieve the signature and payload from the signed task run. Verify the signature of the task run. Prerequisites Ensure that the following are installed on the cluster: Red Hat OpenShift Pipelines Operator Tekton Chains Cosign Procedure Create an encrypted x509 key pair and save it as a Kubernetes secret: USD cosign generate-key-pair k8s://openshift-pipelines/signing-secrets Provide a password when prompted. Cosign stores the resulting private key as part of the signing-secrets Kubernetes secret in the openshift-pipelines namespace. In the Tekton Chains configuration, disable the OCI storage, and set the task run storage and format to tekton . USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.oci.storage": "", "artifacts.taskrun.format":"tekton", "artifacts.taskrun.storage": "tekton"}}' Restart the Tekton Chains controller to ensure that the modified configuration is applied.
USD oc delete po -n openshift-pipelines -l app=tekton-chains-controller Create a task run. USD oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1 taskrun.tekton.dev/build-push-run-output-image-qbjvh created 1 Substitute with the URI or file path pointing to your task run. Check the status of the steps, and wait until the process finishes. USD tkn tr describe --last [...truncated output...] NAME STATUS · create-dir-builtimage-9467f Completed · git-source-sourcerepo-p2sk8 Completed · build-and-push Completed · echo Completed · image-digest-exporter-xlkn7 Completed Retrieve the signature and payload from the object stored as base64 encoded annotations: USD export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}') USD tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/signature-taskrun-USDTASKRUN_UID}" > signature USD tkn tr describe --last -o jsonpath="{.metadata.annotations.chains\.tekton\.dev/payload-taskrun-USDTASKRUN_UID}" | base64 -d > payload Verify the signature. USD cosign verify-blob --key k8s://openshift-pipelines/signing-secrets --signature ./signature ./payload Verified OK 4.16.6. Using Tekton Chains to sign and verify image and provenance Cluster administrators can use Tekton Chains to sign and verify images and provenances, by performing the following tasks: Create an encrypted x509 key pair and save it as a Kubernetes secret. Set up authentication for the OCI registry to store images, image signatures, and signed image attestations. Configure Tekton Chains to generate and sign provenance. Create an image with Kaniko in a task run. Verify the signed image and the signed provenance. Prerequisites Ensure that the following are installed on the cluster: Red Hat OpenShift Pipelines Operator Tekton Chains Cosign Rekor jq Procedure Create an encrypted x509 key pair and save it as a Kubernetes secret: USD cosign generate-key-pair k8s://openshift-pipelines/signing-secrets Provide a password when prompted. Cosign stores the resulting private key as part of the signing-secrets Kubernetes secret in the openshift-pipelines namespace, and writes the public key to the cosign.pub local file. Configure authentication for the image registry. To configure the Tekton Chains controller for pushing signatures to an OCI registry, use the credentials associated with the service account of the task run. For detailed information, see the "Authenticating to an OCI registry" section. To configure authentication for a Kaniko task that builds and pushes an image to the registry, create a Kubernetes secret from the docker config.json file containing the required credentials. USD oc create secret generic <docker_config_secret_name> \ 1 --from-file <path_to_config.json> 2 1 Substitute with the name of the docker config secret. 2 Substitute with the path to the docker config.json file. Configure Tekton Chains by setting the artifacts.taskrun.format , artifacts.taskrun.storage , and transparency.enabled parameters in the chains-config object: USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.format": "in-toto"}}' USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"artifacts.taskrun.storage": "oci"}}' USD oc patch configmap chains-config -n openshift-pipelines -p='{"data":{"transparency.enabled": "true"}}' Start the Kaniko task. Apply the Kaniko task to the cluster.
USD oc apply -f examples/kaniko/kaniko.yaml 1 1 Substitute with the URI or file path to your Kaniko task. Set the appropriate environment variables. USD export REGISTRY=<url_of_registry> 1 USD export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2 1 Substitute with the URL of the registry where you want to push the image. 2 Substitute with the name of the secret in the docker config.json file. Start the Kaniko task. USD tkn task start --param IMAGE=USDREGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir="" --workspace name=dockerconfig,secret=USDDOCKERCONFIG_SECRET_NAME kaniko-chains Observe the logs of this task until all steps are complete. On successful authentication, the final image will be pushed to USDREGISTRY/kaniko-chains . Wait for a minute to allow Tekton Chains to generate the provenance and sign it, and then check the availability of the chains.tekton.dev/signed=true annotation on the task run. USD oc get tr <task_run_name> \ 1 -o json | jq -r .metadata.annotations { "chains.tekton.dev/signed": "true", ... } 1 Substitute with the name of the task run. Verify the image and the attestation. USD cosign verify --key cosign.pub USDREGISTRY/kaniko-chains USD cosign verify-attestation --key cosign.pub USDREGISTRY/kaniko-chains Find the provenance for the image in Rekor. Get the digest of the USDREGISTRY/kaniko-chains image. You can search for it in the task run, or pull the image to extract the digest. Search Rekor to find all entries that match the sha256 digest of the image. USD rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3 ... 1 Substitute with the sha256 digest of the image. 2 The first matching universally unique identifier (UUID). 3 The second matching UUID. The search result displays UUIDs of the matching entries. One of those UUIDs holds the attestation. Check the attestation. USD rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq 4.16.7. Additional resources Installing OpenShift Pipelines 4.17. Viewing pipeline logs using the OpenShift Logging Operator The logs generated by pipeline runs, task runs, and event listeners are stored in their respective pods. It is useful to review and analyze logs for troubleshooting and audits. However, retaining the pods indefinitely leads to unnecessary resource consumption and cluttered namespaces. To eliminate any dependency on the pods for viewing pipeline logs, you can use the OpenShift Elasticsearch Operator and the OpenShift Logging Operator. These Operators help you to view pipeline logs by using the Elasticsearch Kibana stack, even after you have deleted the pods that contained the logs. 4.17.1. Prerequisites Before trying to view pipeline logs in a Kibana dashboard, ensure the following: The steps are performed by a cluster administrator. Logs for pipeline runs and task runs are available. The OpenShift Elasticsearch Operator and the OpenShift Logging Operator are installed. 4.17.2. Viewing pipeline logs in Kibana To view pipeline logs in the Kibana web console: Procedure Log in to the OpenShift Container Platform web console as a cluster administrator. In the top right of the menu bar, click the grid icon Observability Logging . The Kibana web console is displayed. Create an index pattern: On the left navigation panel of the Kibana web console, click Management . Click Create index pattern . Under Step 1 of 2: Define index pattern Index pattern , enter a * pattern and click Next step .
Under Step 2 of 2: Configure settings Time filter field name , select @timestamp from the drop-down menu, and click Create index pattern . Add a filter: On the left navigation panel of the Kibana web console, click Discover . Click Add a filter + Edit Query DSL . Note For each of the example filters that follow, edit the query and click Save . The filters are applied one after another. Filter the containers related to pipelines: Example query to filter pipelines containers { "query": { "match": { "kubernetes.flat_labels": { "query": "app_kubernetes_io/managed-by=tekton-pipelines", "type": "phrase" } } } } Filter all containers that are not the place-tools container. As an illustration of using the graphical drop-down menus instead of editing the query DSL, consider the following approach: Figure 4.6. Example of filtering using the drop-down fields Filter pipelinerun in labels for highlighting: Example query to filter pipelinerun in labels for highlighting { "query": { "match": { "kubernetes.flat_labels": { "query": "tekton_dev/pipelineRun=", "type": "phrase" } } } } Filter pipeline in labels for highlighting: Example query to filter pipeline in labels for highlighting { "query": { "match": { "kubernetes.flat_labels": { "query": "tekton_dev/pipeline=", "type": "phrase" } } } } From the Available fields list, select the following fields: kubernetes.flat_labels message Ensure that the selected fields are displayed under the Selected fields list. The logs are displayed under the message field. Figure 4.7. Filtered messages 4.17.3. Additional resources Installing OpenShift Logging Viewing logs for a resource Viewing cluster logs by using Kibana 4.18. Building of container images using Buildah as a non-root user Running Pipelines as the root user on a container can expose the container processes and the host to other potentially malicious resources. You can reduce this type of exposure by running the workload as a specific non-root user in the container. To run builds of container images using Buildah as a non-root user, you can perform the following steps: Define a custom service account (SA) and security context constraint (SCC). Configure Buildah to use the build user with id 1000 . Start a task run with a custom config map, or integrate it with a pipeline run. 4.18.1. Configuring custom service account and security context constraint The default pipeline SA allows using a user id outside of the namespace range. To reduce dependency on the default SA, you can define a custom SA and SCC with the necessary cluster role and role bindings for the build user with user id 1000 . Important At this time, enabling the allowPrivilegeEscalation setting is required for Buildah to run successfully in the container. With this setting, Buildah can leverage SETUID and SETGID capabilities when running as a non-root user. Procedure Create a custom SA and SCC with the necessary cluster role and role bindings.
Example: Custom SA and SCC for user id 1000 apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-sa-userid-1000 1 --- kind: SecurityContextConstraints metadata: annotations: name: pipelines-scc-userid-1000 2 allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true 3 allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD - KILL runAsUser: 4 type: MustRunAs uid: 1000 seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-userid-1000-clusterrole 5 rules: - apiGroups: - security.openshift.io resourceNames: - pipelines-scc-userid-1000 resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: pipelines-scc-userid-1000-rolebinding 6 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-userid-1000-clusterrole subjects: - kind: ServiceAccount name: pipelines-sa-userid-1000 1 Define a custom SA. 2 Define a custom SCC created based on restricted privileges, with a modified runAsUser field. 3 At this time, enabling the allowPrivilegeEscalation setting is required for Buildah to run successfully in the container. With this setting, Buildah can leverage SETUID and SETGID capabilities when running as a non-root user. 4 Restrict any pod that gets attached with the custom SCC through the custom SA to run as user id 1000 . 5 Define a cluster role that uses the custom SCC. 6 Bind the cluster role that uses the custom SCC to the custom SA. 4.18.2. Configuring Buildah to use build user You can define a Buildah task to use the build user with user id 1000 . Procedure Create a copy of the buildah cluster task as an ordinary task. USD oc get clustertask buildah -o yaml | yq '. |= (del .metadata |= with_entries(select(.key == "name" )))' | yq '.kind="Task"' | yq '.metadata.name="buildah-as-user"' | oc create -f - Edit the copied buildah task. USD oc edit task buildah-as-user Example: Modified Buildah task with build user apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: buildah-as-user spec: description: >- Buildah task builds source into a container image and then pushes it to a container registry. Buildah Task builds source into a container image using Project Atomic's Buildah build tool. It uses Buildah's support for building from Dockerfiles, using its buildah bud command. This command executes the directives in the Dockerfile to assemble a container image, then pushes that image to a container registry. params: - name: IMAGE description: Reference of the image buildah will produce. - name: BUILDER_IMAGE description: The location of the buildah builder image. default: registry.redhat.io/rhel8/buildah@sha256:99cae35f40c7ec050fed3765b2b27e0b8bbea2aa2da7c16408e2ca13c60ff8ee - name: STORAGE_DRIVER description: Set buildah storage driver default: vfs - name: DOCKERFILE description: Path to the Dockerfile to build. default: ./Dockerfile - name: CONTEXT description: Path to the directory to use as context. default: .
- name: TLSVERIFY description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry) default: "true" - name: FORMAT description: The format of the built container, oci or docker default: "oci" - name: BUILD_EXTRA_ARGS description: Extra parameters passed for the build command when building images. default: "" - description: Extra parameters passed for the push command when pushing images. name: PUSH_EXTRA_ARGS type: string default: "" - description: Skip pushing the built image name: SKIP_PUSH type: string default: "false" results: - description: Digest of the image just built. name: IMAGE_DIGEST type: string workspaces: - name: source steps: - name: build securityContext: runAsUser: 1000 1 image: USD(params.BUILDER_IMAGE) workingDir: USD(workspaces.source.path) script: | echo "Running as USER ID `id`" 2 buildah --storage-driver=USD(params.STORAGE_DRIVER) bud \ USD(params.BUILD_EXTRA_ARGS) --format=USD(params.FORMAT) \ --tls-verify=USD(params.TLSVERIFY) --no-cache \ -f USD(params.DOCKERFILE) -t USD(params.IMAGE) USD(params.CONTEXT) [[ "USD(params.SKIP_PUSH)" == "true" ]] && echo "Push skipped" && exit 0 buildah --storage-driver=USD(params.STORAGE_DRIVER) push \ USD(params.PUSH_EXTRA_ARGS) --tls-verify=USD(params.TLSVERIFY) \ --digestfile USD(workspaces.source.path)/image-digest USD(params.IMAGE) \ docker://USD(params.IMAGE) cat USD(workspaces.source.path)/image-digest | tee /tekton/results/IMAGE_DIGEST volumeMounts: - name: varlibcontainers mountPath: /home/build/.local/share/containers 3 volumes: - name: varlibcontainers emptyDir: {} 1 Run the container explicitly as the user id 1000 , which corresponds to the build user in the Buildah image. 2 Display the user id to confirm that the process is running as user id 1000 . 3 You can change the path for the volume mount as necessary. 4.18.3. Starting a task run with custom config map, or a pipeline run After defining the custom Buildah cluster task, you can create a TaskRun object that builds an image as a build user with user id 1000 . In addition, you can integrate the TaskRun object as part of a PipelineRun object. Procedure Create a TaskRun object with a custom ConfigMap and Dockerfile objects. Example: A task run that runs Buildah as user id 1000 apiVersion: v1 data: Dockerfile: | ARG BASE_IMG=registry.access.redhat.com/ubi8/ubi FROM USDBASE_IMG AS buildah-runner RUN dnf -y update && \ dnf -y install git && \ dnf clean all CMD git kind: ConfigMap metadata: name: dockerfile 1 --- apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: buildah-as-user-1000 spec: serviceAccountName: pipelines-sa-userid-1000 2 params: - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser taskRef: kind: Task name: buildah-as-user workspaces: - configMap: name: dockerfile 3 name: source 1 Use a config map because the focus is on the task run, without any prior task that fetches some sources with a Dockerfile. 2 The name of the service account that you created. 3 Mount a config map as the source workspace for the buildah-as-user task. (Optional) Create a pipeline and a corresponding pipeline run. 
Example: A pipeline and corresponding pipeline run apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: pipeline-buildah-as-user-1000 spec: params: - name: IMAGE - name: URL workspaces: - name: shared-workspace - name: sslcertdir optional: true tasks: - name: fetch-repository 1 taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: USD(params.URL) - name: subdirectory value: "" - name: deleteExisting value: "true" - name: buildah taskRef: name: buildah-as-user 2 runAfter: - fetch-repository workspaces: - name: source workspace: shared-workspace - name: sslcertdir workspace: sslcertdir params: - name: IMAGE value: USD(params.IMAGE) --- apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: pipelinerun-buildah-as-user-1000 spec: taskRunSpecs: - pipelineTaskName: buildah taskServiceAccountName: pipelines-sa-userid-1000 3 params: - name: URL value: https://github.com/openshift/pipelines-vote-api - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser pipelineRef: name: pipeline-buildah-as-user-1000 workspaces: - name: shared-workspace 4 volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi 1 Use the git-clone cluster task to fetch the source containing a Dockerfile and build it using the modified Buildah task. 2 Refer to the modified Buildah task. 3 Use the service account that you created for the Buildah task. 4 Share data between the git-clone task and the modified Buildah task using a persistent volume claim (PVC) created automatically by the controller. Start the task run or the pipeline run. 4.18.4. Limitations of unprivileged builds The process for unprivileged builds works with most Dockerfile objects. However, there are some known limitations that might cause a build to fail: Using the --mount=type=cache option might fail due to a lack of necessary permissions. For more information, see this article . Using the --mount=type=secret option fails because mounting resources requires additional capabilities that are not provided by the custom SCC. Additional resources Managing security context constraints (SCCs) | [
"oc get tektoninstallersets.operator.tekton.dev | awk '/pipeline-main-static/ {print USD1}' | xargs oc delete tektoninstallersets",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": false}}}}}'",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: enable-bundles-resolver: true enable-cluster-resolver: true enable-git-resolver: true enable-hub-resolver: true",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: bundles-resolver-config: default-service-account: pipelines cluster-resolver-config: default-namespace: test git-resolver-config: server-url: localhost.com hub-resolver-config: default-tekton-hub-catalog: tekton",
"annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && \"docs/*.md\".pathChanged()",
"yaml kind: PipelineRun spec: timeouts: pipeline: \"0\" # No timeout tasks: \"0h3m0s\"",
"- name: IMAGE_NAME value: 'image-registry.openshift-image-registry.svc:5000/<test_namespace>/<test_pipelinerun>'",
"- name: IMAGE_NAME value: 'image-registry.openshift-image-registry.svc:5000/{{ target_namespace }}/USD(context.pipelineRun.name)'",
"kind: Task apiVersion: tekton.dev/v1beta1 metadata: name: write-array annotations: description: | A simple task that writes array spec: results: - name: array-results type: array description: The array results",
"echo -n \"[\\\"hello\\\",\\\"world\\\"]\" | tee USD(results.array-results.path)",
"apiVersion: v1 kind: Secret metadata: name: tekton-hub-db labels: app: tekton-hub-db type: Opaque stringData: POSTGRES_HOST: <hostname> POSTGRES_DB: <database_name> POSTGRES_USER: <username> POSTGRES_PASSWORD: <password> POSTGRES_PORT: <listening_port_number>",
"annotations: pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch == \"main\" && source_branch == \"wip\"",
"apiVersion: v1 kind: ConfigMap metadata: name: config-observability namespace: tekton-pipelines labels: app.kubernetes.io/instance: default app.kubernetes.io/part-of: tekton-pipelines data: _example: | metrics.taskrun.level: \"task\" metrics.taskrun.duration-type: \"histogram\" metrics.pipelinerun.level: \"pipeline\" metrics.pipelinerun.duration-type: \"histogram\"",
"oc get route -n openshift-pipelines pipelines-as-code-controller --template='https://{{ .spec.host }}'",
"error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io \"openshift-operators-prometheus-k8s-read-binding\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"Role\", Name:\"openshift-operator-read\"}: cannot change roleRef",
"Error: error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time=\"2022-03-04T09:47:57Z\" level=error msg=\"error writing \\\"0 0 4294967295\\\\n\\\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted\" time=\"2022-03-04T09:47:57Z\" level=error msg=\"(unable to determine exit status)\"",
"securityContext: capabilities: add: [\"SETFCAP\"]",
"oc get tektoninstallerset NAME READY REASON addon-clustertasks-nx5xz False Error addon-communityclustertasks-cfb2p True addon-consolecli-ftrb8 True addon-openshift-67dj2 True addon-pac-cf7pz True addon-pipelines-fvllm True addon-triggers-b2wtt True addon-versioned-clustertasks-1-8-hqhnw False Error pipeline-w75ww True postpipeline-lrs22 True prepipeline-ldlhw True rhosp-rbac-4dmgb True trigger-hfg64 True validating-mutating-webhoook-28rf7 True",
"oc get tektonconfig config NAME VERSION READY REASON config 1.8.1 True",
"tkn pipeline export test_pipeline -n openshift-pipelines",
"tkn pipelinerun export test_pipeline_run -n openshift-pipelines",
"spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"true\" - name: pipelineTemplates value: \"true\" - name: communityClusterTasks value: \"false\"",
"hub: params: - name: enable-devconsole-integration value: \"true\"",
"STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP \"RUN /usr/libexec/s2i/assemble\": exit status 127 time=\"2021-11-04T13:05:26Z\" level=error msg=\"exit status 127\"",
"error updating rolebinding openshift-operators-prometheus-k8s-read-binding: RoleBinding.rbac.authorization.k8s.io \"openshift-operators-prometheus-k8s-read-binding\" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:\"rbac.authorization.k8s.io\", Kind:\"Role\", Name:\"openshift-operator-read\"}: cannot change roleRef",
"Error: error writing \"0 0 4294967295\\n\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted time=\"2022-03-04T09:47:57Z\" level=error msg=\"error writing \\\"0 0 4294967295\\\\n\\\" to /proc/22/uid_map: write /proc/22/uid_map: operation not permitted\" time=\"2022-03-04T09:47:57Z\" level=error msg=\"(unable to determine exit status)\"",
"securityContext: capabilities: add: [\"SETFCAP\"]",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false",
"STEP 7: RUN /usr/libexec/s2i/assemble /bin/sh: /usr/libexec/s2i/assemble: No such file or directory subprocess exited with status 127 subprocess exited with status 127 error building at STEP \"RUN /usr/libexec/s2i/assemble\": exit status 127 time=\"2021-11-04T13:05:26Z\" level=error msg=\"exit status 127\"",
"Error from server (InternalError): Internal error occurred: failed calling webhook \"validation.webhook.pipeline.tekton.dev\": Post \"https://tekton-pipelines-webhook.openshift-pipelines.svc:443/resource-validation?timeout=10s\": service \"tekton-pipelines-webhook\" not found.",
"oc get route -n <namespace>",
"oc edit route -n <namespace> <el-route_name>",
"spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: 8000 to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None",
"spec: host: el-event-listener-q8c3w5-test-upgrade1.apps.ve49aws.aws.ospqa.com port: targetPort: http-listener to: kind: Service name: el-event-listener-q8c3w5 weight: 100 wildcardPolicy: None",
"pruner: resources: - pipelinerun - taskrun schedule: \"*/5 * * * *\" # cron schedule keep: 2 # delete all keeping n",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"true\" - name: pipelineTemplates value: \"true\"",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: profile: all targetNamespace: openshift-pipelines pipeline: params: - name: enableMetrics value: \"true\"",
"tkn pipeline start build-and-deploy -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml -p deployment-name=pipelines-vote-api -p git-url=https://github.com/openshift/pipelines-vote-api.git -p IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-api --use-param-defaults",
"- name: deploy params: - name: SCRIPT value: oc rollout status <deployment-name> runAfter: - build taskRef: kind: ClusterTask name: openshift-client",
"steps: - name: git env: - name: HOME value: /root image: USD(params.BASE_IMAGE) workingDir: USD(workspaces.source.path)",
"fsGroup: type: MustRunAs",
"params: - name: github_json value: USD(body)",
"annotations: triggers.tekton.dev/old-escape-quotes: \"true\"",
"oc patch el/<eventlistener_name> -p '{\"metadata\":{\"finalizers\":[\"foregroundDeletion\"]}}' --type=merge",
"oc patch el/github-listener-interceptor -p '{\"metadata\":{\"finalizers\":[\"foregroundDeletion\"]}}' --type=merge",
"oc patch crd/eventlisteners.triggers.tekton.dev -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge",
"Error executing command: fork/exec /bin/bash: exec format error",
"skopeo inspect --raw <image_name>| jq '.manifests[] | select(.platform.architecture == \"<architecture>\") | .digest'",
"useradd: /etc/passwd.8: lock file already used useradd: cannot lock /etc/passwd; try again later.",
"oc login -u <login> -p <password> https://openshift.example.com:6443",
"oc edit clustertask buildah",
"command: ['buildah', 'bud', '--format=USD(params.FORMAT)', '--tls-verify=USD(params.TLSVERIFY)', '--layers', '-f', 'USD(params.DOCKERFILE)', '-t', 'USD(resources.outputs.image.url)', 'USD(params.CONTEXT)']",
"command: ['buildah', '--storage-driver=overlay', 'bud', '--format=USD(params.FORMAT)', '--tls-verify=USD(params.TLSVERIFY)', '--no-cache', '-f', 'USD(params.DOCKERFILE)', '-t', 'USD(params.IMAGE)', 'USD(params.CONTEXT)']",
"apiVersion: tekton.dev/v1beta1 1 kind: Task 2 metadata: name: apply-manifests 3 spec: 4 workspaces: - name: source params: - name: manifest_dir description: The directory in source that contains yaml manifests type: string default: \"k8s\" steps: - name: apply image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest workingDir: /workspace/source command: [\"/bin/bash\", \"-c\"] args: - |- echo Applying manifests in USD(params.manifest_dir) directory oc apply -f USD(params.manifest_dir) echo -----------------------------------",
"spec: pipeline: disable-working-directory-overwrite: false disable-home-env-overwrite: false",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun 1 metadata: generateName: guarded-pr- spec: serviceAccountName: 'pipeline' pipelineSpec: params: - name: path type: string description: The path of the file to be created workspaces: - name: source description: | This workspace is shared among all the pipeline tasks to read/write common resources tasks: - name: create-file 2 when: - input: \"USD(params.path)\" operator: in values: [\"README.md\"] workspaces: - name: source workspace: source taskSpec: workspaces: - name: source description: The workspace to create the readme file in steps: - name: write-new-stuff image: ubuntu script: 'touch USD(workspaces.source.path)/README.md' - name: check-file params: - name: path value: \"USD(params.path)\" workspaces: - name: source workspace: source runAfter: - create-file taskSpec: params: - name: path workspaces: - name: source description: The workspace to check for the file results: - name: exists description: indicates whether the file exists or is missing steps: - name: check-file image: alpine script: | if test -f USD(workspaces.source.path)/USD(params.path); then printf yes | tee /tekton/results/exists else printf no | tee /tekton/results/exists fi - name: echo-file-exists when: 3 - input: \"USD(tasks.check-file.results.exists)\" operator: in values: [\"yes\"] taskSpec: steps: - name: echo image: ubuntu script: 'echo file exists' - name: task-should-be-skipped-1 when: 4 - input: \"USD(params.path)\" operator: notin values: [\"README.md\"] taskSpec: steps: - name: echo image: ubuntu script: exit 1 finally: - name: finally-task-should-be-executed when: 5 - input: \"USD(tasks.echo-file-exists.status)\" operator: in values: [\"Succeeded\"] - input: \"USD(tasks.status)\" operator: in values: [\"Succeeded\"] - input: \"USD(tasks.check-file.results.exists)\" operator: in values: [\"yes\"] - input: \"USD(params.path)\" operator: in values: [\"README.md\"] taskSpec: steps: - name: echo image: ubuntu script: 'echo finally done' params: - name: path value: README.md workspaces: - name: source volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 16Mi",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: clone-cleanup-workspace 1 spec: workspaces: - name: git-source 2 tasks: - name: clone-app-repo 3 taskRef: name: git-clone-from-catalog params: - name: url value: https://github.com/tektoncd/community.git - name: subdirectory value: application workspaces: - name: output workspace: git-source finally: - name: cleanup 4 taskRef: 5 name: cleanup-workspace workspaces: 6 - name: source workspace: git-source - name: check-git-commit params: 7 - name: commit value: USD(tasks.clone-app-repo.results.commit) taskSpec: 8 params: - name: commit steps: - name: check-commit-initialized image: alpine script: | if [[ ! USD(params.commit) ]]; then exit 1 fi",
"apiVersion: tekton.dev/v1beta1 1 kind: TaskRun 2 metadata: name: apply-manifests-taskrun 3 spec: 4 serviceAccountName: pipeline taskRef: 5 kind: Task name: apply-manifests workspaces: 6 - name: source persistentVolumeClaim: claimName: source-pvc",
"apiVersion: tekton.dev/v1beta1 1 kind: Pipeline 2 metadata: name: build-and-deploy 3 spec: 4 workspaces: 5 - name: shared-workspace params: 6 - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: \"pipelines-1.10\" - name: IMAGE type: string description: image to be built from the code tasks: 7 - name: fetch-repository taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: USD(params.git-url) - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.git-revision) - name: build-image 8 taskRef: name: buildah kind: ClusterTask params: - name: TLSVERIFY value: \"false\" - name: IMAGE value: USD(params.IMAGE) workspaces: - name: source workspace: shared-workspace runAfter: - fetch-repository - name: apply-manifests 9 taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: 10 - build-image - name: update-deployment taskRef: name: update-deployment workspaces: - name: source workspace: shared-workspace params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests",
"apiVersion: tekton.dev/v1beta1 1 kind: PipelineRun 2 metadata: name: build-deploy-api-pipelinerun 3 spec: pipelineRef: name: build-and-deploy 4 params: 5 - name: deployment-name value: vote-api - name: git-url value: https://github.com/openshift-pipelines/vote-api.git - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/vote-api workspaces: 6 - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: 1 - name: shared-workspace params: tasks: 2 - name: build-image taskRef: name: buildah kind: ClusterTask params: - name: TLSVERIFY value: \"false\" - name: IMAGE value: USD(params.IMAGE) workspaces: 3 - name: source 4 workspace: shared-workspace 5 runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: 6 - name: source workspace: shared-workspace runAfter: - build-image",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: build-deploy-api-pipelinerun spec: pipelineRef: name: build-and-deploy params: workspaces: 1 - name: shared-workspace 2 volumeClaimTemplate: 3 spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerBinding 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id)",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: TriggerTemplate 2 metadata: name: vote-app 3 spec: params: 4 - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.10 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: 5 - apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: build-deploy-USD(tt.params.git-repo-name)-USD(uid) spec: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: Trigger 2 metadata: name: vote-trigger 3 spec: serviceAccountName: pipeline 4 interceptors: - ref: name: \"github\" 5 params: 6 - name: \"secretRef\" value: secretName: github-secret secretKey: secretToken - name: \"eventTypes\" value: [\"push\"] bindings: - ref: vote-app 7 template: 8 ref: vote-app --- apiVersion: v1 kind: Secret 9 metadata: name: github-secret type: Opaque stringData: secretToken: \"1234567\"",
"apiVersion: triggers.tekton.dev/v1beta1 1 kind: EventListener 2 metadata: name: vote-app 3 spec: serviceAccountName: pipeline 4 triggers: - triggerRef: vote-trigger 5",
"oc get tektonconfig config",
"NAME VERSION READY REASON config 1.9.2 True",
"oc get tektonpipeline,tektontrigger,tektonaddon,pac",
"NAME VERSION READY REASON tektonpipeline.operator.tekton.dev/pipeline v0.41.1 True NAME VERSION READY REASON tektontrigger.operator.tekton.dev/trigger v0.22.2 True NAME VERSION READY REASON tektonaddon.operator.tekton.dev/addon 1.9.2 True NAME VERSION READY REASON openshiftpipelinesascode.operator.tekton.dev/pipelines-as-code v0.15.5 True",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-pipelines-operator namespace: openshift-operators spec: channel: <channel name> 1 name: openshift-pipelines-operator-rh 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4",
"oc apply -f sub.yaml",
"oc login -u <login> -p <password> https://openshift.example.com:6443",
"oc new-project pipelines-tutorial",
"oc get serviceaccount pipeline",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/01_apply_manifest_task.yaml oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/02_update_deployment_task.yaml",
"tkn task list",
"NAME DESCRIPTION AGE apply-manifests 1 minute ago update-deployment 48 seconds ago",
"tkn clustertasks list",
"NAME DESCRIPTION AGE buildah 1 day ago git-clone 1 day ago s2i-python 1 day ago tkn 1 day ago",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: build-and-deploy spec: workspaces: - name: shared-workspace params: - name: deployment-name type: string description: name of the deployment to be patched - name: git-url type: string description: url of the git repo for the code of deployment - name: git-revision type: string description: revision to be used from repo of the code for deployment default: \"pipelines-1.10\" - name: IMAGE type: string description: image to be built from the code tasks: - name: fetch-repository taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: USD(params.git-url) - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.git-revision) - name: build-image taskRef: name: buildah kind: ClusterTask params: - name: IMAGE value: USD(params.IMAGE) workspaces: - name: source workspace: shared-workspace runAfter: - fetch-repository - name: apply-manifests taskRef: name: apply-manifests workspaces: - name: source workspace: shared-workspace runAfter: - build-image - name: update-deployment taskRef: name: update-deployment params: - name: deployment value: USD(params.deployment-name) - name: IMAGE value: USD(params.IMAGE) runAfter: - apply-manifests",
"oc create -f <pipeline-yaml-file-name.yaml>",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/04_pipeline.yaml",
"tkn pipeline list",
"NAME AGE LAST RUN STARTED DURATION STATUS build-and-deploy 1 minute ago --- --- --- ---",
"oc describe imagestream python -n openshift",
"Name: python Namespace: openshift [...] 3.8-ubi8 (latest) tagged from registry.redhat.io/ubi8/python-38:latest prefer registry pullthrough when referencing this tag Build and run Python 3.8 applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/s2i-python-container/blob/master/3.8/README.md. Tags: builder, python Supports: python:3.8, python Example Repo: https://github.com/sclorg/django-ex.git [...]",
"oc image mirror registry.redhat.io/ubi8/python-38:latest <mirror-registry>:<port>/ubi8/python-38",
"oc tag <mirror-registry>:<port>/ubi8/python-38 python:latest --scheduled -n openshift",
"oc describe imagestream python -n openshift",
"Name: python Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/ubi8/python-38 * <mirror-registry>:<port>/ubi8/python-38@sha256:3ee3c2e70251e75bfeac25c0c33356add9cc4abcbc9c51d858f39e4dc29c5f58 [...]",
"oc describe imagestream golang -n openshift",
"Name: golang Namespace: openshift [...] 1.14.7-ubi8 (latest) tagged from registry.redhat.io/ubi8/go-toolset:1.14.7 prefer registry pullthrough when referencing this tag Build and run Go applications on UBI 8. For more information about using this builder image, including OpenShift considerations, see https://github.com/sclorg/golang-container/blob/master/README.md. Tags: builder, golang, go Supports: golang Example Repo: https://github.com/sclorg/golang-ex.git [...]",
"oc image mirror registry.redhat.io/ubi8/go-toolset:1.14.7 <mirror-registry>:<port>/ubi8/go-toolset",
"oc tag <mirror-registry>:<port>/ubi8/go-toolset golang:latest --scheduled -n openshift",
"oc describe imagestream golang -n openshift",
"Name: golang Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/ubi8/go-toolset * <mirror-registry>:<port>/ubi8/go-toolset@sha256:59a74d581df3a2bd63ab55f7ac106677694bf612a1fe9e7e3e1487f55c421b37 [...]",
"oc describe imagestream cli -n openshift",
"Name: cli Namespace: openshift [...] latest updates automatically from registry quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 * quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 [...]",
"oc image mirror quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev:latest",
"oc tag <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev cli:latest --scheduled -n openshift",
"oc describe imagestream cli -n openshift",
"Name: cli Namespace: openshift [...] latest updates automatically from registry <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev * <mirror-registry>:<port>/openshift-release-dev/ocp-v4.0-art-dev@sha256:65c68e8c22487375c4c6ce6f18ed5485915f2bf612e41fef6d41cbfcdb143551 [...]",
"tkn pipeline start build-and-deploy -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml -p deployment-name=pipelines-vote-api -p git-url=https://github.com/openshift/pipelines-vote-api.git -p IMAGE='image-registry.openshift-image-registry.svc:5000/USD(context.pipelineRun.namespace)/pipelines-vote-api' --use-param-defaults",
"tkn pipelinerun logs <pipelinerun_id> -f",
"tkn pipeline start build-and-deploy -w name=shared-workspace,volumeClaimTemplateFile=https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/01_pipeline/03_persistent_volume_claim.yaml -p deployment-name=pipelines-vote-ui -p git-url=https://github.com/openshift/pipelines-vote-ui.git -p IMAGE='image-registry.openshift-image-registry.svc:5000/USD(context.pipelineRun.namespace)/pipelines-vote-ui' --use-param-defaults",
"tkn pipelinerun logs <pipelinerun_id> -f",
"tkn pipelinerun list",
"NAME STARTED DURATION STATUS build-and-deploy-run-xy7rw 1 hour ago 2 minutes Succeeded build-and-deploy-run-z2rz8 1 hour ago 19 minutes Succeeded",
"oc get route pipelines-vote-ui --template='http://{{.spec.host}}'",
"tkn pipeline start build-and-deploy --last",
"apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerBinding metadata: name: vote-app spec: params: - name: git-repo-url value: USD(body.repository.url) - name: git-repo-name value: USD(body.repository.name) - name: git-revision value: USD(body.head_commit.id)",
"oc create -f <triggerbinding-yaml-file-name.yaml>",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/01_binding.yaml",
"apiVersion: triggers.tekton.dev/v1beta1 kind: TriggerTemplate metadata: name: vote-app spec: params: - name: git-repo-url description: The git repository url - name: git-revision description: The git revision default: pipelines-1.10 - name: git-repo-name description: The name of the deployment to be created / patched resourcetemplates: - apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: build-deploy-USD(tt.params.git-repo-name)- spec: serviceAccountName: pipeline pipelineRef: name: build-and-deploy params: - name: deployment-name value: USD(tt.params.git-repo-name) - name: git-url value: USD(tt.params.git-repo-url) - name: git-revision value: USD(tt.params.git-revision) - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/USD(context.pipelineRun.namespace)/USD(tt.params.git-repo-name) workspaces: - name: shared-workspace volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"oc create -f <triggertemplate-yaml-file-name.yaml>",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/02_template.yaml",
"apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: vote-trigger spec: serviceAccountName: pipeline bindings: - ref: vote-app template: ref: vote-app",
"oc create -f <trigger-yaml-file-name.yaml>",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/03_trigger.yaml",
"apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: vote-app spec: serviceAccountName: pipeline triggers: - triggerRef: vote-trigger",
"apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: vote-app spec: serviceAccountName: pipeline triggers: - bindings: - ref: vote-app template: ref: vote-app",
"oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled",
"oc create -f <eventlistener-yaml-file-name.yaml>",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/pipelines-1.10/03_triggers/04_event_listener.yaml",
"oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: <hostname> to: kind: Service name: frontend 2 tls: termination: reencrypt 3 key: [as in edge termination] certificate: [as in edge termination] caCertificate: [as in edge termination] destinationCACertificate: |- 4 -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc expose svc el-vote-app",
"apiVersion: v1 kind: ServiceAccount metadata: name: el-sa ---",
"kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: el-sel-clusterrole rules: - apiGroups: [\"triggers.tekton.dev\"] resources: [\"eventlisteners\", \"clustertriggerbindings\", \"clusterinterceptors\", \"triggerbindings\", \"triggertemplates\", \"triggers\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"\"] resources: [\"configmaps\", \"secrets\"] verbs: [\"get\", \"list\", \"watch\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"impersonate\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: el-mul-clusterrolebinding subjects: - kind: ServiceAccount name: el-sa namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: el-sel-clusterrole",
"apiVersion: triggers.tekton.dev/v1beta1 kind: EventListener metadata: name: namespace-selector-listener spec: serviceAccountName: el-sa namespaceSelector: matchNames: - default - foo",
"apiVersion: v1 kind: ServiceAccount metadata: name: foo-trigger-sa namespace: foo",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: triggercr-rolebinding namespace: foo subjects: - kind: ServiceAccount name: foo-trigger-sa namespace: foo roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tekton-triggers-eventlistener-roles",
"apiVersion: triggers.tekton.dev/v1beta1 kind: Trigger metadata: name: trigger namespace: foo spec: serviceAccountName: foo-trigger-sa interceptors: - ref: name: \"github\" params: - name: \"secretRef\" value: secretName: github-secret secretKey: secretToken - name: \"eventTypes\" value: [\"push\"] bindings: - ref: vote-app template: ref: vote-app",
"echo \"URL: USD(oc get route el-vote-app --template='https://{{.spec.host}}')\"",
"echo \"URL: USD(oc get route el-vote-app --template='http://{{.spec.host}}')\"",
"git clone [email protected]:<your GitHub ID>/pipelines-vote-ui.git -b pipelines-1.10",
"git commit -m \"empty-commit\" --allow-empty && git push origin pipelines-1.10",
"tkn pipelinerun list",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app.kubernetes.io/managed-by: EventListener app.kubernetes.io/part-of: Triggers eventlistener: github-listener annotations: networkoperator.openshift.io/ignore-errors: \"\" name: el-monitor namespace: test spec: endpoints: - interval: 10s port: http-metrics jobLabel: name namespaceSelector: matchNames: - test selector: matchLabels: app.kubernetes.io/managed-by: EventListener app.kubernetes.io/part-of: Triggers eventlistener: github-listener",
"git commit -m \"empty-commit\" --allow-empty && git push origin main",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: params: - name: createRbacResource value: \"false\" profile: all targetNamespace: openshift-pipelines addon: params: - name: clusterTasks value: \"false\"",
"oc delete clustertask buildah-1-6-0",
"oc delete tektoninstallerset versioned-clustertask-1-6-k98as",
"scopes: - name: agent:create users: <username_registered_with_the_Git_repository_hosting_service_provider> - name: catalog:refresh users: <username_registered_with_the_Git_repository_hosting_service_provider> - name: config:refresh users: <username_registered_with_the_Git_repository_hosting_service_provider>",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonHub metadata: name: hub spec: targetNamespace: openshift-pipelines 1 api: hubConfigUrl: https://raw.githubusercontent.com/tektoncd/hub/main/config.yaml 2",
"oc apply -f TektonHub.yaml 1",
"oc get tektonhub.operator.tekton.dev NAME VERSION READY REASON APIURL UIURL hub v1.7.2 True https://api.route.url/ https://ui.route.url/",
"curl -X POST -H \"Authorization: <jwt-token>\" \\ 1 <api-url>/catalog/<catalog_name>/refresh 2",
"[{\"id\":1,\"catalogName\":\"tekton\",\"status\":\"queued\"}]",
"curl -X POST -H \"Authorization: <jwt-token>\" \\ 1 <api-url>/catalog/refresh 2",
"curl -X PUT --header \"Content-Type: application/json\" -H \"Authorization: <access-token>\" \\ 1 --data '{\"name\":\"catalog-refresh-agent\",\"scopes\": [\"catalog:refresh\"]}' <api-route>/system/user/agent",
"apiVersion: v1 kind: Secret metadata: name: catalog-refresh type: Opaque stringData: HUB_TOKEN: <hub_token> 1",
"oc apply -f 05-catalog-refresh-cj/ -n openshift-pipelines.",
"apiVersion: batch/v1 kind: CronJob metadata: name: catalog-refresh labels: app: tekton-hub-api spec: schedule: \"*/30 * * * *\"",
"scopes: - name: agent:create users: [<username_1>, <username_2>] 1 - name: catalog:refresh users: [<username_3>, <username_4>] - name: config:refresh users: [<username_5>, <username_6>] default: scopes: - rating:read - rating:write",
"curl -X POST -H \"Authorization: <access-token>\" \\ 1 --header \"Content-Type: application/json\" --data '{\"force\": true} <api-route>/system/config/refresh",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: targetNamespace: openshift-pipelines hub: params: - name: enable-devconsole-integration value: \"false\"",
"spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: \"false\" bitbucket-cloud-check-source-ip: \"true\" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: \"true\" secret-auto-create: \"true\"",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": false}}}}}'",
"spec: addon: enablePipelinesAsCode: false",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": true}}}}}'",
"tkn pac bootstrap github-app",
"oc -n openshift-pipelines create secret generic pipelines-as-code-secret --from-literal github-private-key=\"USD(cat <PATH_PRIVATE_KEY>)\" \\ 1 --from-literal github-application-id=\"<APP_ID>\" \\ 2 --from-literal webhook.secret=\"<WEBHOOK_SECRET>\" 3",
"git --amend -a --no-edit && git push --force-with-lease <origin> <branchname>",
"tkn pac create repo",
"? Enter the Git repository url (default: https://github.com/owner/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes [β] Repository owner-repo has been created in repo-pipelines namespace [β] Setting up GitHub Webhook for Repository https://github.com/owner/repo π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: sJNwdmTifHTs): sJNwdmTifHTs i \\ufe0fYou now need to create a GitHub personal access token, please checkout the docs at https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token for the required scopes ? Please enter the GitHub access token: **************************************** [β] Webhook has been created on repository owner/repo π Webhook Secret owner-repo has been created in the repo-pipelines namespace. π Repository CR owner-repo has been updated with webhook secret in the repo-pipelines namespace i Directory .tekton has been created. [β] We have detected your repository using the programming language Go. [β] A basic template has been created in /home/Go/src/github.com/owner/repo/.tekton/pipelinerun.yaml, feel free to customize it.",
"echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')",
"openssl rand -hex 20",
"oc -n target-namespace create secret generic github-webhook-config --from-literal provider.token=\"<GITHUB_PERSONAL_ACCESS_TOKEN>\" --from-literal webhook.secret=\"<WEBHOOK_SECRET>\"",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: \"https://github.com/owner/repo\" git_provider: secret: name: \"github-webhook-config\" key: \"provider.token\" # Set this if you have a different key in your secret webhook_secret: name: \"github-webhook-config\" key: \"webhook.secret\" # Set this if you have a different key for your secret",
"tkn pac webhook add -n repo-pipelines",
"[β] Setting up GitHub Webhook for Repository https://github.com/owner/repo π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: AeHdHTJVfAeH): AeHdHTJVfAeH [β] Webhook has been created on repository owner/repo π Secret owner-repo has been updated with webhook secert in the repo-pipelines namespace.",
"tkn pac webhook update-token -n repo-pipelines",
"? Please enter your personal access token: **************************************** π Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace.",
"spec: git_provider: secret: name: \"github-webhook-config\"",
"oc -n USDtarget_namespace patch secret github-webhook-config -p \"{\\\"data\\\": {\\\"provider.token\\\": \\\"USD(echo -n USDNEW_TOKEN|base64 -w0)\\\"}}\"",
"tkn pac create repo",
"? Enter the Git repository url (default: https://gitlab.com/owner/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes [β] Repository repositories-project has been created in repo-pipelines namespace [β] Setting up GitLab Webhook for Repository https://gitlab.com/owner/repo ? Please enter the project ID for the repository you want to be configured, project ID refers to an unique ID (e.g. 34405323) shown at the top of your GitLab project : 17103 π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: lFjHIEcaGFlF): lFjHIEcaGFlF i \\ufe0fYou now need to create a GitLab personal access token with `api` scope i \\ufe0fGo to this URL to generate one https://gitlab.com/-/profile/personal_access_tokens, see https://is.gd/rOEo9B for documentation ? Please enter the GitLab access token: ************************** ? Please enter your GitLab API URL:: https://gitlab.com [β] Webhook has been created on your repository π Webhook Secret repositories-project has been created in the repo-pipelines namespace. π Repository CR repositories-project has been updated with webhook secret in the repo-pipelines namespace i Directory .tekton has been created. [β] A basic template has been created in /home/Go/src/gitlab.com/repositories/project/.tekton/pipelinerun.yaml, feel free to customize it.",
"echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')",
"openssl rand -hex 20",
"oc -n target-namespace create secret generic gitlab-webhook-config --from-literal provider.token=\"<GITLAB_PERSONAL_ACCESS_TOKEN>\" --from-literal webhook.secret=\"<WEBHOOK_SECRET>\"",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: \"https://gitlab.com/owner/repo\" 1 git_provider: secret: name: \"gitlab-webhook-config\" key: \"provider.token\" # Set this if you have a different key in your secret webhook_secret: name: \"gitlab-webhook-config\" key: \"webhook.secret\" # Set this if you have a different key for your secret",
"tkn pac webhook add -n repo-pipelines",
"[β] Setting up GitLab Webhook for Repository https://gitlab.com/owner/repo π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes ? Please enter the secret to configure the webhook for payload validation (default: AeHdHTJVfAeH): AeHdHTJVfAeH [β] Webhook has been created on repository owner/repo π Secret owner-repo has been updated with webhook secert in the repo-pipelines namespace.",
"tkn pac webhook update-token -n repo-pipelines",
"? Please enter your personal access token: **************************************** π Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace.",
"spec: git_provider: secret: name: \"gitlab-webhook-config\"",
"oc -n USDtarget_namespace patch secret gitlab-webhook-config -p \"{\\\"data\\\": {\\\"provider.token\\\": \\\"USD(echo -n USDNEW_TOKEN|base64 -w0)\\\"}}\"",
"tkn pac create repo",
"? Enter the Git repository url (default: https://bitbucket.org/workspace/repo): ? Please enter the namespace where the pipeline should run (default: repo-pipelines): ! Namespace repo-pipelines is not found ? Would you like me to create the namespace repo-pipelines? Yes [β] Repository workspace-repo has been created in repo-pipelines namespace [β] Setting up Bitbucket Webhook for Repository https://bitbucket.org/workspace/repo ? Please enter your bitbucket cloud username: <username> i \\ufe0fYou now need to create a Bitbucket Cloud app password, please checkout the docs at https://is.gd/fqMHiJ for the required permissions ? Please enter the Bitbucket Cloud app password: ************************************ π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes [β] Webhook has been created on repository workspace/repo π Webhook Secret workspace-repo has been created in the repo-pipelines namespace. π Repository CR workspace-repo has been updated with webhook secret in the repo-pipelines namespace i Directory .tekton has been created. [β] A basic template has been created in /home/Go/src/bitbucket/repo/.tekton/pipelinerun.yaml, feel free to customize it.",
"echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')",
"oc -n target-namespace create secret generic bitbucket-cloud-token --from-literal provider.token=\"<BITBUCKET_APP_PASSWORD>\"",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: \"https://bitbucket.com/workspace/repo\" branch: \"main\" git_provider: user: \"<BITBUCKET_USERNAME>\" 1 secret: name: \"bitbucket-cloud-token\" 2 key: \"provider.token\" # Set this if you have a different key in your secret",
"tkn pac webhook add -n repo-pipelines",
"[β] Setting up Bitbucket Webhook for Repository https://bitbucket.org/workspace/repo ? Please enter your bitbucket cloud username: <username> π I have detected a controller url: https://pipelines-as-code-controller-openshift-pipelines.apps.example.com ? Do you want me to use it? Yes [β] Webhook has been created on repository workspace/repo π Secret workspace-repo has been updated with webhook secret in the repo-pipelines namespace.",
"tkn pac webhook update-token -n repo-pipelines",
"? Please enter your personal access token: **************************************** π Secret owner-repo has been updated with new personal access token in the repo-pipelines namespace.",
"spec: git_provider: user: \"<BITBUCKET_USERNAME>\" secret: name: \"bitbucket-cloud-token\" key: \"provider.token\"",
"oc -n USDtarget_namespace patch secret bitbucket-cloud-token -p \"{\\\"data\\\": {\\\"provider.token\\\": \\\"USD(echo -n USDNEW_TOKEN|base64 -w0)\\\"}}\"",
"echo https://USD(oc get route -n pipelines-as-code pipelines-as-code-controller -o jsonpath='{.spec.host}')",
"openssl rand -hex 20",
"oc -n target-namespace create secret generic bitbucket-server-webhook-config --from-literal provider.token=\"<PERSONAL_TOKEN>\" --from-literal webhook.secret=\"<WEBHOOK_SECRET>\"",
"--- apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: my-repo namespace: target-namespace spec: url: \"https://bitbucket.com/workspace/repo\" git_provider: url: \"https://bitbucket.server.api.url/rest\" 1 user: \"<BITBUCKET_USERNAME>\" 2 secret: 3 name: \"bitbucket-server-webhook-config\" key: \"provider.token\" # Set this if you have a different key in your secret webhook_secret: name: \"bitbucket-server-webhook-config\" key: \"webhook.secret\" # Set this if you have a different key for your secret",
"cat <<EOF|kubectl create -n my-pipeline-ci -f- 1 apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: project-repository spec: url: \"https://github.com/<repository>/<project>\" EOF",
"spec: concurrency_limit: <number>",
"pipelinesascode.tekton.dev/task: \"git-clone\" 1",
"pipelinesascode.tekton.dev/task: \"[git-clone, golang-test, tkn]\"",
"pipelinesascode.tekton.dev/task: \"git-clone\" pipelinesascode.tekton.dev/task-1: \"golang-test\" pipelinesascode.tekton.dev/task-2: \"tkn\" 1",
"pipelinesascode.tekton.dev/task: \"[git-clone:0.1]\" 1",
"pipelinesascode.tekton.dev/task: \"<https://remote.url/task.yaml>\" 1",
"pipelinesascode.tekton.dev/task: \"<share/tasks/git-clone.yaml>\" 1",
"pipelinesascode.tekton.dev/pipeline: \"<https://git.provider/raw/pipeline.yaml>\" 1",
"metadata: name: pipeline-pr-main annotations: pipelinesascode.tekton.dev/on-target-branch: \"[main]\" 1 pipelinesascode.tekton.dev/on-event: \"[pull_request]\"",
"metadata: name: pipeline-push-on-main annotations: pipelinesascode.tekton.dev/on-target-branch: \"[refs/heads/main]\" 1 pipelinesascode.tekton.dev/on-event: \"[push]\"",
"pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch == \"main\" && source_branch == \"wip\"",
"pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && \"docs/\\*.md\".pathChanged() 1",
"pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request && event_title.startsWith(\"[DOWNSTREAM]\")",
"pipelinesascode.tekton.dev/on-cel-expression: | event == \"pull_request\" && target_branch != experimental\"",
"pipelinesascode.tekton.dev/task: \"github-add-comment\"",
"[...] tasks: - name: taskRef: name: github-add-comment params: - name: REQUEST_URL value: \"{{ repo_url }}/pull/{{ pull_request_number }}\" 1 - name: COMMENT_OR_FILE value: \"Pipelines as Code IS GREAT!\" - name: GITHUB_TOKEN_SECRET_NAME value: \"{{ git_auth_secret }}\" - name: GITHUB_TOKEN_SECRET_KEY value: \"git-provider-token\"",
"approvers: - approved",
"tkn pac logs -n <my-pipeline-ci> -L 1",
"tkn pac logs -n <my-pipeline-ci> 1",
"<filename>:<line>:<column>: <error message>",
"oc get repo -n <pipelines-as-code-ci>",
"NAME URL NAMESPACE SUCCEEDED REASON STARTTIME COMPLETIONTIME pipelines-as-code-ci https://github.com/openshift-pipelines/pipelines-as-code pipelines-as-code-ci True Succeeded 59m 56m",
"workspace: - name: basic-auth secret: secretName: \"{{ git_auth_secret }}\"",
"workspaces: - name basic-auth params: - name: repo_url - name: revision tasks: workspaces: - name: basic-auth workspace: basic-auth tasks: - name: git-clone-from-catalog taskRef: name: git-clone 1 params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision)",
"pipelinesascode.tekton.dev/max-keep-runs: \"<max_number>\" 1",
"apiVersion: \"pipelinesascode.tekton.dev/v1alpha1\" kind: Repository metadata: name: repo namespace: ns spec: url: \"https://github.com/owner/repo\" git_provider: type: github secret: name: \"owner-token\" incoming: - targets: - main secret: name: repo-incoming-secret type: webhook-url",
"apiVersion: v1 kind: Secret metadata: name: repo-incoming-secret namespace: ns type: Opaque stringData: secret: <very-secure-shared-secret>",
"curl -X POST 'https://control.pac.url/incoming?secret=very-secure-shared-secret&repository=repo&branch=main&pipelinerun=target_pipelinerun'",
"tkn pac [command or options] [arguments]",
"tkn pac --help",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: namespace: openshift 1 labels: pipeline.openshift.io/runtime: <runtime> 2 pipeline.openshift.io/type: <pipeline-type> 3",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: running-in-environment-with-injected-sidecars: true metrics.taskrun.duration-type: histogram metrics.pipelinerun.duration-type: histogram await-sidecar-readiness: true params: - name: enableMetrics value: 'true' default-service-account: pipeline require-git-ssh-secret-known-hosts: false enable-tekton-oci-bundles: false metrics.taskrun.level: task metrics.pipelinerun.level: pipeline embedded-status: both enable-api-fields: stable enable-provenance-in-status: false enable-custom-tasks: true disable-creds-init: false disable-affinity-assistant: true",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: default-service-account: pipeline trigger: default-service-account: pipeline enable-api-fields: stable",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: pipeline: params: - name: enableMetrics value: 'false'",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: params: - name: clusterTasks value: 'false' - name: pipelineTemplates value: 'false' - name: communityClusterTasks value: 'true'",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: hub: params: - name: enable-devconsole-integration value: false",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: params: - name: createRbacResource value: \"false\"",
"annotations: operator.tekton.dev/prune.resources: \"taskrun, pipelinerun\" operator.tekton.dev/prune.keep-since: 7200",
"spec: steps: - name: <step_name> resources: requests: memory: 2Gi cpu: 600m limits: memory: 4Gi cpu: 900m",
"apiVersion: v1 kind: LimitRange metadata: name: <limit_container_resource> spec: limits: - max: cpu: \"600m\" memory: \"2Gi\" min: cpu: \"200m\" memory: \"100Mi\" default: cpu: \"500m\" memory: \"800Mi\" defaultRequest: cpu: \"100m\" memory: \"100Mi\" type: Container",
"apiVersion: v1 kind: LimitRange metadata: name: mem-min-max-demo-lr spec: limits: - max: memory: 1Gi min: memory: 500Mi type: Container",
"spec: steps: - name: step-with-limts resources: requests: memory: 1Gi cpu: 500m limits: memory: 2Gi cpu: 800m",
"apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: pipeline1-pc value: 1000000 description: \"Priority class for pipeline1\"",
"apiVersion: v1 kind: ResourceQuota metadata: name: pipeline1-rq spec: hard: cpu: \"1000\" memory: 200Gi pods: \"10\" scopeSelector: matchExpressions: - operator : In scopeName: PriorityClass values: [\"pipeline1-pc\"]",
"oc describe quota",
"Name: pipeline1-rq Namespace: default Resource Used Hard -------- ---- ---- cpu 0 1k memory 0 200Gi pods 0 10",
"apiVersion: tekton.dev/v1alpha1 kind: Pipeline metadata: name: maven-build spec: workspaces: - name: local-maven-repo resources: - name: app-git type: git tasks: - name: build taskRef: name: mvn resources: inputs: - name: source resource: app-git params: - name: GOALS value: [\"package\"] workspaces: - name: maven-repo workspace: local-maven-repo - name: int-test taskRef: name: mvn runAfter: [\"build\"] resources: inputs: - name: source resource: app-git params: - name: GOALS value: [\"verify\"] workspaces: - name: maven-repo workspace: local-maven-repo - name: gen-report taskRef: name: mvn runAfter: [\"build\"] resources: inputs: - name: source resource: app-git params: - name: GOALS value: [\"site\"] workspaces: - name: maven-repo workspace: local-maven-repo",
"apiVersion: tekton.dev/v1alpha1 kind: Task metadata: name: mvn spec: workspaces: - name: maven-repo inputs: params: - name: GOALS description: The Maven goals to run type: array default: [\"package\"] resources: - name: source type: git steps: - name: mvn image: gcr.io/cloud-builders/mvn workingDir: /workspace/source command: [\"/usr/bin/mvn\"] args: - -Dmaven.repo.local=USD(workspaces.maven-repo.path) - \"USD(inputs.params.GOALS)\" priorityClassName: pipeline1-pc",
"apiVersion: tekton.dev/v1alpha1 kind: PipelineRun metadata: generateName: petclinic-run- spec: pipelineRef: name: maven-build resources: - name: app-git resourceSpec: type: git params: - name: url value: https://github.com/spring-projects/spring-petclinic",
"oc describe quota",
"Name: pipeline1-rq Namespace: default Resource Used Hard -------- ---- ---- cpu 500m 1k memory 10Gi 200Gi pods 1 10",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowedCapabilities: - SETFCAP fsGroup: type: MustRunAs",
"oc adm policy add-scc-to-user <scc-name> -z <service-account-name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: service-account-name 1 namespace: default roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-clusterrole 2 subjects: - kind: ServiceAccount name: pipeline namespace: default",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-clusterrole 1 rules: - apiGroups: - security.openshift.io resourceNames: - nonroot resources: - securitycontextconstraints verbs: - use",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: my-scc is a close replica of anyuid scc. pipelines-scc has fsGroup - RunAsAny. name: my-scc allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null defaultAddCapabilities: null fsGroup: type: RunAsAny groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD runAsUser: type: RunAsAny seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc create -f my-scc.yaml",
"oc create serviceaccount fsgroup-runasany",
"oc adm policy add-scc-to-user my-scc -z fsgroup-runasany",
"oc adm policy add-scc-to-user privileged -z fsgroup-runasany",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: <pipeline-run-name> spec: pipelineRef: name: <pipeline-cluster-task-name> serviceAccountName: 'fsgroup-runasany'",
"apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: <task-run-name> spec: taskRef: name: <cluster-task-name> serviceAccountName: 'fsgroup-runasany'",
"service.beta.openshift.io/serving-cert-secret-name=<secret_name>",
"oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: <hostname> to: kind: Service name: frontend 2 tls: termination: reencrypt 3 key: [as in edge termination] certificate: [as in edge termination] caCertificate: [as in edge termination] destinationCACertificate: |- 4 -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml",
"oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml",
"oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>",
"apiVersion: v1 kind: Secret metadata: annotations: tekton.dev/git-0: github.com tekton.dev/git-1: gitlab.com type: kubernetes.io/basic-auth stringData: username: <username> 1 password: <password> 2",
"apiVersion: v1 kind: Secret metadata: annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 1",
"apiVersion: v1 kind: Secret metadata: name: basic-user-pass 1 annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/basic-auth stringData: username: <username> 2 password: <password> 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: basic-user-pass 2",
"apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-task-run-2 1 spec: serviceAccountName: build-bot 2 taskRef: name: build-push 3",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3",
"oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml",
"apiVersion: v1 kind: Secret metadata: name: ssh-key 1 annotations: tekton.dev/git-0: github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 2 known_hosts: 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: ssh-key 2",
"apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-task-run-2 1 spec: serviceAccountName: build-bot 2 taskRef: name: build-push 3",
"apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3",
"oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonChain metadata: name: chain spec: targetNamespace: openshift-pipelines",
"oc apply -f TektonChain.yaml 1",
"oc get tektonchains.operator.tekton.dev",
"oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.oci.storage\": \"\", \"artifacts.taskrun.format\":\"tekton\", \"artifacts.taskrun.storage\": \"tekton\"}}' 1",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"Error from server (AlreadyExists): secrets \"signing-secrets\" already exists",
"oc delete secret signing-secrets -n openshift-pipelines",
"export NAMESPACE=<namespace> 1 export SERVICE_ACCOUNT_NAME=<service_account> 2",
"oc create secret registry-credentials --from-file=.dockerconfigjson \\ 1 --type=kubernetes.io/dockerconfigjson -n USDNAMESPACE",
"oc patch serviceaccount USDSERVICE_ACCOUNT_NAME -p \"{\\\"imagePullSecrets\\\": [{\\\"name\\\": \\\"registry-credentials\\\"}]}\" -n USDNAMESPACE",
"oc create serviceaccount <service_account_name>",
"apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: build-push-task-run-2 spec: serviceAccountName: build-bot 1 taskRef: name: build-push",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.oci.storage\": \"\", \"artifacts.taskrun.format\":\"tekton\", \"artifacts.taskrun.storage\": \"tekton\"}}'",
"oc delete po -n openshift-pipelines -l app=tekton-chains-controller",
"oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1 taskrun.tekton.dev/build-push-run-output-image-qbjvh created",
"tkn tr describe --last [...truncated output...] NAME STATUS β create-dir-builtimage-9467f Completed β git-source-sourcerepo-p2sk8 Completed β build-and-push Completed β echo Completed β image-digest-exporter-xlkn7 Completed",
"export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}') tkn tr describe --last -o jsonpath=\"{.metadata.annotations.chains\\.tekton\\.dev/signature-taskrun-USDTASKRUN_UID}\" > signature tkn tr describe --last -o jsonpath=\"{.metadata.annotations.chains\\.tekton\\.dev/payload-taskrun-USDTASKRUN_UID}\" | base64 -d > payload",
"cosign verify-blob --key k8s://openshift-pipelines/signing-secrets --signature ./signature ./payload Verified OK",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"oc create secret generic <docker_config_secret_name> \\ 1 --from-file <path_to_config.json> 2",
"oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.format\": \"in-toto\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.storage\": \"oci\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"transparency.enabled\": \"true\"}}'",
"oc apply -f examples/kaniko/kaniko.yaml 1",
"export REGISTRY=<url_of_registry> 1 export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2",
"tkn task start --param IMAGE=USDREGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir=\"\" --workspace name=dockerconfig,secret=USDDOCKERCONFIG_SECRET_NAME kaniko-chains",
"oc get tr <task_run_name> \\ 1 -o json | jq -r .metadata.annotations { \"chains.tekton.dev/signed\": \"true\", }",
"cosign verify --key cosign.pub USDREGISTRY/kaniko-chains cosign verify-attestation --key cosign.pub USDREGISTRY/kaniko-chains",
"rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3",
"rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq",
"{ \"query\": { \"match\": { \"kubernetes.flat_labels\": { \"query\": \"app_kubernetes_io/managed-by=tekton-pipelines\", \"type\": \"phrase\" } } } }",
"{ \"query\": { \"match\": { \"kubernetes.flat_labels\": { \"query\": \"tekton_dev/pipelineRun=\", \"type\": \"phrase\" } } } }",
"{ \"query\": { \"match\": { \"kubernetes.flat_labels\": { \"query\": \"tekton_dev/pipeline=\", \"type\": \"phrase\" } } } }",
"apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-sa-userid-1000 1 --- kind: SecurityContextConstraints metadata: annotations: name: pipelines-scc-userid-1000 2 allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true 3 allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD - KILL runAsUser: 4 type: MustRunAs uid: 1000 seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-userid-1000-clusterrole 5 rules: - apiGroups: - security.openshift.io resourceNames: - pipelines-scc-userid-1000 resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: pipelines-scc-userid-1000-rolebinding 6 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-userid-1000-clusterrole subjects: - kind: ServiceAccount name: pipelines-sa-userid-1000",
"oc get clustertask buildah -o yaml | yq '. |= (del .metadata |= with_entries(select(.key == \"name\" )))' | yq '.kind=\"Task\"' | yq '.metadata.name=\"buildah-as-user\"' | oc create -f -",
"oc edit task buildah-as-user",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: buildah-as-user spec: description: >- Buildah task builds source into a container image and then pushes it to a container registry. Buildah Task builds source into a container image using Project Atomic's Buildah build tool.It uses Buildah's support for building from Dockerfiles, using its buildah bud command.This command executes the directives in the Dockerfile to assemble a container image, then pushes that image to a container registry. params: - name: IMAGE description: Reference of the image buildah will produce. - name: BUILDER_IMAGE description: The location of the buildah builder image. default: registry.redhat.io/rhel8/buildah@sha256:99cae35f40c7ec050fed3765b2b27e0b8bbea2aa2da7c16408e2ca13c60ff8ee - name: STORAGE_DRIVER description: Set buildah storage driver default: vfs - name: DOCKERFILE description: Path to the Dockerfile to build. default: ./Dockerfile - name: CONTEXT description: Path to the directory to use as context. default: . - name: TLSVERIFY description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry) default: \"true\" - name: FORMAT description: The format of the built container, oci or docker default: \"oci\" - name: BUILD_EXTRA_ARGS description: Extra parameters passed for the build command when building images. default: \"\" - description: Extra parameters passed for the push command when pushing images. name: PUSH_EXTRA_ARGS type: string default: \"\" - description: Skip pushing the built image name: SKIP_PUSH type: string default: \"false\" results: - description: Digest of the image just built. name: IMAGE_DIGEST type: string workspaces: - name: source steps: - name: build securityContext: runAsUser: 1000 1 image: USD(params.BUILDER_IMAGE) workingDir: USD(workspaces.source.path) script: | echo \"Running as USER ID `id`\" 2 buildah --storage-driver=USD(params.STORAGE_DRIVER) bud USD(params.BUILD_EXTRA_ARGS) --format=USD(params.FORMAT) --tls-verify=USD(params.TLSVERIFY) --no-cache -f USD(params.DOCKERFILE) -t USD(params.IMAGE) USD(params.CONTEXT) [[ \"USD(params.SKIP_PUSH)\" == \"true\" ]] && echo \"Push skipped\" && exit 0 buildah --storage-driver=USD(params.STORAGE_DRIVER) push USD(params.PUSH_EXTRA_ARGS) --tls-verify=USD(params.TLSVERIFY) --digestfile USD(workspaces.source.path)/image-digest USD(params.IMAGE) docker://USD(params.IMAGE) cat USD(workspaces.source.path)/image-digest | tee /tekton/results/IMAGE_DIGEST volumeMounts: - name: varlibcontainers mountPath: /home/build/.local/share/containers 3 volumes: - name: varlibcontainers emptyDir: {}",
"apiVersion: v1 data: Dockerfile: | ARG BASE_IMG=registry.access.redhat.com/ubi8/ubi FROM USDBASE_IMG AS buildah-runner RUN dnf -y update && dnf -y install git && dnf clean all CMD git kind: ConfigMap metadata: name: dockerfile 1 --- apiVersion: tekton.dev/v1beta1 kind: TaskRun metadata: name: buildah-as-user-1000 spec: serviceAccountName: pipelines-sa-userid-1000 2 params: - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser taskRef: kind: Task name: buildah-as-user workspaces: - configMap: name: dockerfile 3 name: source",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: pipeline-buildah-as-user-1000 spec: params: - name: IMAGE - name: URL workspaces: - name: shared-workspace - name: sslcertdir optional: true tasks: - name: fetch-repository 1 taskRef: name: git-clone kind: ClusterTask workspaces: - name: output workspace: shared-workspace params: - name: url value: USD(params.URL) - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: buildah taskRef: name: buildah-as-user 2 runAfter: - fetch-repository workspaces: - name: source workspace: shared-workspace - name: sslcertdir workspace: sslcertdir params: - name: IMAGE value: USD(params.IMAGE) --- apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: name: pipelinerun-buildah-as-user-1000 spec: taskRunSpecs: - pipelineTaskName: buildah taskServiceAccountName: pipelines-sa-userid-1000 3 params: - name: URL value: https://github.com/openshift/pipelines-vote-api - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser pipelineRef: name: pipeline-buildah-as-user-1000 workspaces: - name: shared-workspace 4 volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/cicd/pipelines |
Chapter 1. About the Fuse Console | Chapter 1. About the Fuse Console The Red Hat Fuse Console is a web console based on HawtIO open source software. For a list of supported browsers, go to Supported Configurations . The Fuse Console provides a central interface to examine and manage the details of one or more deployed Fuse containers. You can also monitor Red Hat Fuse and system resources, perform updates, and start or stop services. The Fuse Console is available when you install Red Hat Fuse standalone or use Fuse on OpenShift. The integrations that you can view and manage in the Fuse Console depend on the plugins that are running. Possible plugins include: Camel JMX OSGI Runtime Logs | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/fuse-console-overview-all_fcopensift |
Chapter 7. Automating software updates in RHEL 9 | Chapter 7. Automating software updates in RHEL 9 DNF Automatic is an alternative command-line interface to DNF that is suited for automatic and regular execution by using systemd timers, cron jobs, and other such tools. DNF Automatic synchronizes package metadata as needed, checks for updates available, and then performs one of the following actions depending on how you configure the tool: Exit Download updated packages Download and apply the updates The outcome of the operation is then reported by a selected mechanism, such as the standard output or email. 7.1. Installing DNF Automatic To check and download package updates automatically and regularly, you can use the DNF Automatic tool that is provided by the dnf-automatic package. Procedure Install the dnf-automatic package: Verification Verify the successful installation by confirming the presence of the dnf-automatic package: 7.2. DNF Automatic configuration file By default, DNF Automatic uses /etc/dnf/automatic.conf as its configuration file to define its behavior. The configuration file is separated into the following topical sections: [commands] Sets the mode of operation of DNF Automatic . Warning Settings of the operation mode from the [commands] section are overridden by settings used by a systemd timer unit for all timer units except dnf-automatic.timer . [emitters] Defines how the results of DNF Automatic are reported. [command] Defines the command emitter configuration. [command_email] Provides the email emitter configuration for an external command used to send email. [email] Provides the email emitter configuration. [base] Overrides settings from the main configuration file of DNF . With the default settings of the /etc/dnf/automatic.conf file, DNF Automatic checks for available updates, downloads them, and reports the results to standard output. Additional resources dnf-automatic(8) man page on your system Overview of the systemd timer units included in the dnf-automatic package 7.3. Enabling DNF Automatic To run DNF Automatic once, you must start a systemd timer unit. However, if you want to run DNF Automatic periodically, you must enable the timer unit. You can use one of the timer units provided in the dnf-automatic package, or you can create a drop-in file for the timer unit to adjust the execution time. Prerequisites You specified the behavior of DNF Automatic by modifying the /etc/dnf/automatic.conf configuration file. Procedure To enable and execute a systemd timer unit immediately, enter: If you want to only enable the timer without executing it immediately, omit the --now option. You can use the following timers: dnf-automatic-download.timer : Downloads available updates. dnf-automatic-install.timer : Downloads and installs available updates. dnf-automatic-notifyonly.timer : Reports available updates. dnf-automatic.timer : Downloads, downloads and installs, or reports available updates. Verification Verify that the timer is enabled: Optional: Check when each of the timers on your system ran the last time: Additional resources dnf-automatic(8) man page on your system Overview of the systemd timer units included in the dnf-automatic package 7.4. Overview of the systemd timer units included in the dnf-automatic package The systemd timer units take precedence and override the settings in the /etc/dnf/automatic.conf configuration file when downloading and applying updates. 
For example, if you set download_updates = yes in the /etc/dnf/automatic.conf configuration file but have activated the dnf-automatic-notifyonly.timer unit, the packages will not be downloaded. Table 7.1. systemd timers included in the dnf-automatic package (for each timer unit, the entry lists its function and whether it overrides the apply_updates and download_updates settings in the [commands] section of the /etc/dnf/automatic.conf file)
dnf-automatic-download.timer: Downloads packages to cache and makes them available for updating. This timer unit does not install the updated packages. To perform the installation, you must run the dnf update command. Overrides the [commands] settings: Yes
dnf-automatic-install.timer: Downloads and installs updated packages. Overrides the [commands] settings: Yes
dnf-automatic-notifyonly.timer: Downloads only repository data to keep the repository cache up-to-date and notifies you about available updates. This timer unit does not download or install the updated packages. Overrides the [commands] settings: Yes
dnf-automatic.timer: The behavior of this timer when downloading and applying updates is specified by the settings in the /etc/dnf/automatic.conf configuration file. This timer downloads packages, but does not install them. Overrides the [commands] settings: No | [
"dnf install dnf-automatic",
"rpm -qi dnf-automatic",
"systemctl enable --now <timer_name>",
"systemctl status <systemd timer unit>",
"systemctl list-timers --all"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_automating-software-updates-in-rhel-9_managing-software-with-the-dnf-tool |
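The RHEL 9 chapter above notes that a drop-in file for a timer unit can adjust the execution time. A minimal sketch of that approach is shown here; the choice of dnf-automatic-install.timer, the Sunday 03:00 schedule, and the time.conf file name are illustrative assumptions, not values taken from the chapter:

# Override the packaged schedule of dnf-automatic-install.timer with a drop-in file
mkdir -p /etc/systemd/system/dnf-automatic-install.timer.d
cat > /etc/systemd/system/dnf-automatic-install.timer.d/time.conf << 'EOF'
[Timer]
# Clear the packaged OnCalendar value, then set the new schedule
OnCalendar=
OnCalendar=Sun 03:00
EOF
systemctl daemon-reload
systemctl restart dnf-automatic-install.timer

Restarting the timer after systemctl daemon-reload applies the override without waiting for a reboot; systemctl list-timers --all, as shown above, confirms the new schedule.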
Chapter 4. Updating Directory Server | Chapter 4. Updating Directory Server Red Hat frequently releases updated versions of Red Hat Directory Server 11. This section describes how to update the Directory Server packages. If you instead want to migrate Red Hat Directory Server 10 to version 11, see Chapter 5, Migrating Directory Server 10 to Directory Server 11 . Prerequisites Red Hat Directory Server 11 installed on the server. The system to update is registered to the Red Hat subscription management service. A valid Red Hat Directory Server subscription is attached to the server. 4.1. Updating the Directory Server packages Use the yum utility to update the module, which also automatically updates the related packages. The following procedure updates Directory Server from version 11.8 to 11.9. Disable the Directory Server 11.8 repository: Enable the Directory Server 11.9 repository: Update the Directory Server packages: This command updates Directory Server packages and their dependencies to version 11.9. During the update, the dirsrv service restarts automatically for all instances on the server. Additional resources For details about available Directory Server repositories, see What are the names of the Red Hat repositories that have to be enabled . | [
"subscription-manager repos --disable dirsrv-11.8-for-rhel-8-x86_64-rpms Repository 'dirsrv-11.8-for-rhel-8-x86_64-rpms' is disabled for this system.",
"subscription-manager repos --enable=dirsrv-11.9-for-rhel-8-x86_64-rpms Repository 'dirsrv-11.9-for-rhel-8-x86_64-rpms' is enabled for this system.",
"yum module update redhat-ds"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/installation_guide/assembly_updating-directory-server_installation-guide |
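As a quick check after the update described above, the enabled module stream and the state of the restarted instances can be inspected. This is a hedged sketch that assumes the redhat-ds module name from the procedure and the standard dirsrv.target unit shipped with Directory Server:

# Show which redhat-ds stream is enabled and installed
yum module list redhat-ds
# Confirm that the automatically restarted instances are running
systemctl status dirsrv.target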
Chapter 4. Distribution of content in RHEL 8 | Chapter 4. Distribution of content in RHEL 8 4.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Binary DVD ISO image is larger than 4.7 GB, and as a result, it might not fit on a single-layer DVD. A dual-layer DVD or USB key is recommended when using the Binary DVD ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 4.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 4.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. 
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module, with postgresql:10 as the default stream. Only one stream of a given module can be installed on the system at a time; different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document; a brief sketch of the workflow follows this entry. For a list of modules available in AppStream, see the Package manifest . 4.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, software installation is handled by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.1_release_notes/Distribution-of-content-in-RHEL-8
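A brief sketch of the module workflow referenced in the RHEL 8 entry above, using the postgresql module as the example; the commands are generic yum module subcommands rather than steps taken from this document:

# List available streams and their status
yum module list postgresql
# Enable the postgresql:10 stream and install its default profile
yum module enable postgresql:10
yum module install postgresql:10

Enabling a stream pins the module to that version; installing it pulls in the packages of the stream's default profile.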
Chapter 1. Installation methods | Chapter 1. Installation methods You can install an OpenShift Container Platform cluster on vSphere using a variety of different installation methods. Each method has qualities that can make them more suitable for different use cases, such as installing a cluster in a disconnected environment or installing a cluster with minimal configuration and provisioning. 1.1. Assisted Installer You can install OpenShift Container Platform with the Assisted Installer . This method requires no setup for the installer and is ideal for connected environments like vSphere. Installing with the Assisted Installer also provides integration with vSphere, enabling autoscaling. See Installing an on-premise cluster using the Assisted Installer for additional details. 1.2. Agent-based Installer You can install an OpenShift Container Platform cluster on vSphere using the Agent-based Installer. The Agent-based Installer can be used to boot an on-premises server in a disconnected environment by using a bootable image. With the Agent-based Installer, users also have the flexibility to provision infrastructure, customize network configurations, and customize installations within a disconnected environment. See Preparing to install with the Agent-based Installer for additional details. 1.3. Installer-provisioned infrastructure installation You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure. Installer-provisioned infrastructure allows the installation program to preconfigure and automate the provisioning of resources required by OpenShift Container Platform. Installer-provisioned infrastructure is useful for installing in environments with disconnected networks, where the installation program provisions the underlying infrastructure for the cluster. Installing a cluster on vSphere : You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with no customization. Installing a cluster on vSphere with customizations : You can install OpenShift Container Platform on vSphere by using installer-provisioned infrastructure installation with the default customization options. Installing a cluster on vSphere with network customizations : You can install OpenShift Container Platform on installer-provisioned vSphere infrastructure, with network customizations. You can customize your OpenShift Container Platform network configuration during installation, so that your cluster can coexist with your existing IP address allocations and adhere to your network requirements. Installing a cluster on vSphere in a restricted network : You can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content. You can use this method to deploy OpenShift Container Platform on an internal network that is not visible to the internet. 1.4. User-provisioned infrastructure installation You can install OpenShift Container Platform on vSphere by using user-provisioned infrastructure. User-provisioned infrastructure requires the user to provision all resources required by OpenShift Container Platform. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. Installing a cluster on vSphere with user-provisioned infrastructure : You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision. 
Installing a cluster on vSphere with network customizations with user-provisioned infrastructure : You can install OpenShift Container Platform on VMware vSphere infrastructure that you provision with customized network configuration options. Installing a cluster on vSphere in a restricted network with user-provisioned infrastructure : OpenShift Container Platform can be installed on VMware vSphere infrastructure that you provision in a restricted network. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 1.5. Additional resources Installation process | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_vsphere/preparing-to-install-on-vsphere |
Chapter 12. Configuring RBAC policies | Chapter 12. Configuring RBAC policies 12.1. Overview of RBAC policies Role-based access control (RBAC) policies in OpenStack Networking allow granular control over shared neutron networks. OpenStack Networking uses a RBAC table to control sharing of neutron networks among projects, allowing an administrator to control which projects are granted permission to attach instances to a network. As a result, cloud administrators can remove the ability for some projects to create networks and can instead allow them to attach to pre-existing networks that correspond to their project. 12.2. Creating RBAC policies This example procedure demonstrates how to use a role-based access control (RBAC) policy to grant a project access to a shared network. View the list of available networks: View the list of projects: Create a RBAC entry for the web-servers network that grants access to the auditors project ( 4b0b98f8c6c040f38ba4f7146e8680f5 ): As a result, users in the auditors project can connect instances to the web-servers network. 12.3. Reviewing RBAC policies Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac show command to view the details of a specific RBAC entry: 12.4. Deleting RBAC policies Run the openstack network rbac list command to retrieve the ID of your existing role-based access control (RBAC) policies: Run the openstack network rbac delete command to delete the RBAC policy, using the ID of the policy that you want to delete: 12.5. Granting RBAC policy access for external networks You can grant role-based access control (RBAC) policy access to external networks (networks with gateway interfaces attached) using the --action access_as_external parameter. Complete the steps in the following example procedure to create a RBAC policy for the web-servers network and grant access to the engineering project (c717f263785d4679b16a122516247deb): Create a new RBAC policy using the --action access_as_external option: As a result, users in the engineering project are able to view the network or connect instances to it: | [
"openstack network list +--------------------------------------+-------------+-------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+-------------------------------------------------------+ | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-2cb18fecc923 192.168.200.0/24 | | bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-82c965b0e050 10.0.0.0/24 | | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-7d4cf48aaab9 172.24.4.224/28 | +--------------------------------------+-------------+-------------------------------------------------------+",
"openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors | | 519e6344f82e4c079c8e2eabb690023b | services | | 80bf5732752a41128e612fe615c886c6 | demo | | 98a2f53c20ce4d50a40dac4a38016c69 | admin | +----------------------------------+----------+",
"openstack network rbac create --type network --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers Created a new rbac_policy: +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+",
"openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+",
"openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709 +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_shared | | id | 314004d0-2261-4d5e-bda7-0181fcf40709 | | object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | object_type | network | | target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 | | project_id | 98a2f53c20ce4d50a40dac4a38016c69 | +----------------+--------------------------------------+",
"openstack network rbac list +--------------------------------------+-------------+--------------------------------------+ | id | object_type | object_id | +--------------------------------------+-------------+--------------------------------------+ | 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | | bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | +--------------------------------------+-------------+--------------------------------------+",
"openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709 Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709",
"openstack network rbac create --type network --target-project c717f263785d4679b16a122516247deb --action access_as_external web-servers Created a new rbac_policy: +----------------+--------------------------------------+ | Field | Value | +----------------+--------------------------------------+ | action | access_as_external | | id | ddef112a-c092-4ac1-8914-c714a3d3ba08 | | object_id | 6e437ff0-d20f-4483-b627-c3749399bdca | | object_type | network | | target_project | c717f263785d4679b16a122516247deb | | project_id | c717f263785d4679b16a122516247deb | +----------------+--------------------------------------+",
"openstack network list +--------------------------------------+-------------+------------------------------------------------------+ | id | name | subnets | +--------------------------------------+-------------+------------------------------------------------------+ | 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-57eaeac9b904 192.168.10.0/24 | +--------------------------------------+-------------+------------------------------------------------------+"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/networking_guide/config-rbac-policies_rhosp-network |
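When the --target-project option in the chapter above needs a project ID, it can be looked up directly rather than scanned from the full project list; a minimal sketch using the auditors project from the example:

openstack project show auditors -f value -c id

The -f value -c id options reduce the output to the bare ID, which is convenient when scripting openstack network rbac create.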
4.342. vsftpd | 4.342. vsftpd 4.342.1. RHBA-2012:0001 - vsftpd bug fix update An updated vsftpd package that fixes one bug is now available for Red Hat Enterprise Linux 6. The vsftpd package includes a Very Secure FTP (File Transfer Protocol) daemon. Bug Fix BZ# 767108 The vsftpd daemon sets a value of the RLIMIT_AS variable during its initialization phase. With Red Hat Enterprise Linux 6.1, the RLIMIT_AS value (100 MB) became insufficient, which prevented LDAP users from authenticating to the system through vsftpd. With this update, the initial RLIMIT_AS value has been increased to 200 MB, and vsftpd can now be used for LDAP authentication as expected. All users of vsftpd are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/vsftpd
4.7. SELinux Contexts - Labeling Files | 4.7. SELinux Contexts - Labeling Files On systems running SELinux, all processes and files are labeled in a way that represents security-relevant information. This information is called the SELinux context. For files, this is viewed using the ls -Z command: In this example, SELinux provides a user ( unconfined_u ), a role ( object_r ), a type ( user_home_t ), and a level ( s0 ). This information is used to make access control decisions. On DAC systems, access is controlled based on Linux user and group IDs. SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first. Note By default, newly-created files and directories inherit the SELinux type of their parent directories. For example, when creating a new file in the /etc directory that is labeled with the etc_t type, the new file inherits the same type: SELinux provides multiple commands for managing the file system labeling, such as chcon , semanage fcontext , restorecon , and matchpathcon . 4.7.1. Temporary Changes: chcon The chcon command changes the SELinux context for files. However, changes made with the chcon command are not persistent across file-system relabels, or the execution of the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. When using chcon , users provide all or part of the SELinux context to change. An incorrect file type is a common cause of SELinux denying access. Quick Reference Run the chcon -t type file-name command to change the file type, where type is an SELinux type, such as httpd_sys_content_t , and file-name is a file or directory name: Run the chcon -R -t type directory-name command to change the type of the directory and its contents, where type is an SELinux type, such as httpd_sys_content_t , and directory-name is a directory name: Procedure 4.6. Changing a File's or Directory's Type The following procedure demonstrates changing the type, and no other attributes of the SELinux context. The example in this section works the same for directories, for example, if file1 was a directory. Change into your home directory. Create a new file and view its SELinux context: In this example, the SELinux context for file1 includes the SELinux unconfined_u user, object_r role, user_home_t type, and the s0 level. For a description of each part of the SELinux context, see Chapter 2, SELinux Contexts . Enter the following command to change the type to samba_share_t . The -t option only changes the type. Then view the change: Use the following command to restore the SELinux context for the file1 file. Use the -v option to view what changes: In this example, the type, samba_share_t , is restored to the correct, user_home_t type. When using targeted policy (the default SELinux policy in Red Hat Enterprise Linux), the restorecon command reads the files in the /etc/selinux/targeted/contexts/files/ directory, to see which SELinux context files should have. Procedure 4.7. Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by the Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root (instead of /var/www/html/ ): As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. 
The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory (and its contents) to httpd_sys_content_t : To restore the default SELinux contexts, use the restorecon utility as root: See the chcon (1) manual page for further information about chcon . Note Type Enforcement is the main permission control used in SELinux targeted policy. For the most part, SELinux users and roles can be ignored. 4.7.2. Persistent Changes: semanage fcontext The semanage fcontext command is used to change the SELinux context of files. To show contexts to newly created files and directories, enter the following command as root: Changes made by semanage fcontext are used by the following utilities. The setfiles utility is used when a file system is relabeled and the restorecon utility restores the default SELinux contexts. This means that changes made by semanage fcontext are persistent, even if the file system is relabeled. SELinux policy controls whether users are able to modify the SELinux context for any given file. Quick Reference To make SELinux context changes that survive a file system relabel: Enter the following command, remembering to use the full path to the file or directory: Use the restorecon utility to apply the context changes: Use of regular expressions with semanage fcontext For the semanage fcontext command to work correctly, you can use either a fully qualified path or Perl-compatible regular expressions ( PCRE ) . The only PCRE flag in use is PCRE2_DOTALL , which causes the . wildcard to match anything, including a new line. Strings representing paths are processed as bytes, meaning that non-ASCII characters are not matched by a single wildcard. Note that file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. Local file context modifications stored in file_contexts.local have a higher priority than those specified in policy modules. This means that whenever a match for a given file path is found in file_contexts.local , no other file-context definitions are considered. Important File-context definitions specified using the semanage fcontext command effectively override all other file-context definitions. All regular expressions should therefore be as specific as possible to avoid unintentionally impacting other parts of the file system. For more information on a type of regular expression used in file-context definitions and flags in effect, see the semanage-fcontext(8) man page. Procedure 4.8. Changing a File's or Directory 's Type The following example demonstrates changing a file's type, and no other attributes of the SELinux context. This example works the same for directories, for instance if file1 was a directory. As the root user, create a new file in the /etc directory. By default, newly-created files in /etc are labeled with the etc_t type: To list information about a directory, use the following command: As root, enter the following command to change the file1 type to samba_share_t . The -a option adds a new record, and the -t option defines a type ( samba_share_t ). Note that running this command does not directly change the type; file1 is still labeled with the etc_t type: As root, use the restorecon utility to change the type. Because semanage added an entry to file_contexts.local for /etc/file1 , restorecon changes the type to samba_share_t : Procedure 4.9. 
Changing a Directory and its Contents Types The following example demonstrates creating a new directory, and changing the directory's file type along with its contents to a type used by Apache HTTP Server. The configuration in this example is used if you want Apache HTTP Server to use a different document root instead of /var/www/html/ : As the root user, create a new web/ directory and then 3 empty files ( file1 , file2 , and file3 ) within this directory. The web/ directory and files in it are labeled with the default_t type: As root, enter the following command to change the type of the web/ directory and the files in it, to httpd_sys_content_t . The -a option adds a new record, and the -t option defines a type ( httpd_sys_content_t ). The "/web(/.*)?" regular expression causes semanage to apply changes to web/ , as well as the files in it. Note that running this command does not directly change the type; web/ and files in it are still labeled with the default_t type: The semanage fcontext -a -t httpd_sys_content_t "/web(/.*)?" command adds the following entry to /etc/selinux/targeted/contexts/files/file_contexts.local : As root, use the restorecon utility to change the type of web/ , as well as all files in it. The -R is for recursive, which means all files and directories under web/ are labeled with the httpd_sys_content_t type. Since semanage added an entry to file.contexts.local for /web(/.*)? , restorecon changes the types to httpd_sys_content_t : Note that by default, newly-created files and directories inherit the SELinux type of their parent directories. Procedure 4.10. Deleting an added Context The following example demonstrates adding and removing an SELinux context. If the context is part of a regular expression, for example, /web(/.*)? , use quotation marks around the regular expression: To remove the context, as root, enter the following command, where file-name | directory-name is the first part in file_contexts.local : The following is an example of a context in file_contexts.local : With the first part being test . To prevent the test/ directory from being labeled with the httpd_sys_content_t after running restorecon , or after a file system relabel, enter the following command as root to delete the context from file_contexts.local : As root, use the restorecon utility to restore the default SELinux context. For further information about semanage , see the semanage (8) and semanage-fcontext (8) manual pages. Important When changing the SELinux context with semanage fcontext -a , use the full path to the file or directory to avoid files being mislabeled after a file system relabel, or after the restorecon command is run. 4.7.3. How File Context is Determined Determining file context is based on file-context definitions, which are specified in the system security policy (the .fc files). Based on the system policy, semanage generates file_contexts.homedirs and file_contexts files. System administrators can customize file-context definitions using the semanage fcontext command. Such customizations are stored in the file_contexts.local file. When a labeling utility, such as matchpathcon or restorecon , is determining the proper label for a given path, it searches for local changes first ( file_contexts.local ). If the utility does not find a matching pattern, it searches the file_contexts.homedirs file and finally the file_contexts file. 
However, whenever a match for a given file path is found, the search ends and the utility does not look for any additional file-context definitions. This means that home directory-related file contexts have higher priority than the rest, and local customizations override the system policy. File-context definitions specified by system policy (contents of file_contexts.homedirs and file_contexts files) are sorted by the length of the stem (prefix of the path before any wildcard) before evaluation. This means that the most specific path is chosen. However, file-context definitions specified using semanage fcontext are evaluated in reverse order to how they were defined: the latest entry is evaluated first regardless of the stem length. For more information on: changing the context of a file by using chcon , see Section 4.7.1, "Temporary Changes: chcon" . changing and adding a file-context definition by using semanage fcontext , see Section 4.7.2, "Persistent Changes: semanage fcontext" . changing and adding a file-context definition through a system-policy operation, see Section 4.10, "Maintaining SELinux Labels" or Section 4.12, "Prioritizing and Disabling SELinux Policy Modules" . | [
"~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1",
"~]USD ls -dZ - /etc drwxr-xr-x. root root system_u:object_r: etc_t :s0 /etc",
"~]# touch /etc/file1",
"~]# ls -lZ /etc/file1 -rw-r--r--. root root unconfined_u:object_r: etc_t :s0 /etc/file1",
"~]USD chcon -t httpd_sys_content_t file-name",
"~]USD chcon -R -t httpd_sys_content_t directory-name",
"~]USD touch file1",
"~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1",
"~]USD chcon -t samba_share_t file1",
"~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:samba_share_t:s0 file1",
"~]USD restorecon -v file1 restorecon reset file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:user_home_t:s0",
"~]# mkdir /web",
"~]# touch /web/file{1,2,3}",
"~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web",
"~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3",
"~]# chcon -R -t httpd_sys_content_t /web/",
"~]# ls -dZ /web/ drwxr-xr-x root root unconfined_u:object_r:httpd_sys_content_t:s0 /web/",
"~]# ls -lZ /web/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3",
"~]# restorecon -R -v /web/ restorecon reset /web context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:httpd_sys_content_t:s0->system_u:object_r:default_t:s0",
"~]# semanage fcontext -C -l",
"~]# semanage fcontext -a options file-name | directory-name",
"~]# restorecon -v file-name | directory-name",
"~]# touch /etc/file1",
"~]USD ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1",
"~]USD ls -dZ directory_name",
"~]# semanage fcontext -a -t samba_share_t /etc/file1",
"~]# ls -Z /etc/file1 -rw-r--r-- root root unconfined_u:object_r:etc_t:s0 /etc/file1",
"~]USD semanage fcontext -C -l /etc/file1 unconfined_u:object_r:samba_share_t:s0",
"~]# restorecon -v /etc/file1 restorecon reset /etc/file1 context unconfined_u:object_r:etc_t:s0->system_u:object_r:samba_share_t:s0",
"~]# mkdir /web",
"~]# touch /web/file{1,2,3}",
"~]# ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web",
"~]# ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3",
"~]# semanage fcontext -a -t httpd_sys_content_t \"/web(/.*)?\"",
"~]USD ls -dZ /web drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /web",
"~]USD ls -lZ /web -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:default_t:s0 file3",
"/web(/.*)? system_u:object_r:httpd_sys_content_t:s0",
"~]# restorecon -R -v /web restorecon reset /web context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file2 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file3 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /web/file1 context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0",
"~]# semanage fcontext -d \"/web(/.*)?\"",
"~]# semanage fcontext -d file-name | directory-name",
"/test system_u:object_r:httpd_sys_content_t:s0",
"~]# semanage fcontext -d /test"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Working_with_SELinux-SELinux_Contexts_Labeling_Files |
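The matchpathcon utility named at the start of the SELinux section above can verify labels without changing them; a short sketch using the /web directory from the examples:

# Print the context the policy expects for each path
matchpathcon /web /web/file1
# Verify whether the current context of a file matches the expected one
matchpathcon -V /web/file1

Without options, matchpathcon prints the default context defined by the file-context configuration; with -V it reports whether the actual label matches and what it should be if it does not.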
Chapter 11. Integrating Spring Boot with Kubernetes | Chapter 11. Integrating Spring Boot with Kubernetes The Spring Cloud Kubernetes plugin currently enables you to integrate the following features of Spring Boot and Kubernetes: Spring Boot Externalized Configuration Kubernetes ConfigMap Kubernetes Secrets 11.1. Spring Boot externalized configuration In Spring Boot, externalized configuration is the mechanism that enables you to inject configuration values from external sources into Java code. In your Java code, injection is typically enabled by annotating with the @Value annotation (to inject into a single field) or the @ConfigurationProperties annotation (to inject into multiple properties on a Java bean class). The configuration data can come from a wide variety of different sources (or property sources ). In particular, configuration properties are often set in a project's application.properties file (or application.yaml file, if you prefer). 11.1.1. Kubernetes ConfigMap A Kubernetes ConfigMap is a mechanism that can provide configuration data to a deployed application. A ConfigMap object is typically defined in a YAML file, which is then uploaded to the Kubernetes cluster, making the configuration data available to deployed applications. 11.1.2. Kubernetes Secrets A Kubernetes Secrets is a mechanism for providing sensitive data (such as passwords, certificates, and so on) to deployed applications. 11.1.3. Spring Cloud Kubernetes plugin The Spring Cloud Kubernetes plug-in implements the integration between Kubernetes and Spring Boot. In principle, you could access the configuration data from a ConfigMap using the Kubernetes API. It is much more convenient, however, to integrate Kubernetes ConfigMap directly with the Spring Boot externalized configuration mechanism, so that Kubernetes ConfigMaps behave as an alternative property source for Spring Boot configuration. This is essentially what the Spring Cloud Kubernetes plug-in provides. 11.1.4. Enabling Spring Boot with Kubernetes integration You can enable Kubernetes integration by adding it as a Maven dependency to pom.xml file. Procedure Enable the Kubernetes integration by adding the following Maven dependency to the pom.xml file of your Spring Boot Maven project. <project ...> ... <dependencies> ... <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-config</artifactId> </dependency> ... </dependencies> ... </project> To complete the integration, Add some annotations to your Java source code Create a Kubernetes ConfigMap object Modify the OpenShift service account permissions to allow your application to read the ConfigMap object. Additional resources For more details see Running Tutorial for ConfigMap Property Source . 11.2. Running tutorial for ConfigMap Property Source The following tutorial allows you to experiment with setting Kubernetes Secrets and ConfigMaps. Enable the Spring Cloud Kubernetes plug-in as explained in the Enabling Spring Boot with Kubernetes Integration to integrate Kubernetes configuration objects with Spring Boot Externalized Configuration. 11.2.1. Running Spring Boot Camel Config quickstart The following tutorial is based on the spring-boot-camel-config-archetype Maven archetype, which enables you to set up Kubernetes Secrets and ConfigMaps. Procedure Open a new shell prompt and enter the following Maven command to create a simple Camel Spring Boot project. 
The archetype plug-in switches to interactive mode to prompt you for the remaining fields: When prompted, enter org.example.fis for the groupId value and fuse713-configmap for the artifactId value. Accept the defaults for the remaining fields. Log in to OpenShift and switch to the OpenShift project where you will deploy your application. For example, to log in as the developer user and deploy to the openshift project, enter the following commands: At the command line, change to the directory of the new fuse713-configmap project and create the Secret object for this application. Note It is necessary to create the Secret object before you deploy the application, otherwise the deployed container enters a wait state until the Secret becomes available. If you subsequently create the Secret, the container will come out of the wait state. For more information on how to set up Secret Object, see Setting up Secret . Build and deploy the quickstart application. From the top level of the fuse713-configmap project, enter: View the application log as follows. Navigate to the OpenShift web console in your browser ( https://OPENSHIFT_IP_ADDR , replace OPENSHIFT_IP_ADDR with the IP address of the cluster) and log in to the console with your credentials (for example, with username developer and password, developer ). In the left hand side panel, expand Home . Click Status to view the Project Status page. All the existing applications in the selected namespace (for example, openshift) are displayed. Click fuse713-configmap to view the Overview information page for the quickstart. In the left hand side panel, expand Workloads . Click Pods and then click fuse713-configmap-xxxx . The pod details for the application are displayed. Click on the Logs tab to view the application logs. The default recipient list, which is configured in src/main/resources/application.properties , sends the generated messages to two dummy endpoints: direct:async-queue and direct:file . This causes messages like the following to be written to the application log: Before you can update the configuration of the fuse713-configmap application using a ConfigMap object, you must give the fuse713-configmap application permission to view data from the OpenShift ApiServer. Enter the following command to give the view permission to the fuse713-configmap application's service account: Note A service account is specified using the syntax system:serviceaccount:PROJECT_NAME:SERVICE_ACCOUNT_NAME . The fis-config deployment descriptor defines the SERVICE_ACCOUNT_NAME to be qs-camel-config . To see the live reload feature in action, create a ConfigMap object as follows: The new ConfigMap overrides the recipient list of the Camel route in the running application, configuring it to send the generated messages to three dummy endpoints: direct:async-queue , direct:file , and direct:mail . For more information about ConfigMap object, see Setting up ConfigMap . This causes messages like the following to be written to the application log: 11.2.2. Configuration properties bean A configuration properties bean is a regular Java bean that can receive configuration settings by injection. It provides the basic interface between your Java code and the external configuration mechanisms. Externalized Configuration and Bean Registry Following image shows how Spring Boot Externalized Configuration works in the spring-boot-camel-config quickstart. 
The configuration mechanism has the following main parts: Property Sources Provides property settings for injection into configuration. The default property source is the application.properties file for the application, and this can optionally be overridden by a ConfigMap object or a Secret object. Configuration Properties bean Receives configuraton updates from the property sources. A configuration properties bean is a Java bean decorated by the @Configuration and @ConfigurationProperties annotations. Spring bean registry With the requisite annotations, a configuration properties bean is registered in the Spring bean registry. Integration with Camel bean registry The Camel bean registry is automatically integrated with the Spring bean registry, so that registered Spring beans can be referenced in your Camel routes. QuickstartConfiguration class The configuration properties bean for the fuse713-configmap project is defined as the QuickstartConfiguration Java class (under the src/main/java/org/example/fis/ directory), as follows: package org.example.fis; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.context.annotation.Configuration; @Configuration 1 @ConfigurationProperties(prefix = "quickstart") 2 public class QuickstartConfiguration { /** * A comma-separated list of routes to use as recipients for messages. */ private String recipients; 3 /** * The username to use when connecting to the async queue (simulation) */ private String queueUsername; 4 /** * The password to use when connecting to the async queue (simulation) */ private String queuePassword; 5 // Setters and Getters for Bean properties // NOT SHOWN ... } 1 The @Configuration annotation causes the QuickstartConfiguration class to be instantiated and registered in Spring as the bean with ID, quickstartConfiguration . This automatically makes the bean accessible from Camel. For example, the target-route-queue route is able to access the queueUserName property using the Camel syntax USD{bean:quickstartConfiguration?method=getQueueUsername} . 2 The @ConfigurationProperties annotation defines a prefix, quickstart , that must be used when defining property values in a property source. For example, a properties file would reference the recipients property as quickstart.recipients . 3 The recipient property is injectable from property sources. 4 The queueUsername property is injectable from property sources. 5 The queuePassword property is injectable from property sources. 11.2.3. Setting up Secret The Kubernetes Secret in this quickstart is set up in the standard way, apart from one additional required step: the Spring Cloud Kubernetes plug-in must be configured with the mount paths of the Secrets, so that it can read the Secrets at run time. To set up the Secret: Create a Sample Secret Object Configure volume mount for the Secret Configure spring-cloud-kubernetes to read Secret properties Sample Secret object The quickstart project provides a sample Secret, sample-secret.yml , as follows. Property values in Secret objects are always base64 encoded (use the base64 command-line utility). When the Secret is mounted in a pod's filesystem, the values are automatically decoded back into plain text. sample-secret.yml file apiVersion: v1 kind: Secret metadata: 1 name: camel-config type: Opaque data: # The username is 'myuser' quickstart.queue-username: bXl1c2VyCg== 2 quickstart.queue-password: MWYyZDFlMmU2N2Rm 3 1 metadata.name: Identifies the Secret. 
Other parts of the OpenShift system use this identifier to reference the Secret. 2 quickstart.queue-username: Is meant to be injected into the queueUsername property of the quickstartConfiguration bean. The value must be base64 encoded. 3 quickstart.queue-password: Is meant to be injected into the queuePassword property of the quickstartConfiguration bean. The value must be base64 encoded. Note Kubernetes does not allow you to define property names in CamelCase (it requires property names to be all lowercase). To work around this limitation, use the hyphenated form queue-username , which Spring Boot matches with queueUsername . This takes advantage of Spring Boot's relaxed binding rules for externalized configuration. Configure volume mount for the Secret The application must be configured to load the Secret at run time, by configuring the Secret as a volume mount. After the application starts, the Secret properties then become available at the specified location in the filesystem. The deployment.yml file for the application is located under src/main/jkube/ directory, which defines the volume mount for the Secret. deployment.yml file spec: template: spec: serviceAccountName: "qs-camel-config" volumes: 1 - name: "camel-config" secret: # The secret must be created before deploying this application secretName: "camel-config" containers: - volumeMounts: 2 - name: "camel-config" readOnly: true # Mount the secret where spring-cloud-kubernetes is configured to read it # see src/main/resources/bootstrap.yml mountPath: "/etc/secrets/camel-config" resources: # requests: # cpu: "0.2" # memory: 256Mi # limits: # cpu: "1.0" # memory: 256Mi env: - name: SPRING_APPLICATION_JSON value: '{"server":{"undertow":{"io-threads":1, "worker-threads":2 }}}' 1 In the volumes section, the deployment declares a new volume named camel-config , which references the Secret named camel-config . 2 In the volumeMounts section, the deployment declares a new volume mount, which references the camel-config volume and specifies that the Secret volume should be mounted to the path /etc/secrets/camel-config in the pod's filesystem. Configuring spring-cloud-kubernetes to read Secret properties To integrate secrets with Spring Boot externalized configuration, the Spring Cloud Kubernetes plug-in must be configured with the secret's mount path. Spring Cloud Kubernetes reads the secrets from the specified location and makes them available to Spring Boot as property sources. The Spring Cloud Kubernetes plug-in is configured by settings in the bootstrap.yml file, located under src/main/resources in the quickstart project. bootstrap.yml file # Startup configuration of Spring-cloud-kubernetes spring: application: name: camel-config cloud: kubernetes: reload: # Enable live reload on ConfigMap change (disabled for Secrets by default) enabled: true secrets: paths: /etc/secrets/camel-config The spring.cloud.kubernetes.secrets.paths property specifies the list of paths of secrets volume mounts in the pod. Note A bootstrap.properties file (or bootstrap.yml file) behaves similarly to an application.properties file, but it is loaded at an earlier phase of application start-up. It is more reliable to set the properties relating to the Spring Cloud Kubernetes plug-in in the bootstrap.properties file. 11.2.4. 
Setting up ConfigMap In addition to creating a ConfigMap object and setting the view permission appropriately, the integration with Spring Cloud Kubernetes requires you to match the ConfigMap's metadata.name with the value of the spring.application.name property configured in the project's bootstrap.yml file. To set up the ConfigMap: Create Sample ConfigMap Object Set up the view permission Configure the Spring Cloud Kubernetes plug-in Sample ConfigMap object The quickstart project provides a sample ConfigMap, sample-configmap.yml . kind: ConfigMap apiVersion: v1 metadata: 1 # Must match the 'spring.application.name' property of the application name: camel-config data: application.properties: | 2 # Override the configuration properties here quickstart.recipients=direct:async-queue,direct:file,direct:mail 3 1 metadata.name: Identifies the ConfigMap. Other parts of the OpenShift system use this identifier to reference the ConfigMap. 2 data.application.properties: This section lists property settings that can override settings from the original application.properties file that was deployed with the application. 3 quickstart.recipients: Is meant to be injected into the recipients property of the quickstartConfiguration bean. Setting the view permission As shown in the deployment.yml file for the Secret, the serviceAccountName is set to qs-camel-config in the project's deployment.yml file. Hence, you need to enter the following command to enable the view permission on the quickstart application (assuming that it deploys into the test project namespace): Configuring the Spring Cloud Kubernetes plug-in The Spring Cloud Kubernetes plug-in is configured by the following settings in the bootstrap.yml file. spring.application.name This value must match the metadata.name of the ConfigMap object (for example, as defined in sample-configmap.yml in the quickstart project). It defaults to application . spring.cloud.kubernetes.reload.enabled Setting this to true enables dynamic reloading of ConfigMap objects. For more details about the supported properties, see PropertySource Reload Configuration Properties . 11.3. Using ConfigMap PropertySource Kubernetes has the notion of ConfigMap for passing configuration to the application. The Spring cloud Kubernetes plug-in provides integration with ConfigMap to make config maps accessible by Spring Boot. The ConfigMap PropertySource when enabled will look up Kubernetes for a ConfigMap named after the application (see spring.application.name ). If the map is found it will read its data and do the following: Apply Individual Properties Apply Property Named application.yaml Apply Property Named application.properties 11.3.1. Applying individual properties Let's assume that we have a Spring Boot application named demo that uses properties to read its thread pool configuration. pool.size.core pool.size.max This can be externalized to config map in YAML format: kind: ConfigMap apiVersion: v1 metadata: name: demo data: pool.size.core: 1 pool.size.max: 16 11.3.2. Applying application.yaml ConfigMap property Individual properties work fine for most cases but sometimes we find YAML is more convenient. In this case we use a single property named application.yaml and embed our YAML inside it: kind: ConfigMap apiVersion: v1 metadata: name: demo data: application.yaml: |- pool: size: core: 1 max:16 11.3.3. Applying application.properties ConfigMap property You can also define the ConfigMap properties in the style of a Spring Boot application.properties file. 
In this case we use a single property named application.properties and list the property settings inside it: kind: ConfigMap apiVersion: v1 metadata: name: demo data: application.properties: |- pool.size.core: 1 pool.size.max: 16 11.3.4. Deploying a ConfigMap To deploy a ConfigMap and make it accessible to a Spring Boot application, perform the following steps. Procedure In your Spring Boot application, use the externalized configuration mechanism to access the ConfigMap property source. For example, by annotating a Java bean with the @Configuration annotation, it becomes possible for the bean's property values to be injected by a ConfigMap. In your project's bootstrap.properties file (or bootstrap.yaml file), set the spring.application.name property to match the name of the ConfigMap. Enable the view permission on the service account that is associated with your application (by default, this would be the service account called default ). For example, to add the view permission to the default service account: 11.4. Using Secrets PropertySource Kubernetes has the notion of Secrets for storing sensitive data such as password, OAuth tokens, etc. The Spring cloud Kubernetes plug-in provides integration with Secrets to make secrets accessible by Spring Boot. The Secrets property source when enabled will look up Kubernetes for Secrets from the following sources. If the secrets are found, their data is made available to the application. Reading recursively from secrets mounts Named after the application (see spring.application.name ) Matching some labels Please note that, by default, consuming Secrets via API (points 2 and 3 above) is not enabled . 11.4.1. Example of setting Secrets Let's assume that we have a Spring Boot application named demo that uses properties to read its ActiveMQ and PostreSQL configuration. These secrets can be externalized to Secrets in YAML format: ActiveMQ Secrets apiVersion: v1 kind: Secret metadata: name: activemq-secrets labels: broker: activemq type: Opaque data: amq.username: bXl1c2VyCg== amq.password: MWYyZDFlMmU2N2Rm PostreSQL Secrets apiVersion: v1 kind: Secret metadata: name: postgres-secrets labels: db: postgres type: Opaque data: pg.username: dXNlcgo= pg.password: cGdhZG1pbgo= 11.4.2. Consuming the Secrets You can select the Secrets to consume in a number of ways: By listing the directories where the secrets are mapped: If you have all the secrets mapped to a common root, you can set them like this: By setting a named secret: By defining a list of labels: 11.4.3. Configuration properties for Secrets PropertySource You can use the following properties to configure the Secrets property source: spring.cloud.kubernetes.secrets.enabled Enable the Secrets property source. Type is Boolean and default is true . spring.cloud.kubernetes.secrets.name Sets the name of the secret to look up. Type is String and default is USD{spring.application.name} . spring.cloud.kubernetes.secrets.labels Sets the labels used to lookup secrets. This property behaves as defined by Map-based binding . Type is java.util.Map and default is null . spring.cloud.kubernetes.secrets.paths Sets the paths where secrets are mounted. This property behaves as defined by Collection-based binding . Type is java.util.List and default is null . spring.cloud.kubernetes.secrets.enableApi Enable/disable consuming secrets via APIs. Type is Boolean and default is false . Note Access to secrets via API may be restricted for security reasons - the preferred way is to mount a secret to the POD. 11.5. 
Using PropertySource Reload Some applications may need to detect changes on external property sources and update their internal status to reflect the new configuration. The reload feature of Spring Cloud Kubernetes is able to trigger an application reload when a related ConfigMap or Secret change. 11.5.1. Enabling PropertySource Reload The PropertySource reload feature of Spring Cloud Kubernetes is disabled by default. Procedure Navigate to src/main/resources directory of the quickstart project and open the bootstrap.yml file. Change the configuration property spring.cloud.kubernetes.reload.enabled=true . 11.5.2. Levels of PropertySource Reload The following levels of reload are supported for property spring.cloud.kubernetes.reload.strategy : refresh (default) only configuration beans annotated with @ConfigurationProperties or @RefreshScope are reloaded. This reload level leverages the refresh feature of Spring Cloud Context. Note The PropertySource reload feature can only be used for simple properties (that is, not collections) when the reload strategy is set to refresh . Properties backed by collections must not be changed at runtime. restart_context the whole Spring ApplicationContext is gracefully restarted. Beans are recreated with the new configuration. shutdown the Spring ApplicationContext is shut down to activate a restart of the container. When using this level, make sure that the lifecycle of all non-daemon threads is bound to the ApplicationContext and that a replication controller or replica set is configured to restart the pod. 11.5.3. Example of PropertySource Reload The following example explains what happens when the reload feature is enabled. Procedure Assume that the reload feature is enabled with default settings ( refresh mode). The following bean will be refreshed when the config map changes: @Configuration @ConfigurationProperties(prefix = "bean") public class MyConfig { private String message = "a message that can be changed live"; // getter and setters } To see the changes that are happening, create another bean that prints the message periodically as shown below. @Component public class MyBean { @Autowired private MyConfig config; @Scheduled(fixedDelay = 5000) public void hello() { System.out.println("The message is: " + config.getMessage()); } } You can change the message printed by the application by using a ConfigMap as shown below. apiVersion: v1 kind: ConfigMap metadata: name: reload-example data: application.properties: |- bean.message=Hello World! Any change to the property named bean.message in the Config Map associated with the pod will be reflected in the output of the program. 11.5.4. PropertySource Reload operating modes The reload feature supports two operating modes: event (default) watches for changes in ConfigMaps or secrets using the Kubernetes API (web socket). Any event will produce a re-check on the configuration and a reload in case of changes. The view role on the service account is required in order to listen for config map changes. A higher level role (eg. edit ) is required for secrets (secrets are not monitored by default). polling re-creates the configuration periodically from config maps and secrets to see if it has changed. The polling period can be configured using the property spring.cloud.kubernetes.reload.period and defaults to 15 seconds . It requires the same role as the monitored property source. This means, for example, that using polling on file mounted secret sources does not require particular privileges. 11.5.5. 
PropertySource Reload configuration properties The following properties can be used to configure the reloading feature: spring.cloud.kubernetes.reload.enabled Enables monitoring of property sources and configuration reload. Type is Boolean and default is false . spring.cloud.kubernetes.reload.monitoring-config-maps Allows monitoring of changes in config maps. Type is Boolean and default is true . spring.cloud.kubernetes.reload.monitoring-secrets Allows monitoring of changes in secrets. Type is Boolean and default is false . spring.cloud.kubernetes.reload.strategy The strategy to use when firing a reload ( refresh , restart_context , shutdown ). Type is Enum and default is refresh . spring.cloud.kubernetes.reload.mode Specifies how to listen for changes in property sources ( event , polling ). Type is Enum and default is event . spring.cloud.kubernetes.reload.period The period in milliseconds for verifying changes when using the polling strategy. Type is Long and default is 15000 . Note the following points: The spring.cloud.kubernetes.reload.* properties should not be used in ConfigMaps or Secrets. Changing such properties at runtime may lead to unexpected results. Deleting a property or the whole config map does not restore the original state of the beans when using the refresh level. | [
"<project ...> <dependencies> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-starter-kubernetes-config</artifactId> </dependency> </dependencies> </project>",
"mvn org.apache.maven.plugins:maven-archetype-plugin:2.4:generate -DarchetypeCatalog=https://maven.repository.redhat.com/ga/io/fabric8/archetypes/archetypes-catalog/2.2.0.fuse-7_13_0-00014-redhat-00001/archetypes-catalog-2.2.0.fuse-7_13_0-00014-redhat-00001-archetype-catalog.xml -DarchetypeGroupId=org.jboss.fuse.fis.archetypes -DarchetypeArtifactId=spring-boot-camel-config-archetype -DarchetypeVersion=2.2.0.fuse-7_13_0-00014-redhat-00001",
"Define value for property 'groupId': : org.example.fis Define value for property 'artifactId': : fuse713-configmap Define value for property 'version': 1.0-SNAPSHOT: : Define value for property 'package': org.example.fis: : Confirm properties configuration: groupId: org.example.fis artifactId: fuse713-configmap version: 1.0-SNAPSHOT package: org.example.fis Y: : Y",
"login -u developer -p developer project openshift",
"cd fuse713-configmap create -f sample-secret.yml",
"mvn oc:deploy -Popenshift",
"5:44:57.377 [Camel (camel) thread #0 - timer://order] INFO generate-order-route - Generating message message-44, sending to the recipient list 15:44:57.378 [Camel (camel) thread #0 - timer://order] INFO target-route-queue - ----> message-44 pushed to an async queue (simulation) 15:44:57.379 [Camel (camel) thread #0 - timer://order] INFO target-route-queue - ----> Using username 'myuser' for the async queue 15:44:57.380 [Camel (camel) thread #0 - timer://order] INFO target-route--file - ----> message-44 written to a file",
"policy add-role-to-user view system:serviceaccount:openshift:qs-camel-config",
"create -f sample-configmap.yml",
"16:25:24.121 [Camel (camel) thread #0 - timer://order] INFO generate-order-route - Generating message message-9, sending to the recipient list 16:25:24.124 [Camel (camel) thread #0 - timer://order] INFO target-route-queue - ----> message-9 pushed to an async queue (simulation) 16:25:24.125 [Camel (camel) thread #0 - timer://order] INFO target-route-queue - ----> Using username 'myuser' for the async queue 16:25:24.125 [Camel (camel) thread #0 - timer://order] INFO target-route--file - ----> message-9 written to a file (simulation) 16:25:24.126 [Camel (camel) thread #0 - timer://order] INFO target-route--mail - ----> message-9 sent via mail",
"package org.example.fis; import org.springframework.boot.context.properties.ConfigurationProperties; import org.springframework.context.annotation.Configuration; @Configuration 1 @ConfigurationProperties(prefix = \"quickstart\") 2 public class QuickstartConfiguration { /** * A comma-separated list of routes to use as recipients for messages. */ private String recipients; 3 /** * The username to use when connecting to the async queue (simulation) */ private String queueUsername; 4 /** * The password to use when connecting to the async queue (simulation) */ private String queuePassword; 5 // Setters and Getters for Bean properties // NOT SHOWN }",
"apiVersion: v1 kind: Secret metadata: 1 name: camel-config type: Opaque data: # The username is 'myuser' quickstart.queue-username: bXl1c2VyCg== 2 quickstart.queue-password: MWYyZDFlMmU2N2Rm 3",
"spec: template: spec: serviceAccountName: \"qs-camel-config\" volumes: 1 - name: \"camel-config\" secret: # The secret must be created before deploying this application secretName: \"camel-config\" containers: - volumeMounts: 2 - name: \"camel-config\" readOnly: true # Mount the secret where spring-cloud-kubernetes is configured to read it # see src/main/resources/bootstrap.yml mountPath: \"/etc/secrets/camel-config\" resources: requests: cpu: \"0.2\" memory: 256Mi limits: cpu: \"1.0\" memory: 256Mi env: - name: SPRING_APPLICATION_JSON value: '{\"server\":{\"undertow\":{\"io-threads\":1, \"worker-threads\":2 }}}'",
"Startup configuration of Spring-cloud-kubernetes spring: application: name: camel-config cloud: kubernetes: reload: # Enable live reload on ConfigMap change (disabled for Secrets by default) enabled: true secrets: paths: /etc/secrets/camel-config",
"kind: ConfigMap apiVersion: v1 metadata: 1 # Must match the 'spring.application.name' property of the application name: camel-config data: application.properties: | 2 # Override the configuration properties here quickstart.recipients=direct:async-queue,direct:file,direct:mail 3",
"policy add-role-to-user view system:serviceaccount:test:qs-camel-config",
"kind: ConfigMap apiVersion: v1 metadata: name: demo data: pool.size.core: 1 pool.size.max: 16",
"kind: ConfigMap apiVersion: v1 metadata: name: demo data: application.yaml: |- pool: size: core: 1 max:16",
"kind: ConfigMap apiVersion: v1 metadata: name: demo data: application.properties: |- pool.size.core: 1 pool.size.max: 16",
"policy add-role-to-user view system:serviceaccount:USD(oc project -q):default -n USD(oc project -q)",
"amq.username amq.password pg.username pg.password",
"apiVersion: v1 kind: Secret metadata: name: activemq-secrets labels: broker: activemq type: Opaque data: amq.username: bXl1c2VyCg== amq.password: MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: postgres-secrets labels: db: postgres type: Opaque data: pg.username: dXNlcgo= pg.password: cGdhZG1pbgo=",
"-Dspring.cloud.kubernetes.secrets.paths=/etc/secrets/activemq,etc/secrets/postgres",
"-Dspring.cloud.kubernetes.secrets.paths=/etc/secrets",
"-Dspring.cloud.kubernetes.secrets.name=postgres-secrets",
"-Dspring.cloud.kubernetes.secrets.labels.broker=activemq -Dspring.cloud.kubernetes.secrets.labels.db=postgres",
"@Configuration @ConfigurationProperties(prefix = \"bean\") public class MyConfig { private String message = \"a message that can be changed live\"; // getter and setters }",
"@Component public class MyBean { @Autowired private MyConfig config; @Scheduled(fixedDelay = 5000) public void hello() { System.out.println(\"The message is: \" + config.getMessage()); } }",
"apiVersion: v1 kind: ConfigMap metadata: name: reload-example data: application.properties: |- bean.message=Hello World!"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/fuse_on_openshift_guide/integrate-spring-boot-with-kubernetes |
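A note on the Secrets and reload examples above: the -D system properties shown in section 11.4.2 can also be expressed in the application's bootstrap.yml. The following is a minimal, illustrative sketch only — the property names come from the configuration tables above, while the values (application name, mount paths, reload settings) are the example values from this chapter and would need to be adapted to your deployment:

# Illustrative bootstrap.yml sketch; values are placeholders taken from the examples above.
spring:
  application:
    name: demo
  cloud:
    kubernetes:
      secrets:
        enabled: true
        # Read secrets from the pod mounts (preferred over the API).
        paths:
          - /etc/secrets/activemq
          - /etc/secrets/postgres
      reload:
        enabled: true
        strategy: refresh
        mode: polling
        period: 15000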
5.6. RAID and Other Disk Devices | 5.6. RAID and Other Disk Devices Some storage technology requires special consideration when using Red Hat Enterprise Linux. Generally, it is important to understand how these technologies are configured, visible to Red Hat Enterprise Linux, and how support for them might have changed between major versions. 5.6.1. Hardware RAID RAID (Redundant Array of Independent Disks) allows a group, or array, of drives to act as a single device. Configure any RAID functions provided by the mainboard of your computer, or attached controller cards, before you begin the installation process. Each active RAID array appears as one drive within Red Hat Enterprise Linux. 5.6.2. Software RAID On systems with more than one hard drive, you can use the Red Hat Enterprise Linux installation program to operate several of the drives as a Linux software RAID array. With a software RAID array, RAID functions are controlled by the operating system rather than dedicated hardware. These functions are explained in detail in Section 8.14.4, "Manual Partitioning" . Note When a pre-existing RAID array's member devices are all unpartitioned disks/drives, the installer will treat the array itself as a disk and will not provide a way to remove the array. 5.6.3. USB Disks You can connect and configure external USB storage after installation. Most such devices are recognized by the kernel and available for use at that time. Some USB drives might not be recognized by the installation program. If configuration of these disks at installation time is not vital, disconnect them to avoid potential problems. 5.6.4. NVDIMM devices To use a Non-Volatile Dual In-line Memory Module (NVDIMM) device as storage, the following conditions must be satisfied: Version of Red Hat Enterprise Linux is 7.6 or later. The architecture of the system is Intel 64 or AMD64. The device is configured to sector mode. Anaconda can reconfigure NVDIMM devices to this mode. The device must be supported by the nd_pmem driver. Booting from a NVDIMM device is possible under the following additional conditions: The system uses UEFI. The device must be supported by firmware available on the system, or by a UEFI driver. The UEFI driver may be loaded from an option ROM of the device itself. The device must be made available under a namespace. To take advantage of the high performance of NVDIMM devices during booting, place the /boot and /boot/efi directories on the device. See Section 8.14.4, "Manual Partitioning" for more information. Note that the Execute-in-place (XIP) feature of NVDIMM devices is not supported during booting and the kernel is loaded into conventional memory. 5.6.5. Considerations for Intel BIOS RAID Sets Red Hat Enterprise Linux 7 uses mdraid for installation onto Intel BIOS RAID sets. These sets are detected automatically during the boot process and their device node paths can change from boot to boot. For this reason, local modifications to /etc/fstab , /etc/crypttab or other configuration files which refer to devices by their device node paths might not work in Red Hat Enterprise Linux 7. Therefore, you should replace device node paths (such as /dev/sda ) with file system labels or device UUIDs instead. You can find the file system labels and device UUIDs using the blkid command. 5.6.6. 
Considerations for Intel BIOS iSCSI Remote Boot If you are installing using Intel iSCSI Remote Boot, all attached iSCSI storage devices must be disabled; otherwise, the installation will succeed but the installed system will not boot. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-planning-partitioning-raid-x86
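To make the advice in section 5.6.5 concrete, the following sketch shows how an /etc/fstab entry might be switched from a device node path to a UUID. The device name and UUID below are placeholders; run blkid on your own system to obtain the real values:

# Print the label and UUID of a file system (output will differ on your system).
blkid /dev/sda1
# Example: /dev/sda1: UUID="3e6be9de-8139-4a83-9106-a43f08d823a6" TYPE="xfs"

# /etc/fstab entry that refers to the file system by UUID instead of /dev/sda1:
UUID=3e6be9de-8139-4a83-9106-a43f08d823a6  /boot  xfs  defaults  0 0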
Chapter 2. Understanding disconnected installation mirroring | Chapter 2. Understanding disconnected installation mirroring You can use a mirror registry for disconnected installations and to ensure that your clusters only use container images that satisfy your organization's controls on external content. Before you install a cluster on infrastructure that you provision in a disconnected environment, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 2.1. Mirroring images for a disconnected installation through the Agent-based Installer You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation by using the oc-mirror plugin v2 2.2. About mirroring the OpenShift Container Platform image repository for a disconnected registry To use mirror images for a disconnected installation with the Agent-based Installer, you must modify the install-config.yaml file. You can mirror the release image by using the output of either the oc adm release mirror or oc mirror command. This is dependent on which command you used to set up the mirror registry. The following example shows the output of the oc adm release mirror command. USD oc adm release mirror Example output To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release The following example shows part of the imageContentSourcePolicy.yaml file generated by the oc-mirror plugin. The file can be found in the results directory, for example oc-mirror-workspace/results-1682697932/ . Example imageContentSourcePolicy.yaml file spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release 2.2.1. Configuring the Agent-based Installer to use mirrored images You must use the output of either the oc adm release mirror command or the oc-mirror plugin to configure the Agent-based Installer to use mirrored images. Procedure If you used the oc-mirror plugin to mirror your release images: Open the imageContentSourcePolicy.yaml located in the results directory, for example oc-mirror-workspace/results-1682697932/ . Copy the text in the repositoryDigestMirrors section of the yaml file. If you used the oc adm release mirror command to mirror your release images: Copy the text in the imageContentSources section of the command output. Paste the copied text into the imageContentSources field of the install-config.yaml file. Add the certificate file used for the mirror registry to the additionalTrustBundle field of the yaml file. Important The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. 
Example install-config.yaml file additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- If you are using GitOps ZTP manifests: add the registries.conf and ca-bundle.crt files to the mirror path to add the mirror configuration in the agent ISO image. Note You can create the registries.conf file from the output of either the oc adm release mirror command or the oc mirror plugin. The format of the /etc/containers/registries.conf file has changed. It is now version 2 and in TOML format. Example registries.conf file [[registry]] location = "registry.ci.openshift.org/ocp/release" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" [[registry]] location = "quay.io/openshift-release-dev/ocp-v4.0-art-dev" mirror-by-digest-only = true [[registry.mirror]] location = "virthost.ostest.test.metalkube.org:5000/localimages/local-release-image" 2.3. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer | [
"oc adm release mirror",
"To use the new mirrored repository to install, add the following section to the install-config.yaml: imageContentSources: mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: quay.io/openshift-release-dev/ocp-v4.0-art-dev mirrors: virthost.ostest.test.metalkube.org:5000/localimages/local-release-image source: registry.ci.openshift.org/ocp/release",
"spec: repositoryDigestMirrors: - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - mirrors: - virthost.ostest.test.metalkube.org:5000/openshift/release-images source: quay.io/openshift-release-dev/ocp-release",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"[[registry]] location = \"registry.ci.openshift.org/ocp/release\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\" [[registry]] location = \"quay.io/openshift-release-dev/ocp-v4.0-art-dev\" mirror-by-digest-only = true [[registry.mirror]] location = \"virthost.ostest.test.metalkube.org:5000/localimages/local-release-image\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_an_on-premise_cluster_with_the_agent-based_installer/understanding-disconnected-installation-mirroring |
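Putting the pieces above together, the imageContentSources and additionalTrustBundle sections live in the same install-config.yaml. The fragment below is a sketch only, reusing the example mirror host from the oc adm release mirror output and a placeholder certificate; substitute your own registry and certificate contents:

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <contents of the certificate file used for your mirror registry>
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
- mirrors:
  - virthost.ostest.test.metalkube.org:5000/localimages/local-release-image
  source: registry.ci.openshift.org/ocp/release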
Chapter 5. Ceph metrics for Datadog | Chapter 5. Ceph metrics for Datadog The Datadog agent collects the following metrics from Ceph. These metrics may be included in custom dashboards and in alerts. Metric Name Description ceph.commit_latency_ms The time taken to commit an operation to the journal. ceph.apply_latency_ms Time taken to flush an update to disks. ceph.op_per_sec The number of I/O operations per second for given pool. ceph.read_bytes_sec The bytes per second being read. ceph.write_bytes_sec The bytes per second being written. ceph.num_osds The number of known storage daemons. ceph.num_in_osds The number of participating storage daemons. ceph.num_up_osds The number of online storage daemons. ceph.num_pgs The number of placement groups available. ceph.num_mons The number of monitor daemons. ceph.aggregate_pct_used The overall capacity usage metric. ceph.total_objects The object count from the underlying object store. ceph.num_objects The object count for a given pool. ceph.read_bytes The per-pool read bytes. ceph.write_bytes The per-pool write bytes. ceph.num_pools The number of pools. ceph.pgstate.active_clean The number of active+clean placement groups. ceph.read_op_per_sec The per-pool read operations per second. ceph.write_op_per_sec The per-pool write operations per second. ceph.num_near_full_osds The number of nearly full OSDs. ceph.num_full_osds The number of full OSDs. ceph.osd.pct_used The percentage used of full or near-full OSDs. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/monitoring_ceph_with_datadog_guide/ceph-metrics-for-datadog_datadog |
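As an illustration of how these metrics might feed an alert, a Datadog metric monitor could watch for full OSDs. The query below is a sketch written in standard Datadog monitor query syntax; it is not taken from this guide, and the threshold and evaluation window are arbitrary examples:

avg(last_5m):avg:ceph.num_full_osds{*} > 0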
2.3. Directory Providers | 2.3. Directory Providers The following directory providers are supported in Infinispan Query: RAM Directory Provider Filesystem Directory Provider Infinispan Directory Provider 2.3.1. RAM Directory Provider Storing the global index locally in Red Hat JBoss Data Grid's Query Module allows each node to maintain its own index, using Lucene's in-memory or filesystem-based index directory. The following example demonstrates an in-memory, RAM-based index store: 2.3.2. Filesystem Directory Provider To configure the storage of indexes, set the appropriate properties when enabling indexing in the JBoss Data Grid configuration. This example shows a disk-based index store: Example 2.1. Disk-based Index Store 2.3.3. Infinispan Directory Provider In addition to the Lucene directory implementations, Red Hat JBoss Data Grid also ships with an infinispan-directory module. Note Red Hat JBoss Data Grid only supports infinispan-directory in the context of the Querying feature, not as a standalone feature. The infinispan-directory allows Lucene to store indexes within the distributed data grid. This allows the indexes to be distributed, stored in-memory, and optionally written to disk using the cache store for durability. Sharing the same index instance using the Infinispan Directory Provider introduces a write contention point, as only one instance can write to the same index at the same time. Important By default, exclusive_index_use is set to true , as this provides major performance increases; however, if external applications access the same index in use by Infinispan, this property must be set to false . The default value is recommended for the majority of applications and use cases due to the performance increases, so only change this if absolutely necessary. InfinispanIndexManager provides a default back end that sends all updates to the master node, which later applies the updates to the index. In case of master node failure, updates can be lost, leaving the cache and the index unsynchronized. Non-default back ends are not supported. Example 2.2. Enable Shared Indexes When using an indexed, clustered cache, ensure that the caches containing the index data are also clustered, as described in Section 2.5.2, "Tuning Infinispan Directory" .
"<namedCache name=\"indexesInMemory\"> <indexing enabled=\"true\"> <properties> <property name=\"default.directory_provider\" value=\"ram\"/> </properties> </indexing> </namedCache>",
"<namedCache name=\"indexesOnDisk\"> <indexing enabled=\"true\"> <properties> <property name=\"default.directory_provider\" value=\"filesystem\"/> <property name=\"default.indexBase\" value=\"/tmp/ispn_index\"/> </properties> </indexing> </namedCache>",
"<namedCache name=\"globalSharedIndexes\"> <clustering mode=\"local\"/> <indexing enabled=\"true\"> <properties> <property name=\"default.directory_provider\" value=\"infinispan\"/> <property name=\"default.indexmanager\" value=\"org.infinispan.query.indexmanager.InfinispanIndexManager\" /> </properties> </indexing> </namedCache>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-directory_providers |
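The Important box above states that exclusive_index_use must be set to false when external applications access the same index that Infinispan uses. A sketch of how that could look, assuming the setting is passed with the same default. prefix convention as the other indexing properties shown in this section, is:

<namedCache name="globalSharedIndexes">
    <clustering mode="local"/>
    <indexing enabled="true">
        <properties>
            <property name="default.directory_provider" value="infinispan"/>
            <property name="default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager" />
            <!-- Assumption: exclusive index use is disabled through the default. prefix,
                 matching the naming convention of the other properties above. -->
            <property name="default.exclusive_index_use" value="false"/>
        </properties>
    </indexing>
</namedCache>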
Chapter 4. Creating Data Grid clusters | Chapter 4. Creating Data Grid clusters Create Data Grid clusters running on OpenShift with the Infinispan CR or with the native Data Grid CLI plugin for oc clients. 4.1. Infinispan custom resource (CR) Data Grid Operator adds a new Custom Resource (CR) of type Infinispan that lets you handle Data Grid clusters as complex units on OpenShift. Data Grid Operator listens for Infinispan Custom Resources (CR) that you use to instantiate and configure Data Grid clusters and manage OpenShift resources, such as StatefulSets and Services. Infinispan CR Field Description apiVersion Declares the version of the Infinispan API. kind Declares the Infinispan CR. metadata.name Specifies a name for your Data Grid cluster. spec.replicas Specifies the number of pods in your Data Grid cluster. spec.service.type Specifies the type of Data Grid service to create. spec.version Specifies the Data Grid Server version of your cluster. 4.2. Creating Data Grid clusters Create Data Grid clusters with the native CLI plugin, kubectl-infinispan . Prerequisites Install Data Grid Operator. Have kubectl-infinispan on your PATH . Procedure Run the infinispan create cluster command. For example, create a Data Grid cluster with three pods as follows: Tip Add the --version argument to control the Data Grid version of your cluster. For example, --version=8.4.6-1 . If you don't specify the version, Data Grid Operator creates the cluster with the latest supported Data Grid version. Watch Data Grid Operator create the Data Grid pods. Next steps After you create a Data Grid cluster, use the oc client to apply changes to the Infinispan CR and configure your Data Grid service. You can also delete Data Grid clusters with kubectl-infinispan and re-create them as required. Additional resources kubectl-infinispan command reference 4.3. Verifying Data Grid cluster views Confirm that Data Grid pods have successfully formed clusters. Prerequisites Create at least one Data Grid cluster. Procedure Retrieve the Infinispan CR for Data Grid Operator. The response indicates that Data Grid pods have received clustered views, as in the following example: Tip Do the following for automated scripts: Retrieving cluster view from logs You can also get the cluster view from Data Grid logs as follows: 4.4. Modifying Data Grid clusters Configure Data Grid clusters by providing Data Grid Operator with a custom Infinispan CR. Prerequisites Install Data Grid Operator. Create at least one Data Grid cluster. Have an oc client. Procedure Create a YAML file that defines your Infinispan CR. For example, create a my_infinispan.yaml file that changes the number of Data Grid pods to two: Apply your Infinispan CR. Watch Data Grid Operator scale the Data Grid pods. 4.5. Stopping and starting Data Grid clusters Stop and start Data Grid pods in a graceful, ordered fashion to correctly preserve cluster state. Clusters of Data Grid service pods must restart with the same number of pods that existed before shutdown. This allows Data Grid to restore the distribution of data across the cluster. After Data Grid Operator fully restarts the cluster, you can safely add and remove pods. Procedure Change the spec.replicas field to 0 to stop the Data Grid cluster. spec: replicas: 0 Ensure you have the correct number of pods before you restart the cluster. Change the spec.replicas field to the same number of pods to restart the Data Grid cluster.
"apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid",
"infinispan create cluster --replicas=3 -Pservice.type=DataGrid infinispan",
"get pods -w",
"infinispan delete cluster infinispan",
"get infinispan -o yaml",
"conditions: - message: 'View: [infinispan-0, infinispan-1]' status: \"True\" type: wellFormed",
"wait --for condition=wellFormed --timeout=240s infinispan/infinispan",
"logs infinispan-0 | grep ISPN000094",
"INFO [org.infinispan.CLUSTER] (MSC service thread 1-2) ISPN000094: Received new cluster view for channel infinispan: [infinispan-0|0] (1) [infinispan-0] INFO [org.infinispan.CLUSTER] (jgroups-3,infinispan-0) ISPN000094: Received new cluster view for channel infinispan: [infinispan-0|1] (2) [infinispan-0, infinispan-1]",
"cat > cr_minimal.yaml<<EOF apiVersion: infinispan.org/v1 kind: Infinispan metadata: name: infinispan spec: replicas: 2 version: <Data Grid_version> service: type: DataGrid EOF",
"apply -f my_infinispan.yaml",
"get pods -w",
"spec: replicas: 0",
"get infinispan infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'",
"spec: replicas: 6"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/creating-clusters |
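Section 4.5 describes editing the spec.replicas field by hand. As one possible way to make the same change from a script, the sketch below uses oc patch with a JSON merge patch; it assumes the cluster is named infinispan, as in the examples above, and is not part of the documented procedure:

# Gracefully stop the cluster by scaling to zero pods.
oc patch infinispan infinispan --type merge -p '{"spec":{"replicas":0}}'

# Check how many pods the cluster expects at restart.
oc get infinispan infinispan -o=jsonpath='{.status.replicasWantedAtRestart}'

# Restart with the same number of pods (6 in the stopping and starting example above).
oc patch infinispan infinispan --type merge -p '{"spec":{"replicas":6}}'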
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Jira ticket: Log in to Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialog. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/providing-feedback-on-red-hat-documentation_rhodf
Release notes for Eclipse Temurin 17.0.7 | Release notes for Eclipse Temurin 17.0.7 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.7/index |
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/using_your_subscription
Part V. Known Issues | Part V. Known Issues This part documents known problems in Red Hat Enterprise Linux 7.6. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/known-issues |
Chapter 7. Installing on Azure Stack Hub | Chapter 7. Installing on Azure Stack Hub 7.1. Preparing to install on Azure Stack Hub 7.1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have installed Azure Stack Hub version 2008 or later. 7.1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 7.1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 7.1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 7.1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 7.1.4. Next steps Configuring an Azure Stack Hub account 7.2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 7.2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit.
By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 7.2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 7.2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 7.2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. 
Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 7.2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster quickly on Azure Stack Hub . Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates . 7.3.
Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure In OpenShift Container Platform version 4.11, you can install a cluster on Microsoft Azure Stack Hub with an installer-provisioned infrastructure. However, you must manually configure the install-config.yaml file to specify values that are specific to Azure Stack Hub. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 7.3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 7.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.3.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 7.3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select Azure as the cloud provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.3.6. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must manually create your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. 7.3.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.3.6.1.1.
Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.3.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.3.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . 
The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . 
Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.3.6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 7.4. Additional Azure Stack Hub parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . compute.platform.azure.type Defines the azure instance type for compute machines. String controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS . controlPlane.platform.azure.type Defines the azure instance type for control plane machines. String platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.armEndpoint The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.region The name of your Azure Stack Hub local region. String platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. 
If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud clusterOSImage The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 7.3.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzureStackCloud 16 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{"auths": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 9 11 13 16 17 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 10 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 12 The name of the resource group that contains the DNS zone for your base domain. 14 The name of your Azure Stack Hub local region. 15 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 20 The pull secret required to authenticate your cluster. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 7.3.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-gate" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating cloud provider resources with manually maintained credentials 7.3.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 7.3.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. 
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.3.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.3.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.3.12. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 7.3.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . 
After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.3.14. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 7.4. Installing a cluster on Azure Stack Hub with network customizations In OpenShift Container Platform version 4.11, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Azure Stack Hub. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 7.4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 7.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent: $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519. Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.4.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. $ export COMPRESSED_VHD_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. $ curl -O -L ${COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob by using the az CLI or the web portal.
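For reference, the decompression and upload steps might look like the following minimal az CLI sketch. It assumes that the az CLI is already registered against your Azure Stack Hub environment and authenticated, and it uses placeholder names for the local file, storage account, and container; substitute values from your environment.
Decompress the downloaded file; its name matches the final path segment of ${COMPRESSED_VHD_URL}:
$ gzip -d <local_vhd_file>.vhd.gz
Create a container that allows anonymous read access to blobs:
$ az storage container create --name vhd --account-name <storage_account> --public-access blob
Upload the decompressed VHD; VHD images are typically stored as page blobs:
$ az storage blob upload --account-name <storage_account> --container-name vhd --name rhcos.vhd --file <local_vhd_file>.vhd --type page
The resulting blob URL is the value that you later supply as clusterOSImage in the install-config.yaml file.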
7.4.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select Azure as the cloud provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.4.6. Manually creating the installation configuration file When installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must manually create your installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. 
Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 7.4.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide a customized install-config.yaml installation configuration file that describes the details for your environment. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.4.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.5. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.4.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.6. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) cluster network provider to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI provider for all-Linux networks. OVNKubernetes is a CNI provider for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. 
For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.4.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.7. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 and vCurrent . v4.11 enables the baremetal Operator, the marketplace Operator, and the openshift-samples content. vCurrent installs the recommended set of capabilities for the current version of OpenShift Container Platform. The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . Valid values are baremetal , marketplace and openshift-samples . You may specify multiple capabilities in this parameter. String array cgroupsV2 Enables Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroup version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. true compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). 
String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms and IBM Cloud VPC. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.4.6.1.4. Additional Azure Stack Hub configuration parameters Additional Azure configuration parameters are described in the following table: Table 7.8. Additional Azure Stack Hub parameters Parameter Description Values compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . compute.platform.azure.type Defines the azure instance type for compute machines. String controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS . controlPlane.platform.azure.type Defines the azure instance type for control plane machines. String platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. standard_LRS or premium_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.armEndpoint The URL of the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. String platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.region The name of your Azure Stack Hub local region. String platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. 
If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. AzureStackCloud clusterOSImage The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. String, for example, https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 7.4.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzureStackCloud 16 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{"auths": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 9 11 13 16 17 19 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 10 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 
12 The name of the resource group that contains the DNS zone for your base domain. 14 The name of your Azure Stack Hub local region. 15 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 18 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 20 The pull secret required to authenticate your cluster. 21 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 22 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 23 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 7.4.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use by running the following command: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 \ --credentials-requests \ --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component-secret> namespace: <component-namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-gate" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 7.4.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 7.4.9. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. 
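For reference, the Phase 1 fields might appear together in the install-config.yaml file as in the following minimal sketch, which uses the default values from the installation configuration parameter tables; adjust the network type and CIDR ranges for your environment:
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16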
Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 7.4.10. Specifying advanced network configuration You can use advanced network configuration for your cluster network provider to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 7.4.11. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.4.11.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . 
spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 7.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. 
For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 7.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 7.13. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.14. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. 
If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 7.15. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.4.12. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR cannot overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. 
For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 7.4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.4.14. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.4.16. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation.
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console . 7.4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.11, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 7.4.18. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . 7.5. Installing a cluster on Azure Stack Hub using ARM templates In OpenShift Container Platform version 4.11, you can install a cluster on Microsoft Azure Stack Hub by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 7.5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was tested using version 2.28.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. 
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 7.5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.11, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.5.3. Configuring your Azure Stack Hub project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure Stack Hub resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure Stack Hub restricts, see Resolve reserved resource name errors in the Azure documentation. 7.5.3.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. 
The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 7.5.3.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . You can view Azure's DNS solution by visiting this example for creating DNS zones . 7.5.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 7.5.3.4. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 7.5.3.5. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. 
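For example, with a hypothetical Azure Stack Hub deployment whose Azure Resource Manager endpoint is https://management.local.azurestack.external , the registration command might look like the following sketch:
USD az cloud register -n AzureStackCloud --endpoint-resource-manager https://management.local.azurestack.external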
Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter matches the subscription that you want to use. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> \ 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 7.5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on a local computer. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select Azure as the cloud provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 7.5.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5.6. Creating the installation files for Azure Stack Hub To install OpenShift Container Platform on Microsoft Azure Stack Hub using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You manually create the install-config.yaml file, and then generate and customize the Kubernetes manifests and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 7.5.6.1. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications for Azure Stack Hub: Set the replicas parameter to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . The compute machines will be provisioned manually later. Update the platform.azure section of the install-config.yaml file to configure your Azure Stack Hub configuration: platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4 1 Specify the Azure Resource Manager endpoint of your Azure Stack Hub environment, like https://management.local.azurestack.external . 2 Specify the name of the resource group that contains the DNS zone for your base domain. 
3 Specify the Azure Stack Hub environment, which is used to configure the Azure SDK with the appropriate Azure API endpoints. 4 Specify the name of your Azure Stack Hub region. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 7.5.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 6 baseDomainResourceGroupName: resource_group 7 region: azure_stack_local_region 8 resourceGroupName: existing_resource_group 9 outboundType: Loadbalancer cloudName: AzureStackCloud 10 pullSecret: '{"auths": ...}' 11 fips: false 12 additionalTrustBundle: | 13 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 14 1 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 2 4 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 5 Specify the name of the cluster. 6 Specify the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 7 Specify the name of the resource group that contains the DNS zone for your base domain. 8 Specify the name of your Azure Stack Hub local region. 9 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 10 Specify the Azure Stack Hub environment as your target platform. 11 Specify the pull secret required to authenticate your cluster. 12 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 
13 If your Azure Stack Hub environment uses an internal certificate authority (CA), add the necessary certificate bundle in .pem format. 14 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.5.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. 
If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.5.6.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure Stack Hub. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into. This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 7.5.6.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If your Azure Stack Hub environment uses an internal certificate authority (CA), you must update the .spec.trustedCA.name field in the <installation_directory>/manifests/cluster-proxy-01-config.yaml file to use user-ca-bundle : ... spec: trustedCA: name: user-ca-bundle ... Later, you must update your bootstrap ignition to include the CA. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exists as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . 
This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. Manually create your cloud credentials. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider. Sample secrets.yaml file: apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region} Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-gate: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. To find CredentialsRequest objects with the TechPreviewNoUpgrade annotation, run the following command: USD grep "release.openshift.io/feature-gate" * Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade Create a cco-configmap.yaml file in the manifests directory with the Cloud Credential Operator (CCO) disabled: Sample ConfigMap object apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: "true" data: disabled: "true" To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. 
The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 7.5.6.6. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 
2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 7.5.7. Creating the Azure resource group You must create a Microsoft Azure resource group . This is used during the installation of your OpenShift Container Platform cluster on Azure Stack Hub. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} 7.5.8. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. 
Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Download the compressed RHCOS VHD file locally: USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. You can delete the VHD file after you upload it. Copy the local VHD to a blob: USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 7.5.9. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure Stack Hub's datacenter DNS integration is used, so you will create a DNS zone. Note The DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a DNS zone that already exists. You can learn more about configuring a DNS zone in Azure Stack Hub by visiting that section. 7.5.10. Creating a VNet in Azure Stack Hub You must create a virtual network (VNet) in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 7.5.10.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 7.1. 
01_vnet.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/01_vnet.json[] 7.5.11. Deploying the RHCOS cluster image for the Azure Stack Hub infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure Stack Hub for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 7.5.11.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 7.2. 02_storage.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/02_storage.json[] 7.5.12. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 7.5.12.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 7.16. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 7.17. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 7.18. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 7.5.13. 
Creating networking and load balancing components in Azure Stack Hub You must configure networking and load balancing in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Load balancing requires the following DNS records: An api DNS record for the API public load balancer in the DNS zone. An api-int DNS record for the API internal load balancer in the DNS zone. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record and an api-int DNS record. When creating the API DNS records, the USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Export the following variable: USD export PRIVATE_IP=`az network lb frontend-ip show -g "USDRESOURCE_GROUP" --lb-name "USD{INFRA_ID}-internal" -n internal-lb-ip --query "privateIpAddress" -o tsv` Create the api DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 Create the api-int DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z "USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" -n api-int -a USD{PRIVATE_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api-int DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60 7.5.13.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 7.3. 03_infra.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/03_infra.json[] 7.5.14. 
Creating the bootstrap machine in Azure Stack Hub You must create the bootstrap machine in Microsoft Azure Stack Hub to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: If your environment uses a public certificate authority (CA), run this command: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` If your environment uses an internal CA, you must add your PEM encoded bundle to the bootstrap ignition stub so that your bootstrap virtual machine can pull the bootstrap ignition from the storage account. Run the following commands, which assume your CA is in a file called CA.pem : USD export CA="data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\n')" USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url "USDBOOTSTRAP_URL" --arg cert "USDCA" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create --verbose -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 7.5.14.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 7.4. 04_bootstrap.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/04_bootstrap.json[] 7.5.15. Creating the control plane machines in Azure Stack Hub You must create the control plane machines in Microsoft Azure Stack Hub for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. 
If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the control plane nodes (also known as the master nodes). 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 7.5.15.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 7.5. 05_masters.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/05_masters.json[] 7.5.16. Wait for bootstrap completion and remove bootstrap resources in Azure Stack Hub After you create all of the required infrastructure in Microsoft Azure Stack Hub, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. 
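If you script this part of the installation, you can gate the bootstrap teardown on the exit status of the wait-for command so that the resources listed in the next step are removed only after bootstrapping has actually finished. This is a hypothetical sketch built from the commands already shown, assuming INFRA_ID and RESOURCE_GROUP are still exported from the earlier steps:
#!/usr/bin/env bash
# Hypothetical guard: proceed to bootstrap teardown only if bootstrapping succeeded.
set -euo pipefail
if ./openshift-install wait-for bootstrap-complete --dir "<installation_directory>" --log-level info; then
  echo "Bootstrap complete; the ${INFRA_ID}-bootstrap resources in ${RESOURCE_GROUP} can be deleted (see the next step)."
else
  echo "Bootstrap did not complete; keep the bootstrap machine for troubleshooting." >&2
  exit 1
fi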
Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 7.5.17. Creating additional worker machines in Azure Stack Hub You can create worker machines in Microsoft Azure Stack Hub for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. If you do not use the provided ARM template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 7.5.17.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 7.6. 06_workers.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/06_workers.json[] 7.5.18.
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.11. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture in the Product Variant drop-down menu. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.11 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.11 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.5.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added.
You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
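If you prefer to script the approvals shown above and in the step that follows rather than approving each CSR by hand, a small helper built from the same oc commands can be run once after the client CSRs appear and again after the serving CSRs appear. This is a sketch only; review what it approves before relying on it:
# Hypothetical helper: approve every CSR that has no status yet, then show node readiness.
pending="$(oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}')"
if [ -n "${pending}" ]; then
  echo "${pending}" | xargs oc adm certificate approve
else
  echo "No pending CSRs found."
fi
oc get nodes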
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 7.5.21. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure Stack Hub by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the DNS zone. If you are adding this cluster to a new DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 7.5.22. Completing an Azure Stack Hub installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure Stack Hub user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure Stack Hub infrastructure. 
Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.6. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 7.6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. While you can uninstall the cluster using the copy of the installation program that was used to deploy it, using OpenShift Container Platform version 4.13 or later is recommended. The removal of service principals is dependent on the Microsoft Azure AD Graph API. Using version 4.13 or later of the installation program ensures that service principals are removed without the need for manual intervention, if and when Microsoft decides to retire the Azure AD Graph API. Procedure On the computer that you used to install the cluster, go to the directory that contains the installation program, and run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzureStackCloud 16 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{\"auths\": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 9 10 baseDomainResourceGroupName: resource_group 11 12 region: azure_stack_local_region 13 14 resourceGroupName: existing_resource_group 15 outboundType: Loadbalancer cloudName: AzureStackCloud 16 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 17 18 pullSecret: '{\"auths\": ...}' 19 20 fips: false 21 sshKey: ssh-ed25519 AAAA... 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component-credentials-request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component-secret> namespace: <component-namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component-secret> namespace: <component-namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4",
"apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 6 baseDomainResourceGroupName: resource_group 7 region: azure_stack_local_region 8 resourceGroupName: existing_resource_group 9 outboundType: Loadbalancer cloudName: AzureStackCloud 10 pullSecret: '{\"auths\": ...}' 11 fips: false 12 additionalTrustBundle: | 13 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 14",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"spec: trustedCA: name: user-ca-bundle",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region}",
"grep \"release.openshift.io/feature-gate\" *",
"0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-gate: TechPreviewNoUpgrade",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: \"true\" data: disabled: \"true\"",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". βββ auth β βββ kubeadmin-password β βββ kubeconfig βββ bootstrap.ign βββ master.ign βββ metadata.json βββ worker.ign",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.11.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/01_vnet.json[]",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/02_storage.json[]",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"export PRIVATE_IP=`az network lb frontend-ip show -g \"USDRESOURCE_GROUP\" --lb-name \"USD{INFRA_ID}-internal\" -n internal-lb-ip --query \"privateIpAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z \"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" -n api-int -a USD{PRIVATE_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/03_infra.json[]",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"export CA=\"data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\\n')\"",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url \"USDBOOTSTRAP_URL\" --arg cert \"USDCA\" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create --verbose -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/04_bootstrap.json[]",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/05_masters.json[]",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.11/upi/azurestack/06_workers.json[]",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.24.0 master-1 Ready master 63m v1.24.0 master-2 Ready master 64m v1.24.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.24.0 master-1 Ready master 73m v1.24.0 master-2 Ready master 74m v1.24.0 worker-0 Ready worker 11m v1.24.0 worker-1 Ready worker 11m v1.24.0",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/installing/installing-on-azure-stack-hub |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/deploying_your_red_hat_build_of_quarkus_applications_to_openshift_container_platform/making-open-source-more-inclusive |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/providing-feedback |
Chapter 16. Configuring the loopback interface by using nmcli | Chapter 16. Configuring the loopback interface by using nmcli By default, NetworkManager does not manage the loopback ( lo ) interface. After creating a connection profile for the lo interface, you can configure this device by using NetworkManager. For example, you can: Assign additional IP addresses to the lo interface Define DNS addresses Change the Maximum Transmission Unit (MTU) size of the lo interface Procedure Create a new connection of type loopback : Configure custom connection settings, for example: To assign an additional IP address to the interface, enter: Note NetworkManager manages the lo interface by always assigning the IP addresses 127.0.0.1 and ::1 that are persistent across reboots. You cannot override 127.0.0.1 and ::1 . However, you can assign additional IP addresses to the interface. To set a custom Maximum Transmission Unit (MTU), enter: To set an IP address to your DNS server, enter: If you set a DNS server in the loopback connection profile, this entry is always available in the /etc/resolv.conf file. The DNS server entry remains independent of whether or not the host roams between different networks. Activate the connection: Verification Display the settings of the lo interface: Verify the DNS address:
"nmcli connection add con-name example-loopback type loopback",
"nmcli connection modify example-loopback +ipv4.addresses 192.0.2.1/24",
"nmcli con mod example-loopback loopback.mtu 16384",
"nmcli connection modify example-loopback ipv4.dns 192.0.2.0",
"nmcli connection up example-loopback",
"ip address show lo 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16384 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 192.0.2.1/24 brd 192.0.2.255 scope global lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever",
"cat /etc/resolv.conf nameserver 192.0.2.0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/proc_configuring-the-loopback-interface-by-using-nmcli_configuring-and-managing-networking |
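The nmcli commands listed above can also be combined into a single run. The following is a minimal sketch, not part of the original procedure, that strings them together; it assumes the example connection name example-loopback and the sample values from this chapter (additional address 192.0.2.1/24, MTU 16384, DNS server 192.0.2.0), which you would replace with values for your environment.

#!/usr/bin/env bash
# Minimal sketch: configure the loopback interface through NetworkManager.
# Assumes the example values from this chapter; adjust them as needed.
set -euo pipefail

CON_NAME=example-loopback

# Create the loopback connection profile only if it does not exist yet.
nmcli connection show "$CON_NAME" >/dev/null 2>&1 || \
    nmcli connection add con-name "$CON_NAME" type loopback

# Assign an additional IP address (127.0.0.1 and ::1 always remain assigned).
nmcli connection modify "$CON_NAME" +ipv4.addresses 192.0.2.1/24

# Set a custom MTU and a DNS server address in the same profile.
nmcli connection modify "$CON_NAME" loopback.mtu 16384 ipv4.dns 192.0.2.0

# Activate the connection and verify the result.
nmcli connection up "$CON_NAME"
ip address show lo
cat /etc/resolv.conf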