Chapter 6. Monitoring disaster recovery health
Chapter 6. Monitoring disaster recovery health 6.1. Enabling disaster recovery dashboard on Hub cluster You can enable the disaster recovery dashboard after installing ODF Multicluster Orchestrator with the console plugin enabled. For Regional-DR, the dashboard makes use of monitoring status cards such as operator health and cluster health to show metrics, alerts and application count. For Metro-DR, you can configure the dashboard to only monitor the ramen setup health and application count. Note The dashboard only shows data for ApplicationSet-based applications, and not for Subscription-based applications. Prerequisites Ensure that you have installed OpenShift Container Platform version 4.13 and have administrator privileges. Ensure that you have installed Red Hat Advanced Cluster Management for Kubernetes 2.8 (RHACM) from Operator Hub. For instructions on how to install, see Installing RHACM . Ensure you have enabled observability on RHACM. See Enabling observability guidelines . Procedure On the Hub cluster, open a terminal window and perform the following steps. Add a label to the openshift-operators namespace. Create the configmap file named observability-metrics-custom-allowlist.yaml . You can use the following YAML to list the disaster recovery metrics on the Hub cluster. For details, see Adding custom metrics . To know more about ramen metrics, see Disaster recovery metrics . In the open-cluster-management-observability namespace, run the following command: After the observability-metrics-custom-allowlist.yaml file is created, RHACM starts collecting the listed OpenShift Data Foundation metrics from all the managed clusters. To exclude a specific managed cluster from collecting the observability data, add the observability: disabled label to that cluster (see the example command after the metrics descriptions below). Create the configmap file named thanos-ruler-custom-rules.yaml and add the name of the custom alert rules to the custom_rules.yaml parameter. You can use the following YAML to create an alert against the ramen metrics on the Hub cluster. For details, see Adding custom metrics . To know more about the alerts, see Disaster Recovery alerts . Run the following command in the open-cluster-management-observability namespace: 6.2. Viewing health status of disaster recovery replication relationships Prerequisites Ensure that you have enabled the disaster recovery dashboard for monitoring. For instructions, see chapter Enabling disaster recovery dashboard on Hub cluster . Procedure On the Hub cluster, ensure that the All Clusters option is selected. Refresh the console to make the DR monitoring dashboard tab accessible. Navigate to Data Services and click Data policies . On the Overview tab, you can view the health status of the operators, clusters and applications. A green tick indicates that the operators are running and available. Click the Disaster recovery tab to view a list of DR policy details and connected applications. 6.3. Disaster recovery metrics These are the ramen metrics that are scraped by Prometheus. ramen_last_sync_timestamp_seconds ramen_policy_schedule_interval_seconds Ramen's last synchronization timestamp in seconds This gives the time, in seconds, of the most recent successful synchronization of all PVCs. 
Metric name ramen_last_sync_timestamp_seconds Metrics type Gauge Labels ObjType : Type of the object, here it is DRPC ObjName : Name of the object, here it is DRPC-Name ObjNamespace : DRPC namespace Policyname : Name of the DRPolicy SchedulingInterval : scheduling interval value from DRPolicy Metric value Set to lastGroupSyncTime from DRPC in seconds. Ramen's policy schedule interval in seconds This gives the scheduling interval in seconds from DRPolicy. Metric name ramen_policy_schedule_interval_seconds Metrics type Gauge Labels Policyname : Name of the DRPolicy Metric value Set to the scheduling interval in seconds, which is taken from DRPolicy. 6.4. Disaster recovery alerts This section provides a list of all supported alerts associated with Red Hat OpenShift Data Foundation 4.13 and above within a disaster recovery environment. Recording rules Record: ramen_sync_duration_seconds Expression Purpose The time interval, in seconds, between the volume replication group's last sync time and the current time. Record: ramen_rpo_difference Expression Purpose The ratio of the actual sync delay taken by the volume replication group to the expected sync delay. Record: count_persistentvolumeclaim_total Expression Purpose Count of all PVCs from the managed cluster. Alerts Alert: VolumeSynchronizationDelay Impact Critical Purpose The actual sync delay taken by the volume replication group is at least three times the expected sync delay. YAML Alert: VolumeSynchronizationDelay Impact Warning Purpose The actual sync delay taken by the volume replication group is more than twice, but less than three times, the expected sync delay. YAML
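As noted in the procedure above, you can exclude a specific managed cluster from observability collection by labeling it. The following command is a minimal sketch of that step rather than text from this chapter; the resource type (managedcluster) reflects a typical RHACM setup and the cluster name is a placeholder:

# Run on the Hub cluster; replace <managed_cluster_name> with the cluster to exclude
oc label managedcluster <managed_cluster_name> observability=disabled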
[ "oc label namespace openshift-operators openshift.io/cluster-monitoring='true'", "kind: ConfigMap apiVersion: v1 metadata: name: observability-metrics-custom-allowlist namespace: open-cluster-management-observability data: metrics_list.yaml: | names: - ramen_last_sync_timestamp_seconds - ramen_policy_schedule_interval_seconds matches: - __name__=\"csv_succeeded\",exported_namespace=\"openshift-dr-system\",name=~\"odr-cluster-operator.*\" - __name__=\"csv_succeeded\",exported_namespace=\"openshift-operators\",name=~\"volsync.*\" recording_rules: - record: count_persistentvolumeclaim_total expr: count(kube_persistentvolumeclaim_info) - record: ramen_sync_duration_seconds expr: sum by (obj_name, obj_namespace, obj_type, job, policyname)(time() - (ramen_last_sync_timestamp_seconds > 0))", "oc apply -n open-cluster-management-observability -f observability-metrics-custom-allowlist.yaml", "kind: ConfigMap apiVersion: v1 metadata: name: thanos-ruler-custom-rules namespace: open-cluster-management-observability data: custom_rules.yaml: | groups: - name: ramen-alerts rules: - record: ramen_rpo_difference expr: ramen_sync_duration_seconds{job=\"ramen-hub-operator-metrics-service\"} / on(policyname, job) group_left() (ramen_policy_schedule_interval_seconds{job=\"ramen-hub-operator-metrics-service\"}) - alert: VolumeSynchronizationDelay expr: ramen_rpo_difference >= 3 for: 5s labels: cluster: \"{{ USDlabels.cluster }}\" severity: critical annotations: description: \"Syncing of volumes (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}) is taking more than thrice the scheduled snapshot interval. This may cause data loss and a backlog of replication requests.\" alert_type: \"DisasterRecovery\" - alert: VolumeSynchronizationDelay expr: ramen_rpo_difference > 2 and ramen_rpo_difference < 3 for: 5s labels: cluster: \"{{ USDlabels.cluster }}\" severity: warning annotations: description: \"Syncing of volumes (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}) is taking more than twice the scheduled snapshot interval. This may cause data loss and impact replication requests.\" alert_type: \"DisasterRecovery\"", "oc apply -n open-cluster-management-observability -f thanos-ruler-custom-rules.yaml", "sum by (obj_name, obj_namespace, obj_type, job, policyname)(time() - (ramen_last_sync_timestamp_seconds > 0))", "ramen_sync_duration_seconds{job=\"ramen-hub-operator-metrics-service\"} / on(policyname, job) group_left() (ramen_policy_schedule_interval_seconds{job=\"ramen-hub-operator-metrics-service\"})", "count(kube_persistentvolumeclaim_info)", "alert: VolumeSynchronizationDela expr: ramen_rpo_difference >= 3 for: 5s labels: cluster: '{{ USDlabels.cluster }}' severity: critical annotations: description: >- Syncing of volumes (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}) is taking more than thrice the scheduled snapshot interval. This may cause data loss and a backlog of replication requests. alert_type: DisasterRecovery", "alert: VolumeSynchronizationDela expr: ramen_rpo_difference > 2 and ramen_rpo_difference < 3 for: 5s labels: cluster: '{{ USDlabels.cluster }}' severity: critical annotations: description: >- Syncing of volumes (DRPC: {{ USDlabels.obj_name }}, Namespace: {{ USDlabels.obj_namespace }}) is taking more than twice the scheduled snapshot interval. This may cause data loss and a backlog of replication requests. alert_type: DisasterRecovery" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/monitoring_disaster_recovery_health
5.74. freetype
5.74. freetype 5.74.1. RHSA-2013:0216 - Important: freetype security update Updated freetype packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. FreeType is a free, high-quality, portable font engine that can open and manage font files. It also loads, hints, and renders individual glyphs efficiently. Security Fix CVE-2012-5669 A flaw was found in the way the FreeType font rendering engine processed certain Glyph Bitmap Distribution Format (BDF) fonts. If a user loaded a specially-crafted font file with an application linked against FreeType, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. Users are advised to upgrade to these updated packages, which contain a backported patch to correct this issue. The X server must be restarted (log out, then log back in) for this update to take effect.
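As a practical illustration of the upgrade advice above, the packages can be applied with the system package manager. This is a generic sketch rather than advisory-specific tooling; the available update channels depend on how the system is registered:

# Apply the updated freetype packages from the attached update channels
yum update freetype

Afterwards, restart the X server (log out, then log back in) as noted above.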
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/freetype
Chapter 20. KIE Server system properties
Chapter 20. KIE Server system properties KIE Server accepts the following system properties (bootstrap switches) to configure the behavior of the server: Table 20.1. System properties for disabling KIE Server extensions Property Values Default Description org.drools.server.ext.disabled true , false false If set to true , disables the Business Rule Management (BRM) support (for example, rules support). org.jbpm.server.ext.disabled true , false false If set to true , disables the Red Hat Process Automation Manager support (for example, processes support). org.jbpm.ui.server.ext.disabled true , false false If set to true , disables the Red Hat Process Automation Manager UI extension. org.jbpm.case.server.ext.disabled true , false false If set to true , disables the Red Hat Process Automation Manager case management extension. org.optaplanner.server.ext.disabled true , false false If set to true , disables the Red Hat build of OptaPlanner support. org.kie.prometheus.server.ext.disabled true , false true If set to true , disables the Prometheus Server extension. org.kie.scenariosimulation.server.ext.disabled true , false true If set to true , disables the Test scenario Server extension. org.kie.dmn.server.ext.disabled true , false false If set to true , disables the KIE Server DMN support. org.kie.swagger.server.ext.disabled true , false false If set to true , disables the KIE Server swagger documentation support Note Some Process Automation Manager controller properties listed in the following table are marked as required. Set these properties when you create or remove KIE Server containers in Business Central. If you use KIE Server separately without any interaction with Business Central, you do not need to set the required properties. Table 20.2. System properties required for Process Automation Manager controller Property Values Default Description org.kie.server.id String N/A An arbitrary ID to be assigned to the server. If a headless Process Automation Manager controller is configured outside of Business Central, this is the ID under which the server connects to the headless Process Automation Manager controller to fetch the KIE container configurations. If not provided, the ID is automatically generated. org.kie.server.user String kieserver The user name used to connect with KIE Server from the Process Automation Manager controller, required when running in managed mode. Set this property in Business Central system properties. Set this property when using a Process Automation Manager controller. org.kie.server.pwd String kieserver1! The password used to connect with KIE Server from the Process Automation Manager controller, required when running in managed mode. Set this property in Business Central system properties. Set this property when using a Process Automation Manager controller. org.kie.server.token String N/A A property that enables you to use token-based authentication between the Process Automation Manager controller and KIE Server instead of the basic user name and password authentication. The Process Automation Manager controller sends the token as a parameter in the request header. The server requires long-lived access tokens because the tokens are not refreshed. org.kie.server.location URL N/A The URL of the KIE Server instance used by the Process Automation Manager controller to call back on this server, for example, http://localhost:8230/kie-server/services/rest/server . Setting this property is required when using a Process Automation Manager controller. 
org.kie.server.controller Comma-separated list N/A A comma-separated list of URLs to the Process Automation Manager controller REST endpoints, for example, http://localhost:8080/business-central/rest/controller . Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller.user String kieserver The user name to connect to the Process Automation Manager controller REST API. Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller.pwd String kieserver1! The password to connect to the Process Automation Manager controller REST API. Setting this property is required when using a Process Automation Manager controller. org.kie.server.controller.token String N/A A property that enables you to use token-based authentication between KIE Server and the Process Automation Manager controller instead of the basic user name and password authentication. The server sends the token as a parameter in the request header. The server requires long-lived access tokens because the tokens are not refreshed. org.kie.server.controller.connect Long 10000 The waiting time in milliseconds between repeated attempts to connect KIE Server to the Process Automation Manager controller when the server starts. Table 20.3. Persistence system properties Property Values Default Description org.kie.server.persistence.ds String N/A A data source JNDI name. Set this property when enabling the BPM support. org.kie.server.persistence.tm String N/A A transaction manager platform for Hibernate properties. Set this property when enabling the BPM support. org.kie.server.persistence.dialect String N/A The Hibernate dialect to be used. Set this property when enabling the BPM support. org.kie.server.persistence.schema String N/A The database schema to be used. Table 20.4. Executor system properties Property Values Default Description org.kie.executor.interval Integer 0 The time between the moment the Red Hat Process Automation Manager executor finishes a job and the moment it starts a new one, in a time unit specified in the org.kie.executor.timeunit property. org.kie.executor.timeunit java.util.concurrent.TimeUnit constant SECONDS The time unit in which the org.kie.executor.interval property is specified. org.kie.executor.pool.size Integer 1 The number of threads used by the Red Hat Process Automation Manager executor. org.kie.executor.retry.count Integer 3 The number of retries the Red Hat Process Automation Manager executor attempts on a failed job. org.kie.executor.jms.queue String queue/KIE.SERVER.EXECUTOR Job executor JMS queue for KIE Server. org.kie.executor.jms.jobHeader true , false false If set to true , the request identifier is included in the JMS header as the jobId property. org.kie.executor.disabled true , false false If set to true , disables the KIE Server executor. Table 20.5. Human task system properties Property Values Default Description org.jbpm.ht.callback mvel ldap db jaas props custom jaas A property that specifies the implementation of user group callback to be used: mvel : Default; mostly used for testing. ldap : LDAP; requires additional configuration in the jbpm.usergroup.callback.properties file. db : Database; requires additional configuration in the jbpm.usergroup.callback.properties file. jaas : JAAS; delegates to the container to fetch information about user data. props : A simple property file; requires additional file that keeps all information (users and groups). 
custom : A custom implementation; specify the fully qualified name of the class in the org.jbpm.ht.custom.callback property. org.jbpm.ht.custom.callback Fully qualified name N/A A custom implementation of the UserGroupCallback interface in case the org.jbpm.ht.callback property is set to custom . org.jbpm.task.cleanup.enabled true , false true Enables task cleanup job listener to remove tasks once the process instance is completed. org.jbpm.task.bam.enabled true , false true Enables task BAM module to store task related information. org.jbpm.ht.admin.user String Administrator User who can access all the tasks from KIE Server. org.jbpm.ht.admin.group String Administrators The group that users must belong to in order to view all the tasks from KIE Server. Table 20.6. System properties for loading keystore Property Values Default Description kie.keystore.keyStoreURL URL N/A The URL is used to load a Java Cryptography Extension KeyStore (JCEKS). For example, file:///home/kie/keystores/keystore.jceks . kie.keystore.keyStorePwd String N/A The password is used for the JCEKS. kie.keystore.key.server.alias String N/A The alias name of the key for REST services where the password is stored. kie.keystore.key.server.pwd String N/A The password of an alias for REST services. kie.keystore.key.ctrl.alias String N/A The alias of the key for default REST Process Automation Manager controller. kie.keystore.key.ctrl.pwd String N/A The password of an alias for default REST Process Automation Manager controller. Table 20.7. System properties for retrying committing transactions Property Values Default Description org.kie.optlock.retries Integer 5 This property describes how many times the process engine retries a transaction before failing permanently. org.kie.optlock.delay Integer 50 The delay time before the first retry, in milliseconds. org.kie.optlock.delayFactor Integer 4 The multiplier for increasing the delay time for each subsequent retry. With the default values, the process engine waits 50 milliseconds before the first retry, 200 milliseconds before the second retry, 800 milliseconds before the third retry, and so on. Table 20.8. Other system properties Property Values Default Description kie.maven.settings.custom Path N/A The location of a custom settings.xml file for Maven configuration. kie.server.jms.queues.response String queue/KIE.SERVER.RESPONSE The response queue JNDI name for JMS. org.drools.server.filter.classes true , false false When set to true , the Drools KIE Server extension accepts custom classes annotated by the XmlRootElement or Remotable annotations only. org.kie.server.bypass.auth.user true , false false A property that enables you to bypass the authenticated user for task-related operations, for example queries. org.jbpm.rule.task.firelimit Integer 10000 This property specifies the maximum number of executed rules to avoid situations where rules run into an infinite loop and make the server completely unresponsive. org.jbpm.ejb.timer.local.cache true , false true This property turns off the EJB Timers local cache. org.kie.server.domain String N/A The JAAS LoginContext domain used to authenticate users when using JMS. org.kie.server.repo Path . The location where KIE Server state files are stored. org.kie.server.sync.deploy true , false false A property that instructs KIE Server to hold the deployment until the Process Automation Manager controller provides the container deployment configuration. This property only affects servers running in managed mode. 
The following options are available: * false : The connection to the Process Automation Manager controller is asynchronous. The application starts, connects to the Process Automation Manager controller, and once successful, deploys the containers. The application accepts requests even before the containers are available. * true : The deployment of the server application joins the Process Automation Manager controller connection thread with the main deployment and awaits its completion. This option can lead to a potential deadlock in case more applications are on the same server. Use only one application on one server instance. org.kie.server.startup.strategy ControllerBasedStartupStrategy , LocalContainersStartupStrategy ControllerBasedStartupStrategy The Startup strategy of KIE Server used to control the KIE containers that are deployed and the order in which they are deployed. org.kie.server.mgmt.api.disabled true , false false When set to true , disables KIE Server management API. org.kie.server.xstream.enabled.packages Java packages like org.kie.example . You can also specify wildcard expressions like org.kie.example.* . N/A A property that specifies additional packages to allowlist for marshalling using XStream. org.kie.store.services.class String org.drools.persistence.jpa.KnowledgeStoreServiceImpl Fully qualified name of the class that implements KieStoreServices that are responsible for bootstrapping KieSession instances. org.kie.server.strict.id.format true , false false While using JSON marshalling, if the property is set to true , it will always return a response in the proper JSON format. For example, if the original response contains only a single number, then the response is wrapped in a JSON format. For example, {"value" : 1} . org.kie.server.json.customObjectDeserializerCNFEBehavior IGNORE , WARN , EXCEPTION IGNORE While using JSON unmarshalling, when a class in a payload is not found, the behavior can be changed using this property as follows: If the property is set to IGNORE , the payload is converted to a HashMap If the property is set to WARN , the payload is converted to a HashMap and a warning is logged If the property is set to EXCEPTION , KIE Server throws an exception org.kie.server.strict.jaxb.format true , false false When the value of this property is set to true , KIE Server validates the data type of the data in the REST API payload. For example, if a data field has the number data type and contains something other than a number, you will receive an error.
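The bootstrap switches listed in the tables above are ordinary JVM system properties, so on a typical Red Hat JBoss EAP installation they can be passed on the command line when KIE Server starts, or defined in the <system-properties> section of standalone.xml. The following is a minimal sketch assuming an EAP-based installation started with standalone.sh; the server ID and URLs are placeholders, and the credentials shown are the documented defaults:

# Start KIE Server in managed mode, pointing it at a Business Central controller
./standalone.sh \
  -Dorg.kie.server.id=my-kie-server \
  -Dorg.kie.server.location=http://localhost:8230/kie-server/services/rest/server \
  -Dorg.kie.server.controller=http://localhost:8080/business-central/rest/controller \
  -Dorg.kie.server.controller.user=kieserver \
  -Dorg.kie.server.controller.pwd='kieserver1!' \
  -Dorg.kie.prometheus.server.ext.disabled=false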
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/kie-server-system-properties-ref_execution-server
3.5. Resource Groups
3.5. Resource Groups One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of groups . You create a resource group with the pcs resource command, specifying the resources to include in the group. If the group does not exist, this command creates the group. If the group exists, this command adds additional resources to the group. The resources will start in the order you specify them with this command, and will stop in the reverse order of their starting order.
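A minimal sketch of the command described above; the group and resource names are placeholders:

# Creates the group mygroup (if it does not exist) and adds the resources in start order
pcs resource group add mygroup my_lvm my_fs my_vip

In this example my_lvm starts first and my_vip starts last; the resources stop in the reverse order.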
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/s1-rules-haao
33.4. Adding a JetDirect Printer
33.4. Adding a JetDirect Printer To add a JetDirect or AppSocket connected printer share, click the New Printer button in the main Printer Configuration Tool window to display the window in Figure 33.2, "Adding a Printer". Enter a unique name for the printer in the Printer Name field. The printer name can contain letters, numbers, dashes (-), and underscores (_); it must not contain any spaces. You can also use the Description and Location fields to further distinguish this printer from others that may be configured on your system. Both of these fields are optional, and may contain spaces. Figure 33.6. Adding a JetDirect Printer Click Forward to continue. Text fields for the following options appear: Hostname - The hostname or IP address of the JetDirect printer. Port Number - The port on the JetDirect printer that is listening for print jobs. The default port is 9100. Next, select the printer type. Refer to Section 33.5, "Selecting the Printer Model and Finishing" for details.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-printing-jetdirect-printer
Chapter 4. Configuring the instance environment
Chapter 4. Configuring the instance environment You can configure the following in Administration view: General Credentials Repositories Proxy (HTTP and HTTPS proxy settings) Custom migration targets Issue management Assessment questionnaires 4.1. General You can enable or disable the following option: Allow reports to be downloaded after running an analysis 4.2. Configuring credentials You can configure the following types of credentials in Administration view: Source control Maven settings file Proxy Basic auth (Jira) Bearer token (Jira) 4.2.1. Configuring source control credentials You can configure source control credentials in the Credentials view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Credentials . Click Create new . Enter the following information: Name Description (Optional) In the Type list, select Source Control . In the User credentials list, select Credential Type and enter the requested information: Username/Password Username Password (hidden) SCM Private Key/Passphrase SCM Private Key Private Key Passphrase (hidden) Note Type-specific credential information such as keys and passphrases is either hidden or shown as [Encrypted]. Click Create . MTA validates the input and creates a new credential. SCM keys must be parsed and checked for validity. If the validation fails, the following error message is displayed: "not a valid key/XML file" . 4.2.2. Configuring Maven credentials You can configure new Maven credentials in the Credentials view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Credentials . Click Create new . Enter the following information: Name Description (Optional) In the Type list, select Maven Settings File. Upload the settings file or paste its contents. Click Create . MTA validates the input and creates a new credential. The Maven settings.xml file must be parsed and checked for validity. If the validation fails, the following error message is displayed: "not a valid key/XML file" . 4.2.3. Configuring proxy credentials You can configure proxy credentials in the Credentials view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Credentials . Click Create new . Enter the following information: Name Description (Optional) In the Type list, select Proxy . Enter the following information. Username Password Note Type-specific credential information such as keys and passphrases is either hidden or shown as [Encrypted]. Click Create . MTA validates the input and creates a new credential. 4.3. Configuring repositories You can configure the following types of repositories in Administration view: Git Subversion Maven 4.3.1. Configuring Git repositories You can configure Git repositories in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Repositories and then click Git . Toggle the Consume insecure Git repositories switch to the right. 4.3.2. Configuring subversion repositories You can configure subversion repositories in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Repositories and then click Subversion . Toggle the Consume insecure Subversion repositories switch to the right. 4.3.3. Configuring a Maven repository and reducing its size You can use the MTA user interface to both configure a Maven repository and to reduce its size. 4.3.3.1. 
Configuring a Maven repository You can configure a Maven repository in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Procedure In Administration view, click Repositories and then click Maven . Toggle the Consume insecure artifact repositories switch to the right. 4.3.3.2. Reducing the size of a Maven repository You can reduce the size of a Maven repository in the Repositories view of the Migration Toolkit for Applications (MTA) user interface. Note If the rwx_supported configuration option of the Tackle CR is set to false , both the Local artifact repository field and the Clear repository button are disabled and this procedure is not possible. Procedure In Administration view, click Repositories and then click Maven . Click the Clear repository link. Note Depending on the size of the repository, the size change may not be evident despite the function working properly. 4.4. Configuring HTTP and HTTPS proxy settings You can configure HTTP and HTTPS proxy settings with this management module. Procedure In the Administration view, click Proxy . Toggle HTTP proxy or HTTPS proxy to enable the proxy connection. Enter the following information: Proxy host Proxy port Optional: Toggle HTTP proxy credentials or HTTPS proxy credentials to enable authentication. Click Insert . 4.5. Creating custom migration targets Architects or users with admin permissions can create and maintain custom rulesets associated with custom migration targets. Architects can upload custom rule files and assign them to various custom migration targets. The custom migration targets can then be selected in the analysis configuration wizard. By using ready-made custom migration targets, you can avoid configuring custom rules for each analysis run. This simplifies analysis configuration and execution for non-admin users or third-party developers. Prerequisites You are logged in as a user with admin permissions. Procedure In the Administration view, click Custom migration targets . Click Create new . Enter the name and description of the target. In the Image section, upload an image file for the target's icon. The file can be in either the PNG or JPEG format, up to 1 MB. If you do not upload any file, a default icon is used. In the Custom rules section, select either Upload manually or Retrieve from a repository : If you selected Upload manually , upload or drag and drop the required rule files from your local drive. If you selected Retrieve from a repository , complete the following steps: Choose Git or Subversion . Enter the Source repository , Branch , and Root path fields. If the repository requires credentials, enter these credentials in the Associated credentials field. Click Create . The new migration target appears on the Custom migration targets page. It can now be used by non-admin users in the Migration view. 4.6. Seeding an instance If you are a project architect, you can configure the instance's key parameters in the Controls window, before migration. The parameters can be added and edited as needed. The following parameters define applications, individuals, teams, verticals or areas within an organization affected or participating in the migration: Stakeholders Stakeholder groups Job functions Business services Tag categories Tags You can create and configure an instance in any order. However, the suggested order below is the most efficient for creating stakeholders and tags. 
Stakeholders: Create Stakeholder groups Create Job functions Create Stakeholders Tags: Create Tag categories Create Tags Stakeholders are defined by: Email Name Job function Stakeholder groups 4.6.1. Creating a new stakeholder group There are no default stakeholder groups defined. You can create a new stakeholder group by following the procedure below. Procedure In Migration view, click Controls . Click Stakeholder groups . Click Create new . Enter the following information: Name Description Member(s) Click Create . 4.6.2. Creating a new job function Migration Toolkit for Applications (MTA) uses the job function attribute to classify stakeholders and provides a list of default values that can be expanded. You can create a new job function, which is not in the default list, by following the procedure below. Procedure In Migration view, click Controls . Click Job functions . Click Create new . Enter a job function title in the Name text box. Click Create . 4.6.3. Creating a new stakeholder You can create a new migration project stakeholder by following the procedure below. Procedure In Migration view, click Controls . Click Stakeholders . Click Create new . Enter the following information: Email Name Job function - custom functions can be created Stakeholder group Click Create . 4.6.4. Creating a new business service Migration Toolkit for Applications (MTA) uses the business service attribute to specify the departments within the organization that use the application and that are affected by the migration. You can create a new business service by following the procedure below. Procedure In Migration view, click Controls . Click Business services . Click Create new . Enter the following information: Name Description Owner Click Create . 4.6.5. Creating new tag categories Migration Toolkit for Applications (MTA) uses tags in multiple categories and provides a list of default values. You can create a new tag category by following the procedure below. Procedure In Migration view, click Controls . Click Tags . Click Create tag category . Enter the following information: Name Rank - the order in which the tags appear on the applications Color Click Create . 4.6.5.1. Creating new tags You can create a new tag, which is not in the default list, by following the procedure below. Procedure In Migration view, click Controls . Click Tags . Click Create tag . Enter the following information: Name Tag category Click Create .
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/user_interface_guide/configuring-the-instance-environment
Chapter 12. Diverting messages and splitting message flows
Chapter 12. Diverting messages and splitting message flows In AMQ Broker, you can configure objects called diverts that enable you to transparently divert messages from one address to another address, without changing any client application logic. You can also configure a divert to forward a copy of a message to a specified forwarding address, effectively splitting the message flow. 12.1. How message diverts work Diverts enable you to transparently divert messages routed to one address to some other address, without changing any client application logic. Think of the set of diverts on a broker server as a type of routing table for messages. A divert can be exclusive , meaning that a message is diverted to a specified forwarding address without going to its original address. A divert can also be non-exclusive , meaning that a message continues to go to its original address, while the broker sends a copy of the message to a specified forwarding address. Therefore, you can use non-exclusive diverts for splitting message flows. For example, you might split a message flow if you want to separately monitor every order sent to an order queue. When an address has both exclusive and non-exclusive diverts configured, the broker processes the exclusive diverts first. If a particular message has already been diverted by an exclusive divert, the broker does not process any non-exclusive diverts for that message. In this case, the message never goes to the original address. When a broker diverts a message, the broker assigns a new message ID and sets the message address to the new forwarding address. You can retrieve the original message ID and address values via the _AMQ_ORIG_ADDRESS (string type) and _AMQ_ORIG_MESSAGE_ID (long type) message properties. If you are using the Core API, use the Message.HDR_ORIGINAL_ADDRESS and Message.HDR_ORIG_MESSAGE_ID properties. Note You can divert a message only to an address on the same broker server. If you want to divert to an address on a different server, a common solution is to first divert the message to a local store-and-forward queue. Then, set up a bridge that consumes from that queue and forwards messages to an address on a different broker. Combining diverts with bridges enables you to create a distributed network of routing connections between geographically distributed broker servers. In this way, you can create a global messaging mesh. 12.2. Configuring message diverts To configure a divert in your broker instance, add a divert element within the core element of your broker.xml configuration file. <core> ... <divert name= > <address> </address> <forwarding-address> </forwarding-address> <filter string= > <routing-type> </routing-type> <exclusive> </exclusive> </divert> ... </core> divert Named instance of a divert. You can add multiple divert elements to your broker.xml configuration file, as long as each divert has a unique name. address Address from which to divert messages forwarding-address Address to which to forward messages filter Optional message filter. If you configure a filter, only messages that match the filter string are diverted. If you do not specify a filter, all messages are considered a match by the divert. routing-type Routing type of the diverted message. 
You can configure the divert to: Apply the anycast or multicast routing type to a message Strip (that is, remove) the existing routing type Pass through (that is, preserve) the existing routing type Control of the routing type is useful in situations where the message has its routing type already set, but you want to divert the message to an address that uses a different routing type. For example, the broker cannot route a message with the anycast routing type to a queue that uses multicast unless you set the routing-type parameter of the divert to MULTICAST . Valid values for the routing-type parameter of a divert are ANYCAST , MULTICAST , PASS , and STRIP . The default value is STRIP . exclusive Specify whether the divert is exclusive (set the property to true ) or non- exclusive (set the property to false ). The following subsections show configuration examples for exclusive and non-exclusive diverts. 12.2.1. Exclusive divert example Shown below is an example configuration for an exclusive divert. An exclusive divert diverts all matching messages from the originally-configured address to a new address. Matching messages do not get routed to the original address. <divert name="prices-divert"> <address>priceUpdates</address> <forwarding-address>priceForwarding</forwarding-address> <filter string="office='New York'"/> <exclusive>true</exclusive> </divert> In the preceding example, you define a divert called prices-divert that diverts any messages sent to the address priceUpdates to another local address, priceForwarding . You also specify a message filter string. Only messages with the message property office and the value New York are diverted. All other messages are routed to their original address. Finally, you specify that the divert is exclusive. 12.2.2. Non-exclusive divert example Shown below is an example configuration for a non-exclusive divert. In a non-exclusive divert, a message continues to go to its original address, while the broker also sends a copy of the message to a specified forwarding address. Therefore, a non-exclusive divert is a way to split a message flow. <divert name="order-divert"> <address>orders</address> <forwarding-address>spyTopic</forwarding-address> <exclusive>false</exclusive> </divert> In the preceding example, you define a divert called order-divert that takes a copy of every message sent to the address orders and sends it to a local address called spyTopic . You also specify that the divert is non-exclusive. Additional resources For a detailed example that uses both exclusive and non-exclusive diverts, and a bridge to forward messages to another broker, see Divert Example (external).
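Because the broker assigns a new message ID and address when it diverts a message, a consumer that needs the original values can read them back from the message properties named in Section 12.1. The following is a brief sketch using the standard JMS API; the class and method around the calls are illustrative, and only the property names come from this chapter:

import javax.jms.JMSException;
import javax.jms.Message;

public class DivertedMessageInfo {
    // Reads the original routing information that the broker stored before diverting the message
    static void printOriginalInfo(Message message) throws JMSException {
        String originalAddress = message.getStringProperty("_AMQ_ORIG_ADDRESS");  // string property
        long originalMessageId = message.getLongProperty("_AMQ_ORIG_MESSAGE_ID"); // long property
        System.out.println("Diverted from " + originalAddress + " (original ID " + originalMessageId + ")");
    }
}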
[ "<core> <divert name= > <address> </address> <forwarding-address> </forwarding-address> <filter string= > <routing-type> </routing-type> <exclusive> </exclusive> </divert> </core>", "<divert name=\"prices-divert\"> <address>priceUpdates</address> <forwarding-address>priceForwarding</forwarding-address> <filter string=\"office='New York'\"/> <exclusive>true</exclusive> </divert>", "<divert name=\"order-divert\"> <address>orders</address> <forwarding-address>spyTopic</forwarding-address> <exclusive>false</exclusive> </divert>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/diverting-messages-configuring
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.5/making-open-source-more-inclusive
Chapter 6. Managing Apicurio Registry content using a Java client
Chapter 6. Managing Apicurio Registry content using a Java client You can write an Apicurio Registry Java client application and use it to manage artifacts stored in Apicurio Registry: Section 6.1, "Apicurio Registry Java client" Section 6.2, "Writing Apicurio Registry Java client applications" Section 6.3, "Apicurio Registry Java client configuration" 6.1. Apicurio Registry Java client You can manage artifacts stored in Apicurio Registry by using a Java client application. You can create, read, update, or delete artifacts by using the Apicurio Registry Java client classes. You can also use the Apicurio Registry Java client to perform administrator functions, such as managing global rules or importing and exporting Apicurio Registry data. You can access the Apicurio Registry Java client by adding the correct dependency to your Apache Maven project. For more details, see Section 6.2, "Writing Apicurio Registry Java client applications" . The Apicurio Registry client is implemented by using the HTTP client provided by the JDK, which you can customize as needed. For example, you can add custom headers or enable configuration options for Transport Layer Security (TLS) authentication. For more details, see Section 6.3, "Apicurio Registry Java client configuration" . 6.2. Writing Apicurio Registry Java client applications You can write a Java client application to manage artifacts stored in Apicurio Registry by using the Apicurio Registry Java client classes. Prerequisites Apicurio Registry is installed and running in your environment. You have created a Maven project for your Java client application. For more details, see Apache Maven . Procedure Add the following dependency to your Maven project: <dependency> <groupId>io.apicurio</groupId> <artifactId>apicurio-registry-client</artifactId> <version>${apicurio-registry.version}</version> </dependency> Create the Apicurio Registry client as follows: public class ClientExample { public static void main(String[] args) throws Exception { // Create a registry client String registryUrl = "https://my-registry.my-domain.com/apis/registry/v2"; 1 RegistryClient client = RegistryClientFactory.create(registryUrl); 2 } } 1 If you specify an example Apicurio Registry URL of https://my-registry.my-domain.com , the client will automatically append /apis/registry/v2 . 2 For more options when creating an Apicurio Registry client, see the Java client configuration in the following section. When the client is created, you can use all of the operations available in the Apicurio Registry REST API in the client. For more details, see the Apicurio Registry REST API documentation . Additional resources For an open source example of how to use and customize the Apicurio Registry client, see the Apicurio Registry REST client demonstration . For details on how to use the Apicurio Registry Kafka client serializers/deserializers (SerDes) in producer and consumer applications, see Chapter 7, Validating Kafka messages using serializers/deserializers in Java clients . 6.3. Apicurio Registry Java client configuration The Apicurio Registry Java client includes the following configuration options, based on the client factory: Table 6.1. Apicurio Registry Java client configuration options Option Description Arguments Plain client Basic REST client used to interact with a running Apicurio Registry instance. baseUrl Client with custom configuration Apicurio Registry client using the configuration provided by the user. 
baseUrl, Map<String, Object> configs Client with custom configuration and authentication Apicurio Registry client that accepts a map containing custom configuration. For example, this is useful to add custom headers to the calls. You must also provide an authentication server to authenticate the requests. baseUrl, Map<String, Object> configs, Auth auth Custom header configuration To configure custom headers, you must add the apicurio.registry.request.headers prefix to the configs map key. For example, a configs map key of apicurio.registry.request.headers.Authorization with a value of Basic: YWxhZGRpbjpvcGVuc2VzYW1 sets the Authorization header with the same value. TLS configuration options You can configure Transport Layer Security (TLS) authentication for the Apicurio Registry Java client using the following properties: apicurio.registry.request.ssl.truststore.location apicurio.registry.request.ssl.truststore.password apicurio.registry.request.ssl.truststore.type apicurio.registry.request.ssl.keystore.location apicurio.registry.request.ssl.keystore.password apicurio.registry.request.ssl.keystore.type apicurio.registry.request.ssl.key.password Additional resources For details on how to configure authentication for Apicurio Registry Kafka client serializers/deserializers (SerDes), see Chapter 7, Validating Kafka messages using serializers/deserializers in Java clients .
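To make the options in Table 6.1 concrete, the following is a minimal sketch of creating a client with a custom configuration map. It assumes the 2.x client package layout (io.apicurio.registry.rest.client) and the create(baseUrl, configs) factory overload listed in the table; the header name, file paths, and password are placeholders:

import java.util.HashMap;
import java.util.Map;
import io.apicurio.registry.rest.client.RegistryClient;
import io.apicurio.registry.rest.client.RegistryClientFactory;

public class CustomConfigClientExample {
    public static void main(String[] args) {
        String registryUrl = "https://my-registry.my-domain.com/apis/registry/v2";

        Map<String, Object> configs = new HashMap<>();
        // Custom request header: the key prefix comes from Table 6.1, the header itself is a placeholder
        configs.put("apicurio.registry.request.headers.X-Custom-Header", "example-value");
        // TLS truststore settings from Table 6.1; location and password are placeholders
        configs.put("apicurio.registry.request.ssl.truststore.location", "/path/to/truststore.jks");
        configs.put("apicurio.registry.request.ssl.truststore.password", "changeit");

        // Factory overload taking (baseUrl, configs), as listed in the configuration table
        RegistryClient client = RegistryClientFactory.create(registryUrl, configs);
        // The client is now ready for the REST API operations described in Section 6.2
    }
}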
[ "<dependency> <groupId>io.apicurio</groupId> <artifactId>apicurio-registry-client</artifactId> <version>USD{apicurio-registry.version}</version> </dependency>", "public class ClientExample { public static void main(String[] args) throws Exception { // Create a registry client String registryUrl = \"https://my-registry.my-domain.com/apis/registry/v2\"; 1 RegistryClient client = RegistryClientFactory.create(registryUrl); 2 } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apicurio_registry/2.6/html/apicurio_registry_user_guide/using-the-registry-client_registry
5.4.16.6. Splitting off a RAID Image as a Separate Logical Volume
5.4.16.6. Splitting off a RAID Image as a Separate Logical Volume You can split off an image of a RAID logical volume to form a new logical volume. The procedure for splitting off a RAID image is the same as the procedure for splitting off a redundant image of a mirrored logical volume, as described in Section 5.4.3.2, "Splitting Off a Redundant Image of a Mirrored Logical Volume" . The format of the command to split off a RAID image is as follows: Just as when you are removing a RAID image from an existing RAID1 logical volume (as described in Section 5.4.16.5, "Changing the Number of Images in an Existing RAID1 Device" ), when you remove a RAID data subvolume (and its associated metadata subvolume) from the middle of the device, any higher numbered images will be shifted down to fill the slot. The index numbers on the logical volumes that make up a RAID array will thus be an unbroken sequence of integers. Note You cannot split off a RAID image if the RAID1 array is not yet in sync. The following example splits a 2-way RAID1 logical volume, my_lv , into two linear logical volumes, my_lv and new . The following example splits a 3-way RAID1 logical volume, my_lv , into a 2-way RAID1 logical volume, my_lv , and a linear logical volume, new .
[ "lvconvert --splitmirrors count -n splitname vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 12.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1) new /dev/sdf1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) new /dev/sdg1(1)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid-imagesplit
Config APIs
Config APIs OpenShift Container Platform 4.16 Reference guide for config APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/config_apis/index
Chapter 2. Running Red Hat Quay in debug mode
Chapter 2. Running Red Hat Quay in debug mode Red Hat recommends gathering your debugging information when opening a support case. Running Red Hat Quay in debug mode provides verbose logging to help administrators find more information about various issues. Enabling debug mode can speed up the process to reproduce errors and validate a solution for things like geo-replication deployments, Operator deployments, standalone Red Hat Quay deployments, object storage issues, and so on. Additionally, it helps Red Hat Support perform a root cause analysis. 2.1. Red Hat Quay debug variables Red Hat Quay offers two configuration fields that can be added to your config.yaml file to help diagnose issues or help obtain log information. Table 2.1. Debug configuration variables Variable Type Description DEBUGLOG Boolean Whether to enable or disable debug logs. Must be true or false . USERS_DEBUG Integer. Either 0 or 1 . Used to debug LDAP operations in clear text, including passwords. Must be used with DEBUGLOG=TRUE . Important Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution. 2.2. Running a standalone Red Hat Quay deployment in debug mode Running Red Hat Quay in debug mode provides verbose logging to help administrators find more information about various issues. Enabling debug mode can speed up the process to reproduce errors and validate a solution. Use the following procedure to run a standalone deployment of Red Hat Quay in debug mode. Procedure Enter the following command to run your standalone Red Hat Quay deployment in debug mode: $ podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv} To view the debug logs, enter the following command: $ podman logs <quay_container_name> 2.3. Running an LDAP Red Hat Quay deployment in debug mode Use the following procedure to run an LDAP deployment of Red Hat Quay in debug mode. Procedure Enter the following command to run your LDAP Red Hat Quay deployment in debug mode: $ podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -e USERS_DEBUG=1 -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv} To view the debug logs, enter the following command: $ podman logs <quay_container_name> Important Setting USERS_DEBUG=1 exposes credentials in clear text. This variable should be removed from the Red Hat Quay deployment after debugging. The log file that is generated with this environment variable should be scrutinized, and passwords should be removed before sending to other users. Use with caution. 2.4. Running the Red Hat Quay Operator in debug mode Use the following procedure to run the Red Hat Quay Operator in debug mode. Procedure Enter the following command to edit the QuayRegistry custom resource definition: $ oc edit quayregistry <quay_registry_name> -n <quay_namespace> Update the QuayRegistry to add the following parameters: spec: - kind: quay managed: true overrides: env: - name: DEBUGLOG value: "true" After the Red Hat Quay Operator has restarted with debugging enabled, try pulling an image from the registry. If it is still slow, dump all logs from all Quay pods to a file, and check the files for more information.
[ "podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv}", "podman logs <quay_container_name>", "podman run -p 443:8443 -p 80:8080 -e DEBUGLOG=true -e USERS_DEBUG=1 -v /config:/conf/stack -v /storage:/datastorage -d {productrepo}/{quayimage}:{productminv}", "podman logs <quay_container_name>", "oc edit quayregistry <quay_registry_name> -n <quay_namespace>", "spec: - kind: quay managed: true overrides: env: - name: DEBUGLOG value: \"true\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/troubleshooting_red_hat_quay/running-quay-debug-mode-intro
Appendix B. GFS2 Tracepoints and the debugfs glocks File
Appendix B. GFS2 Tracepoints and the debugfs glocks File This appendix describes both the glock debugfs interface and the GFS2 tracepoints. It is intended for advanced users who are familiar with file system internals who would like to learn more about the design of GFS2 and how to debug GFS2-specific issues. B.1. GFS2 Tracepoint Types There are currently three types of GFS2 tracepoints: glock (pronounced "gee-lock") tracepoints, bmap tracepoints and log tracepoints. These can be used to monitor a running GFS2 file system and give additional information to that which can be obtained with the debugging options supported in releases of Red Hat Enterprise Linux. Tracepoints are particularly useful when a problem, such as a hang or performance issue, is reproducible and thus the tracepoint output can be obtained during the problematic operation. In GFS2, glocks are the primary cache control mechanism and they are the key to understanding the performance of the core of GFS2. The bmap (block map) tracepoints can be used to monitor block allocations and block mapping (lookup of already allocated blocks in the on-disk metadata tree) as they happen and check for any issues relating to locality of access. The log tracepoints keep track of the data being written to and released from the journal and can provide useful information on that part of GFS2. The tracepoints are designed to be as generic as possible. This should mean that it will not be necessary to change the API during the course of Red Hat Enterprise Linux 7. On the other hand, users of this interface should be aware that this is a debugging interface and not part of the normal Red Hat Enterprise Linux 7 API set, and as such Red Hat makes no guarantees that changes in the GFS2 tracepoints interface will not occur. Tracepoints are a generic feature of Red Hat Enterprise Linux 7 and their scope goes well beyond GFS2. In particular they are used to implement the blktrace infrastructure and the blktrace tracepoints can be used in combination with those of GFS2 to gain a fuller picture of the system performance. Due to the level at which the tracepoints operate, they can produce large volumes of data in a very short period of time. They are designed to put a minimum load on the system when they are enabled, but it is inevitable that they will have some effect. Filtering events by a variety of means can help reduce the volume of data and help focus on obtaining just the information which is useful for understanding any particular situation.
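As a brief illustration of how these tracepoints are typically consumed, the standard kernel tracing interface under debugfs can be used to enable the GFS2 events and read the trace buffer. This is a generic sketch of that interface rather than text from this appendix, and it assumes debugfs is mounted at /sys/kernel/debug:

# Enable all GFS2 tracepoints (glock, bmap and log events)
echo 1 > /sys/kernel/debug/tracing/events/gfs2/enable
# ... reproduce the workload being investigated ...
# Read the collected events, then disable the tracepoints again
cat /sys/kernel/debug/tracing/trace
echo 0 > /sys/kernel/debug/tracing/events/gfs2/enable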
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/gfs2_tracepoints
21.10. Monitoring Directory Server Using SNMP
21.10. Monitoring Directory Server Using SNMP The server and database activity monitoring log setup described in Chapter 21, Monitoring Server and Database Activity is specific to Directory Server. You can also monitor your Directory Server using Simple Network Management Protocol (SNMP), which is a management protocol used for monitoring network activity which can be used to monitor a wide range of devices in real time. Directory Server can be monitored with SNMP through an AgentX subagent. SNMP monitoring collects useful information about the Directory Server, such as bind information, operations performed on the server, and cache information. The Directory Server SNMP subagent supports SNMP traps to send notifications about changes in the running state of your server instances. 21.10.1. About SNMP SNMP has become interoperable on account of its widespread popularity. It is this interoperability, combined with the fact that SNMP can take on numerous jobs specific to a whole range of different device classes, that make SNMP the ideal standard mechanism for global network control and monitoring. SNMP allows network administrators to unify all network monitoring activities, with Directory Server monitoring part of the broader picture. SNMP is used to exchange data about network activity. With SNMP, data travels between a managed device and a network management application (NMS) where users remotely manage the network. A managed device is anything that runs SNMP, such as hosts, routers, and your Directory Server. An NMS is usually a powerful workstation with one or more network management applications installed. A network management application graphically shows information about managed devices, which device is up or down, which and how many error messages were received, and so on. Information is transferred between the NMS and the managed device through the use of two types of agents: the subagent and the master agent . The subagent gathers information about the managed device and passes the information to the master agent. Directory Server has a subagent. The master agent exchanges information between the various subagents and the NMS. The master agent usually runs on the same host machine as the subagents it talks to, although it can run on a remote machine. Values for SNMP attributes, otherwise known as variables, that can be queried are kept on the managed device and reported to the NMS as necessary. Each variable is known as a managed object , which is anything the agent can access and send to the NMS. All managed objects are defined in a management information base (MIB), which is a database with a tree-like hierarchy. The top level of the hierarchy contains the most general information about the network. Each branch underneath is more specific and deals with separate network areas. SNMP exchanges network information in the form of protocol data units (PDUs). PDUs contain information about variables stored on the managed device. These variables, also known as managed objects, have values and titles that are reported to the NMS as necessary. Communication between an NMS and a managed device takes place either by the NMS sending updates or requesting information or by the managed object sending a notice or warning, called a trap , when a server shuts down or starts up. 21.10.2. Enabling and Disabling SNMP Support By default, the SNMP protocol is enabled in Directory Server and, after configuring the subagent, you can use it. 
To enable or disable SNMP in an instance, set the nsSNMPEnabled parameter to on or off . For example, to disable SNMP in a Directory Server instance: 21.10.3. Setting Parameters to Identify an Instance Using SNMP Directory Server provides the following attributes which help identify instances using SNMP: nsSNMPOrganization nsSNMPLocation nsSNMPContact nsSNMPDescription For details about the parameters, see their descriptions in the cn=SNMP section in the Red Hat Directory Server Configuration, Command, and File Reference . For example, to set the nsSNMPLocation parameter to Munich, Germany : 21.10.4. Setting up an SNMP Agent for Directory Server To query information from Directory Server using the SNMP protocol, set up an SNMP agent: Install the 389-ds-base-snmp and net-snmp packages: To configure the SNMP master agent, edit the /etc/snmp/snmpd.conf file, adding the following entry to enable the agent extensibility (AgentX) protocol: For further details about the AgentX protocol, see RFC 2741 . To configure the SNMP subagent, edit the /etc/dirsrv/config/ldap-agent.conf file, adding a server parameter for each Directory Server instance you want to monitor. For example: Optionally, create an SNMP user account: Stop the snmpd service: Create the SNMP user account. For example: For details about the parameters used in the command, see the net-snmp-create-v3-user (1) man page. Start the snmpd service: Optionally, set the Directory Server descriptive properties. For details, see Section 21.10.3, "Setting Parameters to Identify an Instance Using SNMP" . Start the dirsrv-snmp service: Optionally, to verify the configuration: Install the net-snmp-utils package: Query the Directory Server Object Identifiers (OID). For example: 21.10.5. Configuring SNMP Traps An SNMP trap is essentially a threshold which triggers a notification if it is encountered by the monitored server. To use traps, the master agent must be configured to accept traps and do something with them. For example, a trap can trigger an email notification for an administrator if the Directory Server instance stops. The subagent is only responsible for sending the traps to the master agent. The master agent and a trap handler must be configured according to the documentation for the SNMP master agent you are using. Traps are accompanied by information from the Entity Table , which contains information specific to the Directory Server instance, such as its name and version number. The Entity Table is described in Section 21.10.6.3, "Entity Table" . This means that the action the master agent takes when it receives a trap is flexible, such as sending an email to an email address defined in the dsEntityContact variable for one instance while sending a notification to a pager number in the dsEntityContact variable for another instance. There are two traps supported by the subagent: DirectoryServerDown. This trap is generated whenever the subagent detects the Directory Server is potentially not running. This trap will be sent with the Directory Server instance description, version, physical location, and contact information, which are detailed in the dsEntityDescr , dsEntityVers , dsEntityLocation , and dsEntityContact variables. DirectoryServerStart. This trap is generated whenever the subagent detects that the Directory Server has started or restarted.
This trap will be sent with the Directory Server instance description, version, physical location, and contact information, which are detailed in the dsEntityDescr , dsEntityVers , dsEntityLocation , and dsEntityContact variables. 21.10.6. Using the Management Information Base The Directory Server's MIB is a file called redhat-directory.mib stored in the /usr/share/dirsrv/mibs directory. This MIB contains definitions for variables pertaining to network management for the directory. These variables are known as managed objects. Using the directory MIB and Net-SNMP, you can monitor your directory like all other managed devices on your network. For more information on using the MIB, see Section 21.10.4, "Setting up an SNMP Agent for Directory Server" . The client tools need to load the Directory Server MIB to use the variable names listed in the following sections. Using the directory MIB enables administrators to use SNMP to see administrative information about the directory and monitor the server in real-time. The directory MIB is broken into four distinct tables of managed objects: Section 21.10.6.1, "Operations Table" Section 21.10.6.2, "Entries Table" Section 21.10.6.3, "Entity Table" Section 21.10.6.4, "Interaction Table" Note All of the Directory Server attributes monitored by SNMP use 64-bit integers for the counters, even on 32-bit systems. 21.10.6.1. Operations Table The Operations Table provides statistical information about Directory Server access, operations, and errors. Table 21.1, "Operations Table: Managed Objects and Descriptions" describes the managed objects stored in the Operations Table of the redhat-directory.mib file. Table 21.1. Operations Table: Managed Objects and Descriptions Managed Object Description dsAnonymousBinds The number of anonymous binds to the directory since server startup. dsUnauthBinds The number of unauthenticated binds to the directory since server startup. dsSimpleAuthBinds The number of binds to the directory that were established using a simple authentication method (such as password protection) since server startup. dsStrongAuthBinds The number of binds to the directory that were established using a strong authentication method (such as TLS or a SASL mechanism like Kerberos) since server startup. dsBindSecurityErrors The number of bind requests that have been rejected by the directory due to authentication failures or invalid credentials since server startup. dsInOps The number of operations forwarded to this directory from another directory since server startup. dsReadOps The number of read operations serviced by this directory since application start. The value of this object will always be 0 because LDAP implements read operations indirectly using the search operation. dsCompareOps The number of compare operations serviced by this directory since server startup. dsAddEntryOps The number of add operations serviced by this directory since server startup. dsRemoveEntryOps The number of delete operations serviced by this directory since server startup. dsModifyEntryOps The number of modify operations serviced by this directory since server startup. dsModifyRDNOps The number of modify RDN operations serviced by this directory since server startup. dsListOps The number of list operations serviced by this directory since server startup. The value of this object will always be 0 because LDAP implements list operations indirectly using the search operation. 
dsSearchOps The total number of search operations serviced by this directory since server startup. dsOneLevelSearchOps The number of one-level search operations serviced by this directory since server startup. dsWholeSubtreeSearchOps The number of whole subtree search operations serviced by this directory since server startup. dsReferrals The number of referrals returned by this directory in response to client requests since server startup. dsSecurityErrors The number of operations forwarded to this directory that did not meet security requirements. dsErrors The number of requests that could not be serviced due to errors (other than security or referral errors). Errors include name errors, update errors, attribute errors, and service errors. Partially serviced requests will not be counted as an error. 21.10.6.2. Entries Table The Entries Table provides information about the contents of the directory entries. Table 21.2, "Entries Table: Managed Objects and Descriptions" describes the managed objects stored in the Entries Table in the redhat-directory.mib file. Table 21.2. Entries Table: Managed Objects and Descriptions Managed Object Description dsCopyEntries The number of directory entries for which this directory contains a copy.The value of this object will always be 0 (as no updates are currently performed). dsCacheEntries The number of entries cached in the directory. dsCacheHits The number of operations serviced from the locally held cache since application startup. 21.10.6.3. Entity Table The Entity Table contains identifying information about the Directory Server instance. The values for the Entity Table are set in cn=SNMP,cn=config entry as described in Section 21.10.3, "Setting Parameters to Identify an Instance Using SNMP" . Table 21.3, "Entity Table: Managed Objects and Descriptions" describes the managed objects stored in the Entity Table of the redhat-directory.mib file. Table 21.3. Entity Table: Managed Objects and Descriptions Managed Object Description dsEntityDescr The description set for the Directory Server instance. dsEntityVers The Directory Server version number of the Directory Server instance. dsEntityOrg The organization responsible for the Directory Server instance. dsEntityLocation The physical location of the Directory Server instance. dsEntityContact The name and contact information for the person responsible for the Directory Server instance. dsEntityName The name of the Directory Server instance. 21.10.6.4. Interaction Table Note The Interaction Table is not supported by the subagent. The subagent can query the table, but it will not ever be updated with valid data. Table 21.4, "Interaction Table: Managed Objects and Descriptions" describes the managed objects stored in the Interaction Table of the redhat-directory.mib file. Table 21.4. Interaction Table: Managed Objects and Descriptions Managed Object Description dsIntTable Details, in each row of the table, related to the history of the interaction of the monitored Directory Servers with their respective peer Directory Servers. dsIntEntry The entry containing interaction details of a Directory Server with a peer Directory Server. dsIntIndex Part of the unique key, together with applIndex , to identify the conceptual row which contains useful information on the (attempted) interaction between the Directory Server (referred to by applIndex ) and a peer Directory Server. dsName The distinguished name (DN) of the peer Directory Server to which this entry belongs. 
dsTimeOfCreation The value of sysUpTime when this row was created. If the entry was created before the network management subsystem was initialized, this object will contain a value of zero. dsTimeOfLastAttempt The value of sysUpTime when the last attempt was made to contact this Directory Server. If the last attempt was made before the network management subsystem was initialized, this object will contain a value of zero. dsTimeOfLastSuccess The value of sysUpTime when the last attempt made to contact this Directory Server was successful. This entry will have a value of zero if there have been no successful attempts or if the last successful attempt was made before the network management subsystem was initialized. dsFailuresSinceLastSuccess The number of failures since the last time an attempt to contact this Directory Server was successful. If there has been no successful attempts, this counter will contain the number of failures since this entry was created. dsFailures Cumulative failures since the creation of this entry. dsSuccesses Cumulative successes since the creation of this entry. dsURL The URL of the Directory Server application.
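As a hedged example of putting the MIB to use, the query below follows the same form as the snmpwalk verification command shown in the agent setup procedure, but asks for a single managed object by name instead of walking the numeric OID subtree. The SNMPv3 user name, passwords, and host are placeholders, and the instance index appended to the returned value depends on your deployment.

    # Query the dsAnonymousBinds counter from the Operations Table by name,
    # loading the Directory Server MIB so that symbolic names resolve
    snmpwalk -v3 -u user_name -l AuthPriv -A authentication_password -a SHA \
        -X private_password -x AES \
        -M /usr/share/snmp/mibs:/usr/share/dirsrv/mibs/ -m +RHDS-MIB \
        server.example.com RHDS-MIB::dsAnonymousBinds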
[ "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=SNMP,cn=config changetype: modify replace: nsSNMPEnabled nsSNMPEnabled: on", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn=SNMP,cn=config changetype: modify replace: nsSNMPLocation nsSNMPLocation: Munich, Germany", "yum install 389-ds-base-snmp net-snmp", "master agentx", "server slapd- instance_name", "systemctl stop snmpd", "net-snmp-create-v3-user -A authentication_password -a SHA -X private_password -x AES user_name", "systemctl start snmpd", "systemctl start dirsrv-snmp", "yum install net-snmp-utils", "snmpwalk -v3 -u user_name -M /usr/share/snmp/mibs:/usr/share/dirsrv/mibs/ -l AuthPriv -m +RHDS-MIB -A authentication_password -a SHA -X private_password -x AES server.example.com .1.3.6.1.4.1.2312.6.1.1" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/monitoring_ds_using_snmp
Chapter 6. Notable Bug Fixes
Chapter 6. Notable Bug Fixes This chapter describes bugs fixed in Red Hat Enterprise Linux 7 that have a significant impact on users. 6.1. Authentication and Interoperability Directory Server rebased to version 1.3.10 The 389-ds-base packages have been upgraded to upstream version 1.3.10, which provides a number of bug fixes over the previous version. ( BZ#1740693 ) Directory Server now correctly logs the search base if the server rejects a search operation Previously, when Directory Server rejected a search operation because of a protocol error, the server logged base="(null)" instead of the actual search base. With this update, Directory Server passes the correct internal variable to the log operation. As a result, the server correctly logs the search base in the mentioned scenario. ( BZ#1662461 ) Directory Server improved the logging of the etime value Previously, if an operation started and completed at the border of a second and the operation took less than one second, Directory Server logged an incorrectly calculated etime value. As a consequence, the logged value was too big. This update fixes the problem. As a result, the calculated etime value is now closer to the start and end time stamp. ( BZ#1732053 ) Directory Server now logs the correct etime value in the access log Previously, Directory Server incorrectly formatted the etime field in the /var/log/dirsrv/slapd-<instance_name>/access log file. As a consequence, the time value in nanoseconds was 10 times lower than the actual value. This update fixes the problem. As a result, Directory Server now logs the correct nanosecond value in the etime field. ( BZ#1749236 ) The severity of a Directory Server log message has been changed Previously, Directory Server incorrectly logged Event <event_name> should not occur in state <state_name>; going to sleep messages as error . This update changes the severity of this message to warning . ( BZ#1639342 ) Directory Server is RFC 4511-compliant when searching for the 1.1 and other attributes in one request To retrieve only a list of matching distinguished names (DN), LDAP users can search for the 1.1 special attribute. According to RFC 4511, if an LDAP client searches for the 1.1 special attribute in combination with other attributes in one search request, the server must ignore the 1.1 special attribute. Previously, when a Directory Server user searched for the 1.1 special attribute and other attributes in the same search request, the server returned no attributes. This update fixes the problem. As a result, Directory Server is RFC 4511-compliant in the mentioned scenario. ( BZ#1723545 ) Directory Server returns password policy controls in the correct order Previously, if the password of a user expired, Directory Server returned password policy controls in a different order depending on whether grace logins were exhausted or not. Consequently, this sometimes caused problems in LDAP clients compliant with the RFC 4511 standard. This update fixes the problem, and as a result, Directory Server returns password policy controls in the correct order. (BZ#1724914) Directory Server now also applies limits for maximum concurrent cleanAllRUV tasks received from extended operations Directory Server supports up to 64 concurrent cleanAllRUV tasks. Previously, Directory Server applied this limit only to manually created tasks and not to tasks the server received from extended operations. As a consequence, more than 64 concurrent cleanAllRUV tasks could run at the same time and slow down the server.
This update adds a counter to track the number of clean tasks and abort threads. As a result, only up to 64 concurrent cleanAllRUV tasks can run at the same time. ( BZ#1739182 ) Importing large LDIF files to Directory Server databases with many nested-subtrees is now significantly faster Previously, if the Directory Server database contained many nested sub-trees, importing a large LDIF file using the ldif2db and ldif2db.pl utilities was slow. With this update, Directory Server adds the ancestorid index after all entries. As a result, importing LDIF files to a database with many nested sub-trees is now significantly faster. ( BZ#1749595 ) Directory Server now processes new operations only after a SASL bind fully initialized the connection During a bind using the Simple Authentication and Security Layer (SASL) framework, Directory Server initializes a set of callback functions. Previously, if Directory Server received an additional operation on the same connection during a SASL bind, this operation could access and use the callback functions even if they were not fully initialized. Consequently, the Directory Server instance terminated unexpectedly. With this update, the server prevents operations from accessing and using the callback structure until the SASL bind is successfully initialized. As a result, Directory Server no longer crashes in this situation. ( BZ#1756182 ) The cl-dump.pl and cl-dump utilities now remove temporary files after exporting the change log Previously, the cl-dump.pl and cl-dump utilities in Directory Server created temporary LDIF files in the /var/lib/dirsrv/slapd-<instance_name>/changelogdb/ directory. After the change log was exported, the utilities renamed the temporary files to *.done . As a consequence, if the temporary files were large, this could result in low free disk space. With this update, by default, cl-dump.pl and cl-dump now delete the temporary files at the end of the export. Additionally, the -l option has been added to both utilities to manually preserve the temporary files. As a result, cl-dump.pl and cl-dump free the disk space after exporting the change log or user can optionally enforce the old behavior by using the -l option. ( BZ#1685059 ) IdM configures the Apache NSS module to use only TLS 1.2 when installing or updating an IdM server or replica Previously, when an administrator installed an Identity Management (IdM) server or replica, the installer enabled the TLS 1.0, TLS 1.1, and TLS 1.2 protocols in the Apache web server's network security service (NSS) module. This update provides the following changes: When you set up a new server or replica, IdM only enables the strong TLS 1.2 protocol. On existing IdM servers and replicas, this update disables the weak TLS 1.0 and TLS 1.1 protocols. As a result, new and updated IdM servers and replicas use only the strong TLS 1.2 protocol in the Apache web server's NSS module. ( BZ#1711172 ) IdM now correctly updates the certificate record in the cn=CAcert,cn=ipa,cn=etc,<base_DN> entry Previously, after renewing the Identity Management (IdM) certificate authority (CA) certificate or modifying the CA certificate chain, IdM did not update the certificate record stored in the cn=CAcert,cn=ipa,cn=etc,<base_DN> entry. As a consequence, installations of IdM clients on RHEL 6 failed. With this update, IdM now updates the certificate record in cn=CAcert,cn=ipa,cn=etc,<base_DN> . 
As a result, installing IdM on RHEL 6 now succeeds after the administrator renews the CA certificate or updates the certificate chain on the IdM CA. ( BZ#1544470 ) The ipa-replica-install utility now verifies that the server specified in --server provides all required roles The ipa-replica-install utility provides a --server option to specify the Identity Management (IdM) server which the installer should use for the enrollment. Previously, ipa-replica-install did not verify that the supplied server provided the certificate authority (CA) and key recovery authority (KRA) roles. As a consequence, the installer replicated domain data from the specified server and CA data from a different server that provided the CA and KRA roles. With this update, ipa-replica-install verifies that the specified server provides all required roles. As a result, if the administrator uses the --server option, ipa-replica-install only replicates data from the specified server. ( BZ#1754494 ) ipa sudorule-add-option no longer shows a false error when options are added to an existing sudo rule Previously, when a sudo rule already contained hosts, hostgroups, users, or usergroups, the ipa sudorule-add-option command incorrectly processed the sudo rule content. Consequently, the ipa sudorule-add-option command used with the sudooption argument returned an error despite completing successfully. This bug has been fixed, and ipa sudorule-add-option now displays an accurate output in the described scenario. (BZ#1691939) IdM no longer drops all custom attributes when moving an account from preserved to stage Previously, IdM processed only some of the attributes defined in a preserved account. Consequently, when moving an account from preserved to stage, all the custom attributes were lost. With this update, IdM processes all the attributes defined in a preserved account and the described problem no longer occurs. (BZ#1583950) Sub-CA key replication no longer fails Previously, a change to the credential cache (ccache) behaviour in the Kerberos library caused lightweight Certificate Authority (CA) key replication to fail. This update adapts the IdM lightweight CA key replication client code to the changed ccache behaviour. As a result, the lightweight CA key replication now works correctly. ( BZ#1755223 ) Certificate System now records audit events if the system acts as a client to other subsystems or to the LDAP server Previously, Certificate System did not contain audit events if the system acted as a client to other subsystems or to the LDAP server. As a consequence, the server did not record any events in this situation. This update adds the CLIENT_ACCESS_SESSION_ESTABLISH_FAILURE , CLIENT_ACCESS_SESSION_ESTABLISH_SUCCESS , and CLIENT_ACCESS_SESSION_TERMINATED events to Certificate System. As a result, Certificate System records these events when acting as a client. (BZ#1523330) The python-kdcproxy library no longer drops large Kerberos replies Previously, if an Active Directory Kerberos Distribution Center (KDC) split large Kerberos replies into multiple TCP packets, the python-kdcproxy library dropped these packages. This update fixes the problem. As a result, python-kdcproxy processes large Kerberos replies correctly. ( BZ#1746107 ) 6.2. Compiler and Tools Socket::inet_aton() can now be used from multiple threads safely Previously, the Socket::inet_aton() function, used for resolving a domain name from multiple Perl threads, called the unsafe gethostbyname() glibc function. 
Consequently, an incorrect IPv4 address was occasionally returned, or the Perl interpreter terminated unexpectedly. With this update, the Socket::inet_aton() implementation has been changed to use the thread-safe getaddrinfo() glibc function instead of gethostbyname() . As a result, the inet_aton() function from Perl Socket module can be used from multiple threads safely. ( BZ#1693293 ) sosreport now generates HTML reports faster Previously, when the sosreport utility collected tens of thousands of files, generation of HTML report was very slow. This update provides changes to the text report code, improving the report structure and formatting. Additionally, support for reports in the JSON file format has been added. As a result, HTML reports are now generated without delay. ( BZ#1704957 ) 6.3. Desktop 32- and 64-bit fwupd packages can now be used together when installing or upgrading the system Previously, the /usr/lib/systemd/system/fwupd.service file in the fwupd packages was different for 32- and 64-bit architectures. Consequently, it was impossible to install both 32- and 64-bit fwupd packages or to upgrade a Red Hat Enterprise Linux 7.5 system with both 32- and 64-bit fwupd packages to a later version. This update fixes fwupd so that the /usr/lib/systemd/system/fwupd.service file is same for both 32- and 64-bit architectures. As a result, installing both 32- and 64-bit fwupd packages, or upgrading a Red Hat Enterprise Linux 7.5 system with both 32- and 64-bit fwupd packages to a later version is now possible. (BZ#1623466) A memory leak in libteam has been fixed Previously, the libteam library used an incorrect JSON API when a user queried the status of a network team. As a consequence, the teamdctl <team_device> state command leaked memory. With this update, the library uses the correct API, and querying the status of a team no longer leaks memory. ( BZ#1704451 ) 6.4. Installation and Booting The installation program correctly sets the connection type for Kickstart network team devices Previously, the installation program used the TYPE="Team" parameter instead of the DEVICETYPE="Team" parameter to specify the connection type in the ifcfg file that is created for Kickstart network team devices. As a consequence, any network team devices using network service were not activated during the boot process. With this update, the installation program uses the DEVICETYPE parameter to specify the connection type in the ifcfg file. As a result, Kickstart network team devices are activated during the boot process even if the system is using network service for network configuration, for example, the NetworkManager service is disabled. (BZ#1680606) The installation program correctly handles an exception when GTK is not installed Previously, the installation program failed to handle an exception when the GTK GUI toolkit was not installed in the environment. As a consequence, the exception was not communicated to the user. With this update, the installation program correctly handles an exception when the GTK GUI toolkit is not installed, and the user is also notified of the exception. (BZ#1712987) 6.5. Kernel The IBM Z systems no longer become unresponsive when using certain BCC tools Previously, due to a bug in the kernel, running dcsnoop , runqlen , and slabratetop utilities from the bcc-tools package caused the IBM Z systems to become unresponsive. This update fixes the problem and IBM Z systems no longer hang in the described scenario. 
(BZ#1724027) Virtual machines no longer enable unnecessary CPU vulnerability mitigation Previously, the MDS_NO CPU flags, which indicate that the CPU was not vulnerable to the Microarchitectural Data Sampling (MDS) vulnerability, were not exposed to guest operating systems when the virtual machine was using CPU host-passthrough. As a consequence, the guest operating system in some cases automatically enabled CPU vulnerability mitigation features that were not necessary for the host. This update ensures that the MDS_NO flag is properly visible to the guest operating system when using CPU host-passthrough, which prevents the described problem from occurring. (BZ#1708465, BZ#1677209 ) Disabling logging in the nf-logger framework has been fixed Previously, when an admin used the sysctl or echo commands to turn off an assigned netfilter logger, a NUL -character was not added to the end of the NONE string. Consequently, the strcmp() function failed with a No such file or directory error. This update fixes the problem. As a result, commands, such as sysctl net.netfilter.nf_log.2=NONE work as expected and turn off logging. (BZ#1770232) Resuming from hibernation now works on the megaraid_sas driver Previously, when the megaraid_sas driver resumed from hibernation, the Message Signaled Interrupts (MSIx) allocation did not work correctly. As a consequence, resuming from hibernation failed, and restarting the system was required. This bug has been fixed, and resuming from hibernation now works as expected. (BZ#1807077) Kdump no longer fails in the second kernel Previously, the kdump initramfs image could fail in the second kernel after a disk migration or installation of a new machine with a disk image. This update adds the kdumpctl rebuild command for rebuilding the kdump initramfs image. As a result, users can now rebuild initramfs to ensure that kdump does not fail in the second kernel. (BZ#1723492) 6.6. Real-Time Kernel The latency for isolated CPU's is now reduced by avoiding spurious ktimersoftd activation Previously, for a KVM-RT configured system, per-CPU ktimersoftd kernel threads ran once every second even in absence of a timer. Consequently, an increased latency occurred on the isolated CPU's. This update adds an optimization into the real-time kernel that does not wake the ktimersoftd on every tick. As a result, ktimersoftd is not raised on isolated CPU's, which prevents the interference and reduces the latency. ( BZ#1550584 ) 6.7. Networking The tc filter show command now displays filters correctly when the handle is 0xffffffff Previously, a bug in the TC flower code caused an undesired integer overflow. As a consequence, dumping a flower rule that used 0xffffffff as a handle could result in an infinite loop. This update prevents the integer overflow on 64-bit architectures. As a result, tc filter show no longer loops in this scenario, and filters are now shown correctly. (BZ#1712737) The kernel no longer crashes when attempting to apply an invalid TC rule Previously, while attempting to replace a traffic control (TC) rule with a rule having an invalid goto chain parameter, a kernel crash occurred. With this update, the kernel avoids a NULL dereference in the described scenario. As a result, the kernel no longer crashes, and an error message is logged instead. (BZ#1712918) The kernel now correctly updates PMTU when receiving ICMPv6 Packet Too Big message In certain situations, such as for link-local addresses, more than one route can match a source address. 
Previously, the kernel did not check the input interface when receiving Internet Control Message Protocol Version 6 (ICMPv6) packets. Therefore, the route lookup could return a destination that did not match the input interface. Consequently, when receiving an ICMPv6 Packet Too Big message, the kernel could update the Path Maximum Transmission Unit (PMTU) for a different input interface. With this update, the kernel checks the input interface during the route lookup. As a result, the kernel now updates the correct destination based on the source address and PMTU works as expected in the described scenario. (BZ#1722686) MACsec no longer drops valid frames Previously, if the cryptographic context for AES-GCM was not completely initialized, decryption of incoming frames failed. Consequently, MACsec dropped valid incoming frames, and increased the InPktsNotValid counter. With this update, the initialization of the cryptographic context has been fixed. Now, decryption with AES-GCM succeeds, and MACsec no longer drops valid frames. (BZ#1698551) The kernel no longer crashes when goto chain is used as a secondary TC control action Previously, when the act gact and act police traffic control (TC) rules used an invalid goto chain parameter as a secondary control action, the kernel terminated unexpectedly. With this update, the kernel avoids using goto chain with a NULL dereference and no longer crashes in the described scenario. Instead, the kernel returns an -EINVAL error message. (BZ#1729033) Kernel no longer allows adding duplicate rules with NLM_F_EXCL set Previously, the kernel never checked the rule content when a new policy routing rule was added. Consequently, the kernel could have added two rules that were exactly the same. This complicated the rule set which could cause problems when NetworkManager tried to cache the rules. With this update, the NLM_F_EXCL flag has been added to the kernel. Now, when a rule is added and the flag is set, the kernel checks the rule content, and returns an EEXIST error if the rule already exists. As a result, the kernel no longer adds duplicate rules. (BZ#1700691) The ipset list command reports consistent memory for hash set types When you add entries to a hash set type, the ipset utility must resize the in-memory representation to make room for new entries by allocating an additional memory block. Previously, ipset set the total per-set allocated size to only the size of the new block instead of adding the value to the current in-memory size. As a consequence, the ipset list command reported an inconsistent memory size. With this update, ipset correctly calculates the in-memory size. As a result, the ipset list command now displays the correct in-memory size of the set, and the output matches the actual allocated memory for hash set types. ( BZ#1711520 ) firewalld no longer attempts to create IPv6 rules if the IPv6 protocol is disabled Previously, if the IPv6 protocol was disabled, the firewalld service incorrectly attempted to create rules using the ip6tables utility, even though ip6tables should not be usable. As a consequence, when firewalld initialized the firewall, the service logged error messages. This update fixes the problem, and firewalld now only initializes IPv4 rules if IPv6 is disabled. ( BZ#1738785 ) The --remove-rules option of firewall-cmd now removes only direct rules that match the specified criteria Previously, the --remove-rules option of the firewall-cmd command did not check the rules to remove.
As a consequence, the command removed all direct rules instead of only the rules that matched the specified criteria. This update fixes the problem. As a result, firewall-cmd now removes only direct rules that match the specified criteria. (BZ#1723610) Deleting a firewalld rich rule with forward-ports now works as expected Previously, the firewalld service incorrectly handled the deletion of rules with the forward-ports setting. As a consequence, deleting a rich rule with forward-ports from the runtime configuration failed. This update fixes the problem. As a result, deleting a rich rule with forward-ports works as expected. ( BZ#1637675 ) Packets no longer drift to other zones and cause unexpected behavior Previously, when setting up rules in one zone, the firewalld daemon allowed the packets to be affected by multiple zones. This behavior violated the firewalld zone concept, in which packets may only be part of a single zone. This update fixes the bug and firewalld now prevents packets from being affected by multiple zones. Warning: This change may affect the availability of some services if the user was knowingly or unknowingly relying on the zone drifting behavior. ( BZ#1713823 ) 6.8. Security Accessibility of OpenSCAP HTML reports has been improved Previously, an Accessible Rich Internet Applications (ARIA) parameter was incorrectly defined in OpenSCAP HTML reports. As a consequence, rule details in the reports were not accessible to users of screen-reading software. With this update, the template for report generation has been changed. As a result, users with screen readers can now navigate through rule details and interact with links and buttons. ( BZ#1767826 ) SELinux policy now allows sysadm_u users to use semanage with sudo Previously, SELinux policy was missing rules to allow users with the sysadm_u label to use the semanage command with the sudo command. As a consequence, sysadm_u users could not configure SELinux on the system. This update adds the missing rules, and SELinux users labeled as sysadm_u can now change SELinux configurations. ( BZ#1651253 ) 6.9. Servers and Services Manual initialization of MariaDB using mysql_install_db no longer fails Prior to this update, the mysql_install_db script for initializing the MariaDB database called the resolveip binary from the /usr/libexec/ directory, while the binary was located in /usr/bin/ . Consequently, manual initialization of the database using mysql_install_db failed. This update fixes mysql_install_db to correctly locate resolveip . As a result, manual initialization of MariaDB using mysql_install_db no longer fails. (BZ#1731062) ReaR updates RHEL 7.8 introduces a number of updates to the Relax-and-Recover ( ReaR ) utility. The build directory handling has been changed. Previously, the build directory was kept in a temporary location in case ReaR encountered a failure. With this update, the build directory is deleted by default in non-interactive runs to prevent consuming disk space. The semantics of the KEEP_BUILD_DIR configuration variable have been enhanced to include a new errors value. You can set the KEEP_BUILD_DIR variable to the following values: errors to preserve the build directory on errors for debugging (the previous behavior) y ( true ) to always preserve the build directory n ( false ) to never preserve the build directory (see the configuration sketch at the end of this chapter) The default value is an empty string, which means errors when ReaR is executed interactively (in a terminal) and false when ReaR is executed non-interactively.
Note that KEEP_BUILD_DIR is automatically set to true in debug mode ( -d ) and in debugscript mode ( -D ); this behavior has not been changed. Notable bug fixes include: Support for NetBackup 8.0 has been fixed. ReaR no longer aborts with a bash error similar to xrealloc: cannot allocate on systems with a large number of users, groups, and users per group. The bconsole command now shows its prompt, which enables you to perform a restore operation when using the Bacula integration. ReaR now correctly backs up files also in situations when the docker service is running but no docker root directory has been defined, or when it is impossible to determine the status of the docker service. Recovery no longer fails when using thin pools or recovering a system in Migration Mode. Extremely slow rebuild of initramfs during the recovery process with LVM has been fixed. ( BZ#1693608 ) 6.10. Storage Concurrent SG_IO requests in /dev/sg no longer cause data corruption Previously, the /dev/sg device driver was missing synchronization of kernel data. Concurrent requests on the same file descriptor accessed the same data at the same time in the driver. As a consequence, the ioctl system call sometimes erroneously used the payload of an SG_IO request for a different command that was sent at the same time as the correct one. This led to disk corruption in certain cases. Red Hat observed this bug in Red Hat Virtualization (RHV). With this release, concurrency protection has been added in /dev/sg , and the described problem no longer occurs. (BZ#1710533) When an image is split off from an active/active cluster mirror, the resulting logical volume is now properly activated Previously, when you split off an image from an active/active cluster mirror, the resulting new logical volume appeared active but it had no active component. With this fix, the new logical volume is properly activated. ( BZ#1642162 )
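To make the KEEP_BUILD_DIR semantics described in the ReaR notes above concrete, the following is a minimal configuration sketch. It assumes the conventional /etc/rear/local.conf override file and shows only this one variable.

    # /etc/rear/local.conf (sketch)
    # Keep the temporary build directory only when a run fails, so it can be
    # inspected for debugging without consuming disk space after successful runs.
    KEEP_BUILD_DIR="errors"
    # Alternatives: "y" always preserves the build directory, "n" never does;
    # debug runs (rear -d or -D) force it to true regardless of this setting.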
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/bug_fixes
7.210. systemtap
7.210. systemtap 7.210.1. RHBA-2015:1333 - systemtap bug fix and enhancement update Updated systemtap packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. SystemTap is an instrumentation system for systems running the Linux kernel, which allows developers to write scripts to collect data on the operation of the system. Note The systemtap packages have been upgraded to upstream version 2.7, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1158682 ) Bug Fixes BZ# 1118352 Previously, some startup-time scripts required the "uprobes.ko" module to be built, installed, or loaded, but the init script did not identify whether and how to do so. A patch has been applied to fix this bug, and the init script now performs the appropriate operations. BZ# 1147647 Prior to this update, the systemtap scripts caused the "scheduling while atomic" error when running on the Messaging Real-time Grid kernel. To fix this bug, patches have been applied, and the error no longer occurs. BZ# 1195839 The systemtap system call tapset unconditionally included support for the "execveat" system call, even though "execveat" did not exist in Red Hat Enterprise Linux 6 kernels. Consequently, system call probing scripts could fail with a semantic error. With this update, "execveat" is treated conditionally, and the scripts no longer fail in this situation. Users of systemtap are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
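For readers unfamiliar with the scripts this erratum refers to, the following is a generic SystemTap sketch rather than anything taken from the erratum itself. It counts open() system calls per interval; it assumes the open probe point of the system call tapset is available on this Red Hat Enterprise Linux 6 systemtap, and it requires the matching kernel debuginfo packages because the system call tapset uses DWARF probes.

    # count-opens.stp -- run with: stap count-opens.stp
    global opens
    probe begin { printf("collecting, press Ctrl+C to stop\n") }
    probe syscall.open { opens++ }            # count every open() entry
    probe timer.s(5) {
        printf("open() calls in the last 5 seconds: %d\n", opens)
        opens = 0
    }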
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-systemtap
Chapter 20. Configuring a custom PKI
Chapter 20. Configuring a custom PKI Some platform components, such as the web console, use Routes for communication and must trust other components' certificates to interact with them. If you are using a custom public key infrastructure (PKI), you must configure it so its privately signed CA certificates are recognized across the cluster. You can leverage the Proxy API to add cluster-wide trusted CA certificates. You must do this either during installation or at runtime. During installation , configure the cluster-wide proxy . You must define your privately signed CA certificates in the install-config.yaml file's additionalTrustBundle setting. The installation program generates a ConfigMap that is named user-ca-bundle that contains the additional CA certificates you defined. The Cluster Network Operator then creates a trusted-ca-bundle ConfigMap that merges these CA certificates with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle; this ConfigMap is referenced in the Proxy object's trustedCA field. At runtime , modify the default Proxy object to include your privately signed CA certificates (part of cluster's proxy enablement workflow). This involves creating a ConfigMap that contains the privately signed CA certificates that should be trusted by the cluster, and then modifying the proxy resource with the trustedCA referencing the privately signed certificates' ConfigMap. Note The installer configuration's additionalTrustBundle field and the proxy resource's trustedCA field are used to manage the cluster-wide trust bundle; additionalTrustBundle is used at install time and the proxy's trustedCA is used at runtime. The trustedCA field is a reference to a ConfigMap containing the custom certificate and key pair used by the cluster component. 20.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 20.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a ConfigMap that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The ConfigMap name that will be referenced from the Proxy object. 4 The ConfigMap must be in the openshift-config namespace. Create the ConfigMap from this file: $ oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: $ oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster.
3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the ConfigMap in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the ConfigMap must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 20.3. Certificate injection using Operators Once your custom CA certificate is added to the cluster via ConfigMap, the Cluster Network Operator merges the user-provided and system CA certificates into a single bundle and injects the merged bundle into the Operator requesting the trust bundle injection. Operators request this injection by creating an empty ConfigMap with the following label: config.openshift.io/inject-trusted-cabundle="true" An example of the empty ConfigMap: apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: "true" name: ca-inject 1 namespace: apache 1 Specifies the empty ConfigMap name. The Operator mounts this ConfigMap into the container's local trust store. Note Adding a trusted CA certificate is only needed if the certificate is not included in the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle. Certificate injection is not limited to Operators. The Cluster Network Operator injects certificates across any namespace when an empty ConfigMap is created with the config.openshift.io/inject-trusted-cabundle=true label. The ConfigMap can reside in any namespace, but the ConfigMap must be mounted as a volume to each container within a pod that requires a custom CA. For example: apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: ... spec: ... containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2 1 ca-bundle.crt is required as the ConfigMap key. 2 tls-ca-bundle.pem is required as the ConfigMap path.
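Equivalently to the ca-inject YAML example above, the empty ConfigMap that requests trust bundle injection can be created and labeled imperatively. This sketch reuses the same illustrative ca-inject name and apache namespace and assumes that namespace already exists.

    # Create the empty ConfigMap and add the injection label
    $ oc create configmap ca-inject -n apache
    $ oc label configmap ca-inject -n apache config.openshift.io/inject-trusted-cabundle=true
    # Once the Cluster Network Operator has injected the merged bundle,
    # the ca-bundle.crt key should appear in the ConfigMap data
    $ oc get configmap ca-inject -n apache -o jsonpath='{.data.ca-bundle\.crt}' | head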
[ "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: http://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/configuring-a-custom-pki
Chapter 3. Using the Block Storage backup service
Chapter 3. Using the Block Storage backup service You can use the Block Storage backup service to perform full or incremental backups, and to restore a backup to a volume. 3.1. Full backups The cinder backup-create command creates a full backup of the volume by default. You can create backups of volumes you have access to. This means that users with administrative privileges can back up any volume, regardless of owner. 3.1.1. Creating a full volume backup To back up a volume, use the cinder backup-create command. By default, this command creates a full backup of the volume. If the volume has existing backups, you can choose to create an incremental backup instead. For more information, see Section 3.2.3, "Performing incremental backups" . Note Before Red Hat OpenStack Platform (RHOSP) version 16, the cinder backup-create command created incremental backups after the first full Ceph volume backup to a Ceph Storage back end. In RHOSP version 16 and later, you must use the --incremental option to create incremental volume backups. If you do not use the --incremental option with the cinder backup-create command, the default setting creates full backups. For more information, see Section 3.2.3, "Performing incremental backups" . You can create backups of volumes you have access to. This means that users with administrative privileges can back up any volume, regardless of owner. For more information, see Section 3.1.2, "Creating a volume backup as an admin" . Procedure View the ID or Display Name of the volume you want to back up: Back up the volume: Replace VOLUME with the ID or Display Name of the volume you want to back up. For example: The volume_id of the resulting backup is identical to the ID of the source volume. Verify that the volume backup creation is complete: The volume backup creation is complete when the Status of the backup entry is available. 3.1.2. Creating a volume backup as an admin Users with administrative privileges can back up any volume managed by Red Hat OpenStack Platform. When an admin user backs up a volume that is owned by a non-admin user, the backup is hidden from the volume owner by default. Procedure As an admin user, you can use the following command to back up a volume and make the backup available to a specific tenant: Replace the following variables according to your environment requirements: <TENANTNAME> is the name of the tenant where you want to make the backup available. <USERNAME> and <PASSWD> are the username and password credentials of a user within <TENANTNAME>. <VOLUME> is the name or ID of the volume that you want to back up. <KEYSTONEURL> is the URL endpoint of the Identity service, which is typically http:// IP :5000/v2, where IP is the IP address of the Identity service host. When you perform this operation, the size of the resulting backup counts against the quota of TENANTNAME rather than the quota of the tenant admin. 3.1.3. Exporting the metadata of a volume backup You can export and store the metadata of a volume backup so that you can restore the volume backup even if the Block Storage database suffers a catastrophic loss. Procedure Run the following command: Replace <BACKUPID> with the ID or name of the volume backup: The volume backup metadata consists of the backup_service and backup_url values. 3.1.4. Backing up an in-use volume You can create a backup of an in-use volume with the --force option when the Block Storage back end snapshot is supported. Note To use the --force option, the Block Storage back end snapshot must be supported. 
You can verify snapshot support by checking the documentation for the back end that you are using. By using the --force option, you acknowledge that you are not quiescing the drive before performing the backup. Using this method creates a crash-consistent, but not application-consistent, backup. This means that the backup does not have an awareness of which applications were running when the backup was performed. However, the data is intact. Procedure To create a backup of an in-use volume, run: 3.1.5. Backing up a snapshot You can create a full backup from a snapshot by using the volume ID that is associated with the snapshot. Procedure Locate the snapshot ID of the snapshot to backup using cinder snapshot list . If the snapshot is named, then you can use the following example to locate the ID : Create the backup of a snapshot: Note Snapshot-based backups of NFS volumes fail when you use the --snapshot-id option. This is a known issue. 3.1.6. Backing up and restoring across edge sites You can back up and restore Block Storage service (cinder) volumes across distributed compute node (DCN) architectures in edge site and availability zones. The cinder-backup service runs in the central availability zone (AZ), and backups are stored in the central AZ. The Block Storage service does not store backups at DCN sites. Prerequisites The central site is deployed with the cinder-backup.yaml environment file located in /usr/share/openstack-tripleo-heat-templates/environments . For more information, see Block Storage backup service deployment . The Block Storage service (cinder) CLI is available. All sites must use a common openstack cephx client name. For more information, see Creating a Ceph key for external access . Procedure Create a backup of a volume in the first DCN site: Replace <volume_backup> with a name for the volume backup. Replace <az_central> with the name of the central availability zone that hosts the cinder-backup service. Replace <edge_volume> with the name of the volume that you want to back up. Note If you experience issues with Ceph keyrings, you might need to restart the cinder-backup container so that the keyrings copy from the host to the container successfully. Restore the backup to a new volume in the second DCN site: Replace <az_2> with the name of the availability zone where you want to restore the backup. Replace <new_volume> with a name for the new volume. Replace <volume_backup> with the name of the volume backup that you created in the step. Replace <volume_size> with a value in GB equal to or greater than the size of the original volume. 3.2. Incremental backups If a volume has existing backups, you can use the Block Storage backup service to create an incremental backup instead. 3.2.1. Performance considerations Some backup features like incremental and data compression can impact performance. Incremental backups have a performance impact because all of the data in a volume must be read and checksummed for both the full and each incremental backup. You can use data compression with non-Ceph backends. Enabling data compression requires additional CPU power but uses less network bandwidth and storage space overall. The multipathing configuration also impacts performance. If you attach multiple volumes without enabling multipathing, you might not be able to connect or have full network capabilities, which impacts performance. You can use the advanced configuration options to enable or disable compression, define the number of processes, and add additional CPU resources. 
For more information, see Section B.1, "Advanced configuration options" . 3.2.2. Impact of backing up from a snapshot Some back ends support creating a backup from a snapshot. A driver that supports this feature can directly attach a snapshot, which is faster than cloning the snapshot into a volume to be able to attach to it. In general, this feature can affect performance because of the extra step of creating the volume from a snapshot. 3.2.3. Performing incremental backups By default, the cinder backup-create command creates a full backup of a volume. However, if the volume has existing backups, you can create an incremental backup. Incremental backups are fully supported on NFS, Object Storage (swift), and Red Hat Ceph Storage backup repositories. An incremental backup captures any changes to the volume since the last full or incremental backup. Performing numerous, regular, full backups of a volume can become resource intensive because the size of the volume increases over time. With incremental backups, you can capture periodic changes to volumes and minimize resource usage. Procedure To create an incremental volume backup, use the --incremental option with the following command: Replace VOLUME with the ID or Display Name of the volume you want to back up. Note You cannot delete a full backup if it already has an incremental backup. If a full backup has multiple incremental backups, you can only delete the latest one. 3.3. Canceling a backup To cancel a backup, an administrator must request a force delete on the backup. Important This operation is not supported if you use the Ceph or RBD back ends. Procedure Run the following command: After you complete the cancellation and the backup no longer appears in the backup listings, there can be a slight delay for the backup to be successfully canceled. To verify that the backup is successfully canceled, the backing-up status in the source resource stops. Note Before Red Hat OpenStack version 12, the backing-up status was stored in the volume, even when backing up a snapshot. Therefore, when backing up a snapshot, any delete operation on the snapshot that followed a cancellation could result in an error if the snapshot was still mapped. In Red Hat OpenStack Platform version 13 and later, ongoing restoration operations can be canceled on any of the supported backup drivers. 3.4. Viewing and modifying tenant backup quota Normally, you can use the dashboard to modify tenant storage quotas, for example, the number of volumes, volume storage, snapshots, or other operational limits that a tenant can have. However, the functionality to modify backup quotas with the dashboard is not yet available. You must use the command-line interface to modify backup quotas. Procedure To view the storage quotas of a specific tenant (TENANT_ID), run the following command: To update the maximum number of backups ( MAXNUM ) that can be created in a specific tenant, run the following command: To update the maximum total size of all backups ( MAXGB ) within a specific tenant, run the following command: To view the storage quota usage of a specific tenant, run the following command: 3.5. Restoring from backups After a database failure or another type of event that results in data loss, use the backups you created to restore data. Important If you configure the cinder-backup service to use the Ceph RBD driver, you can restore backup volumes only to an RBD-based Block Storage (cinder) back end. 3.5.1. 
Restoring a volume from a backup To create a new volume from a backup, complete the following steps. Procedure Find the ID of the volume backup you want to use: Ensure that the Volume ID matches the ID of the volume that you want to restore. Restore the volume backup: Replace BACKUP_ID with the ID of the volume backup you want to use. If you no longer need the backup, delete it: If you need to restore a backed-up volume to a volume of a particular type, use the --volume option to restore the backup to a specific volume: Note If you restore a volume from an encrypted backup, then the destination volume type must also be encrypted. 3.5.2. Restoring a volume after a Block Storage database loss When a Block Storage database loss occurs, you cannot restore a volume backup because the database contains metadata that the volume backup service requires. However, after you create the volume backup, you can export and store the metadata, which consists of the backup_service and backup_url values, so that when a database loss occurs, you can restore the volume backup. For more information, see Section 3.1.3, "Exporting the metadata of a volume backup". If you exported and stored this metadata, then you can import it into a new Block Storage database, which allows you to restore the volume backup. Note For incremental backups, you must import all exported data before you can restore one of the incremental backups. Procedure As a user with administrative privileges, run the following command: Replace backup_service and backup_url with the metadata you exported. For example, using the exported metadata from Section 3.1.3, "Exporting the metadata of a volume backup": After you import the metadata into the Block Storage service database, you can restore the volume as normal, see Section 3.5.1, "Restoring a volume from a backup". 3.5.3. Canceling a backup restore To cancel a backup restore operation, alter the status of the backup to anything other than restoring. You can use the error state to minimize confusion regarding whether the restore was successful. Alternatively, you can change the value to available. Note Backup cancellation is an asynchronous action, because the backup driver must detect the status change before it cancels the backup. When the status changes to available in the destination volume, the cancellation is complete. Note This feature is not currently available on RBD backups. Warning If a restore operation is canceled after it starts, the destination volume is useless, because there is no way of knowing how much data, if any, was actually restored.
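To see how the preceding backup and restore procedures fit together, the following sketch chains a full backup, an incremental backup, and a restore. This is an illustrative example only; the volume name app-vol and the backup names are hypothetical placeholders, so substitute the names or IDs from your environment.
cinder backup-create --name app-vol-full app-vol
cinder backup-create --incremental --name app-vol-inc1 app-vol
cinder backup-list
cinder backup-restore BACKUP_ID
Wait for each backup to reach the available status in cinder backup-list before you create the next incremental backup or start a restore.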
[ "cinder list", "cinder backup-create _VOLUME_", "+-----------+--------------------------------------+ | Property | Value | +-----------+--------------------------------------+ | id | e9d15fc7-eeae-4ca4-aa72-d52536dc551d | | name | None | | volume_id | 5f75430a-abff-4cc7-b74e-f808234fa6c5 | +-----------+--------------------------------------+", "cinder backup-list", "cinder --os-auth-url <KEYSTONEURL> --os-tenant-name <TENANTNAME> --os-username <USERNAME> --os-password <PASSWD> backup-create <VOLUME>", "cinder backup-export _BACKUPID_", "+----------------+------------------------------------------+ | Property | Value | +----------------+------------------------------------------+ | backup_service | cinder.backup.drivers.swift | | backup_url | eyJzdGF0dXMiOiAiYXZhaWxhYmxlIiwgIm9iam...| | | ...4NS02ZmY4MzBhZWYwNWUiLCAic2l6ZSI6IDF9 | +----------------+------------------------------------------+", "cinder backup-create _VOLUME_ --incremental --force", "cinder snapshot-list --volume-id _VOLUME_ID_", "cinder snapshot-show _SNAPSHOT_NAME_", "cinder backup-create _VOLUME_ --snapshot-id=_SNAPSHOT_ID_", "cinder --os-volume-api-version 3.51 backup-create --name <volume_backup> --availability-zone <az_central> <edge_volume>", "cinder --os-volume-api-version 3.51 create --availability-zone <az_2> --name <new_volume> --backup-id <volume_backup> <volume_size>", "cinder backup-create _VOLUME_ --incremental", "openstack volume backup delete --force <backup>", "cinder quota-show TENANT_ID", "cinder quota-update --backups MAXNUM TENANT_ID", "cinder quota-update --backup-gigabytes MAXGB TENANT_ID", "cinder quota-usage TENANT_ID", "cinder backup-list", "cinder backup-restore _BACKUP_ID_", "cinder backup-delete _BACKUP_ID_", "cinder backup-restore _BACKUP_ID --volume VOLUME_ID_", "cinder backup-import _backup_service_ _backup_url_", "cinder backup-import cinder.backup.drivers.swift eyJzdGF0dXMi...c2l6ZSI6IDF9 +----------+--------------------------------------+ | Property | Value | +----------+--------------------------------------+ | id | 77951e2f-4aff-4365-8c64-f833802eaa43 | | name | None | +----------+--------------------------------------+", "openstack volume backup set --state error BACKUP_ID" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/block_storage_backup_guide/assembly_backup-admin-ops
Chapter 2. Verify the Trusted Artifact Signer service installation 2.1. Signing and verifying containers by using Cosign from the command-line interface The cosign tool gives you the capability to sign and verify Open Container Initiative (OCI) container images, along with other build artifacts by using Red Hat's Trusted Artifact Signer (RHTAS) service. Important For RHTAS, you must use cosign version 2.2 or later. Prerequisites A RHTAS installation on Red Hat OpenShift Container Platform version 4.13 or later. Access to the OpenShift web console. A workstation with the podman , and oc binaries installed. Procedure Download the cosign binary from the OpenShift cluster to your workstation. Login to the OpenShift web console. From the home page, click the ? icon, click Command line tools , go to the cosign download section, and click the link for your platform. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example Move and rename the binary to a location within your USDPATH environment: Example Log in to the OpenShift cluster: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example Note You can find your login token and URL to use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Syntax oc project PROJECT_NAME Example Note Use the project name for the RHTAS installation. Configure your shell environment for doing container image signing and verifying. Example Initialize The Update Framework (TUF) system: Example Sign a test container image. Create an empty container image: Example Push the empty container image to the ttl.sh ephemeral registry: Example Sign the container image: Syntax cosign sign -y IMAGE_NAME:TAG Example A web browser opens allowing you to sign the container image with an email address. Remove the temporary Docker file: Example Verify a signed container image by using a certificate identity and issuer: Syntax cosign verify --certificate-identity= SIGNING_EMAIL_ADDR IMAGE_NAME:TAG Example Note You can also use regular expressions for the certificate identity and issuer by using the following options to the cosign command, --certificate-identity-regexp and --certificate-oidc-issuer-regexp . Download the rekor-cli binary from the OpenShift cluster to your workstation. Login to the OpenShift web console. From the home page, click the ? icon, click Command line tools , go to the rekor-cli download section, and click the link for your platform. Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example Move and rename the binary to a location within your USDPATH environment: Example Query the transparency log by using the Rekor command-line interface. Search based on the log index: Example Search for an email address to get the universal unique identifier (UUID): Syntax rekor-cli search --email SIGNING_EMAIL_ADDR --rekor_server USDCOSIGN_REKOR_URL --format json | jq Example This command returns the UUID for use with the step. Use the UUID to get the transaction details: Syntax rekor-cli get --uuid UUID --rekor_server USDCOSIGN_REKOR_URL --format json | jq Example Additional resources Installing Red Hat Trusted Artifact Signer on OpenShift . Customizing Red Hat Trusted Application Pipeline . 
See the Signing and verifying commits by using Gitsign from the command-line interface section of the RHTAS Deployment Guide for details on signing and verifying Git commits. The Update Framework home page . 2.2. Signing and verifying commits by using Gitsign from the command-line interface The gitsign tool gives you the ability to sign and verify Git repository commits by using Red Hat's Trusted Artifact Signer (RHTAS) service. Prerequisites A RHTAS installation on Red Hat OpenShift Container Platform version 4.13 or later. Access to the OpenShift web console. A workstation with the oc , and git binaries installed. Downloaded the cosign binary from the OpenShift cluster. You must use cosign version 2.2 or later. Procedure Download the gitsign binary from the OpenShift cluster to your workstation. Login to the OpenShift web console. From the home page, click the ? icon, click Command line tools , go to the gitsign download section, and click the link for your platform. Open a terminal on your workstation, decompress the .gz file, and set the execute bit: Example Move and rename the binary to a location within your USDPATH environment: Example Log in to the OpenShift cluster: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example Note You can find your login token and URL to use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Syntax oc project PROJECT_NAME Example Note Use the project name for the RHTAS installation. Configure your shell environment for doing commit signing and verifying: Example Configure the local repository configuration to sign your commits by using the RHTAS service: Example Make a commit to the local repository: Example A web browser opens allowing you to sign the commit with an email address. Initialize The Update Framework (TUF) system: Example Verify the commit: Syntax gitsign verify --certificate-identity= SIGNING_EMAIL --certificate-oidc-issuer=USDSIGSTORE_OIDC_ISSUER HEAD Example Additional resources Installing Red Hat Trusted Artifact Signer on OpenShift . Customizing Red Hat Trusted Application Pipeline . See the Signing and verifying containers by using Cosign from the command-line interface section in the RHTAS Deployment Guide for details on signing and verifying container images. The Update Framework home page . 2.3. Verifying signatures on container images with Enterprise Contract Enterprise Contract (EC) is a tool for maintaining the security of software supply chains, and you can use it to define and enforce policies for container images. You can use the ec binary to verify the attestation and signature of container images that use Red Hat's Trusted Artifact Signer (RHTAS) signing framework. Prerequisites A RHTAS installation on Red Hat OpenShift Container Platform version 4.13 or later. A workstation with the oc , cosign , and podman binaries installed. Access to the OpenShift web console. Procedure Download the ec binary from the OpenShift cluster. Log in to the OpenShift web console. From the home page, click the ? icon, click Command line tools , go to the ec download section, then click the link for your platform. 
Open a terminal on your workstation, decompress the binary .gz file, and set the execute bit: Example Move and rename the binary to a location within your USDPATH environment: Example Log in to the OpenShift cluster: Syntax oc login --token= TOKEN --server= SERVER_URL_AND_PORT Example Note You can find your login token and URL to use on the command line from the OpenShift web console. Log in to the OpenShift web console. Click your user name, and click Copy login command . Offer your user name and password again, if asked, and click Display Token to view the command. Switch to the RHTAS project: Syntax oc project PROJECT_NAME Example Note Use the project name for the RHTAS installation. Configure your shell environment for doing container image signing and verifying. Example Initialize The Update Framework (TUF) system: Example Sign a test container image. Create an empty container image: Example Push the empty container image to the ttl.sh ephemeral registry: Example Sign the container image: Syntax cosign sign -y IMAGE_NAME:TAG Example A web browser opens allowing you to sign the container image with an email address. Remove the temporary Docker file: Example Create a predicate.json file: Example Refer to the SLSA provenance predicate specifications for more information about the schema layout. Associate the predicate.json file with the container image: Syntax cosign attest -y --predicate ./predicate.json --type slsaprovenance IMAGE_NAME:TAG Example Verify that the container image has at least one attestation and signature: Syntax cosign tree IMAGE_NAME:TAG Example Verify the container image by using Enterprise Contact: Syntax ec validate image --image IMAGE_NAME:TAG --certificate-identity-regexp ' SIGNER_EMAIL_ADDR ' --certificate-oidc-issuer-regexp 'keycloak-keycloak-system' --output yaml --show-successes Example Enterprise Contract generates a pass-fail report with details on any security violations. When you add the --info flag, the report includes more details and possible solutions for any violations found. Additional resources Installing Red Hat Trusted Artifact Signer on OpenShift . Managing compliance with Enterprise Contract . See the Enterprise Contract website for more information.
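The note in Section 2.1 points out that cosign verify also accepts regular expressions for the certificate identity and issuer. As a minimal sketch, assuming the test image from the procedures above and a hypothetical signer domain of example.com:
cosign verify --certificate-identity-regexp '.*@example\.com' --certificate-oidc-issuer-regexp 'keycloak' ttl.sh/rhtas/test-image:1h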
[ "gunzip cosign-amd64.gz chmod +x cosign-amd64", "sudo mv cosign-amd64 /usr/local/bin/cosign", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "project PROJECT_NAME", "oc project trusted-artifact-signer", "export TUF_URL=USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export OIDC_ISSUER_URL=https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=\"trusted-artifact-signer\" export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export REKOR_REKOR_SERVER=USDCOSIGN_REKOR_URL", "cosign initialize", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . -f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "cosign sign -y IMAGE_NAME:TAG", "cosign sign -y ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "cosign verify --certificate-identity= SIGNING_EMAIL_ADDR IMAGE_NAME:TAG", "cosign verify [email protected] ttl.sh/rhtas/test-image:1h", "gunzip rekor-cli-amd64.gz chmod +x rekor-cli-amd64", "sudo mv rekor-cli-amd64 /usr/local/bin/rekor-cli", "rekor-cli get --log-index 0 --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli search --email SIGNING_EMAIL_ADDR --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli search --email [email protected] --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli get --uuid UUID --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "rekor-cli get --uuid 24296fb24b8ad77a71b9c1374e207537bafdd75b4f591dcee10f3f697f150d7cc5d0b725eea641e7 --rekor_server USDCOSIGN_REKOR_URL --format json | jq", "gunzip gitsign-amd64.gz chmod +x gitsign-amd64", "sudo mv gitsign-amd64 /usr/local/bin/gitsign", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "project PROJECT_NAME", "oc project trusted-artifact-signer", "export TUF_URL=USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export OIDC_ISSUER_URL=https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=\"trusted-artifact-signer\" export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export REKOR_REKOR_SERVER=USDCOSIGN_REKOR_URL", "git config --local commit.gpgsign true git config --local 
tag.gpgsign true git config --local gpg.x509.program gitsign git config --local gpg.format x509 git config --local gitsign.fulcio USDSIGSTORE_FULCIO_URL git config --local gitsign.rekor USDSIGSTORE_REKOR_URL git config --local gitsign.issuer USDSIGSTORE_OIDC_ISSUER git config --local gitsign.clientID trusted-artifact-signer", "git commit --allow-empty -S -m \"Test of a signed commit\"", "cosign initialize", "gitsign verify --certificate-identity= SIGNING_EMAIL --certificate-oidc-issuer=USDSIGSTORE_OIDC_ISSUER HEAD", "gitsign verify [email protected] --certificate-oidc-issuer=USDSIGSTORE_OIDC_ISSUER HEAD", "gunzip ec-amd64.gz chmod +x ec-amd64", "sudo mv ec-amd64 /usr/local/bin/ec", "login --token= TOKEN --server= SERVER_URL_AND_PORT", "oc login --token=sha256~ZvFDBvoIYAbVECixS4-WmkN4RfnNd8Neh3y1WuiFPXC --server=https://example.com:6443", "project PROJECT_NAME", "oc project trusted-artifact-signer", "export TUF_URL=USD(oc get tuf -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export OIDC_ISSUER_URL=https://USD(oc get route keycloak -n keycloak-system | tail -n 1 | awk '{print USD2}')/auth/realms/trusted-artifact-signer export COSIGN_FULCIO_URL=USD(oc get fulcio -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_REKOR_URL=USD(oc get rekor -o jsonpath='{.items[0].status.url}' -n trusted-artifact-signer) export COSIGN_MIRROR=USDTUF_URL export COSIGN_ROOT=USDTUF_URL/root.json export COSIGN_OIDC_CLIENT_ID=\"trusted-artifact-signer\" export COSIGN_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_CERTIFICATE_OIDC_ISSUER=USDOIDC_ISSUER_URL export COSIGN_YES=\"true\" export SIGSTORE_FULCIO_URL=USDCOSIGN_FULCIO_URL export SIGSTORE_OIDC_ISSUER=USDCOSIGN_OIDC_ISSUER export SIGSTORE_REKOR_URL=USDCOSIGN_REKOR_URL export REKOR_REKOR_SERVER=USDCOSIGN_REKOR_URL", "cosign initialize", "echo \"FROM scratch\" > ./tmp.Dockerfile podman build . 
-f ./tmp.Dockerfile -t ttl.sh/rhtas/test-image:1h", "podman push ttl.sh/rhtas/test-image:1h", "cosign sign -y IMAGE_NAME:TAG", "cosign sign -y ttl.sh/rhtas/test-image:1h", "rm ./tmp.Dockerfile", "{ \"builder\": { \"id\": \"https://localhost/dummy-id\" }, \"buildType\": \"https://example.com/tekton-pipeline\", \"invocation\": {}, \"buildConfig\": {}, \"metadata\": { \"completeness\": { \"parameters\": false, \"environment\": false, \"materials\": false }, \"reproducible\": false }, \"materials\": [] }", "cosign attest -y --predicate ./predicate.json --type slsaprovenance IMAGE_NAME:TAG", "cosign attest -y --predicate ./predicate.json --type slsaprovenance ttl.sh/rhtas/test-image:1h", "cosign tree IMAGE_NAME:TAG", "cosign tree ttl.sh/rhtas/test-image:1h 📦 Supply Chain Security Related artifacts for an image: ttl.sh/rhtas/test-image@sha256:7de5fa822a9d1e507c36565ee0cf50c08faa64505461c844a3ce3944d23efa35 └── 💾 Attestations for an image tag: ttl.sh/rhtas/test-image:sha256-7de5fa822a9d1e507c36565ee0cf50c08faa64505461c844a3ce3944d23efa35.att └── 🍒 sha256:40d94d96a6d3ab3d94b429881e1b470ae9a3cac55a3ec874051bdecd9da06c2e └── 🔐 Signatures for an image tag: ttl.sh/rhtas/test-image:sha256-7de5fa822a9d1e507c36565ee0cf50c08faa64505461c844a3ce3944d23efa35.sig └── 🍒 sha256:f32171250715d4538aec33adc40fac2343f5092631d4fc2457e2116a489387b7", "ec validate image --image IMAGE_NAME:TAG --certificate-identity-regexp ' SIGNER_EMAIL_ADDR ' --certificate-oidc-issuer-regexp 'keycloak-keycloak-system' --output yaml --show-successes", "ec validate image --image ttl.sh/rhtas/test-image:1h --certificate-identity-regexp '[email protected]' --certificate-oidc-issuer-regexp 'keycloak-keycloak-system' --output yaml --show-successes success: true successes: - metadata: code: builtin.attestation.signature_check msg: Pass - metadata: code: builtin.attestation.syntax_check msg: Pass - metadata: code: builtin.image.signature_check msg: Pass ec-version: v0.1.2427-499ef12 effective-time: \"2024-01-21T19:57:51.338191Z\" key: \"\" policy: {} success: true" ]
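As an optional convenience check that is not part of the documented procedure, a commit signed with gitsign as configured in Section 2.2 can also be inspected with standard Git tooling, assuming the gpg.format and gpg.x509.program settings shown in the listing above:
git log --show-signature -1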
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/deployment_guide/verify_the_trusted_artifact_signer_service_installation
Configure
Configure builds for Red Hat OpenShift 1.1 Configuring Builds Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/builds_for_red_hat_openshift/1.1/html/configure/index
4.2.6. Monitoring Changes to File Attributes This section describes how to monitor if any processes are changing the attributes of a targeted file, in real time. inodewatch2-simple.stp Like inodewatch.stp from Section 4.2.5, "Monitoring Reads and Writes to a File" , inodewatch2-simple.stp takes the targeted file's device number (in integer format) and inode number as arguments. For more information on how to retrieve this information, refer to Section 4.2.5, "Monitoring Reads and Writes to a File" . The output for inodewatch2-simple.stp is similar to that of inodewatch.stp , except that inodewatch2-simple.stp also contains the attribute changes to the monitored file, as well as the ID of the user responsible ( uid() ). Example 4.10, "inodewatch2-simple.stp Sample Output" shows the output of inodewatch2-simple.stp while monitoring /home/joe/bigfile when user joe executes chmod 777 /home/joe/bigfile and chmod 666 /home/joe/bigfile . Example 4.10. inodewatch2-simple.stp Sample Output
[ "global ATTR_MODE = 1 probe kernel.function(\"inode_setattr\") { dev_nr = USDinode->i_sb->s_dev inode_nr = USDinode->i_ino if (dev_nr == (USD1 << 20 | USD2) # major/minor device && inode_nr == USD3 && USDattr->ia_valid & ATTR_MODE) printf (\"%s(%d) %s 0x%x/%u %o %d\\n\", execname(), pid(), probefunc(), dev_nr, inode_nr, USDattr->ia_mode, uid()) }", "chmod(17448) inode_setattr 0x800005/6011835 100777 500 chmod(17449) inode_setattr 0x800005/6011835 100666 500" ]
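As a usage sketch, the sample output above corresponds to device number 0x800005, that is major 8 and minor 5, and inode 6011835 for /home/joe/bigfile. Assuming those values, the script can be invoked as shown below; use the approach described in Section 4.2.5 to obtain the real numbers for your own file.
stat -c '%i' /home/joe/bigfile    # prints the inode number, for example 6011835
stap inodewatch2-simple.stp 8 5 6011835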
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/inodewatch2sect
Chapter 2. New features and enhancements A list of all major enhancements and new features introduced in this release of Red Hat Trusted Profile Analyzer (RHTPA). The features and enhancements added by this release are: Download license data from an SBOM With this release, you can download license data from Software Bill of Materials (SBOM) documents in either the CycloneDX or the Software Package Data Exchange (SPDX) format. This new feature can help identify potential license compliance issues early in the development cycle, and can help organizations mitigate legal risks and adhere to open source licensing obligations.
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/release_notes/enhancements
Chapter 16. Jakarta Connectors Management 16.1. About Jakarta Connectors Jakarta Connectors define a standard architecture for Jakarta EE systems to external heterogeneous Enterprise Information Systems (EIS). Examples of EISs include Enterprise Resource Planning (ERP) systems, mainframe transaction processing (TP), databases, and messaging systems. A resource adapter is a component that implements the Jakarta Connectors architecture. Jakarta Connectors 1.7 provides features for managing: connections transactions security life-cycle work instances transaction inflow message inflow 16.2. About Resource Adapters A resource adapter is a deployable Jakarta EE component that provides communication between a Jakarta EE application and an Enterprise Information System (EIS) using the Jakarta Connectors specification. A resource adapter is often provided by EIS vendors to allow easy integration of their products with Jakarta EE applications. An Enterprise Information System can be any other software system within an organization. Examples include Enterprise Resource Planning (ERP) systems, database systems, e-mail servers and proprietary messaging systems. A resource adapter is packaged in a Resource Adapter Archive (RAR) file which can be deployed to JBoss EAP. A RAR file may also be included in an Enterprise Archive (EAR) deployment. The resource adapter itself is defined within the resource-adapters subsystem, which is provided by the IronJacamar project. 16.3. Configuring the jca subsystem The jca subsystem controls the general settings for the Jakarta Connectors container and resource adapter deployments. You can configure the jca subsystem using the management console or the management CLI. The main jca subsystem elements to configure are: Archive validation Bean validation Work managers Distributed work managers Bootstrap contexts Cached connection manager Configure jca subsystem settings from the management console The jca subsystem can be configured from the management console by navigating to Configuration Subsystems JCA and clicking View . Then, select the appropriate tab: Configuration - Contains settings for the cached connection manager, archive validation, and bean validation. Each of these is contained in their own tab as well. Modify these settings by opening the appropriate tab and clicking the Edit link. Bootstrap Context - Contains the list of configured bootstrap contexts. New bootstrap context objects can be added, removed, and configured. Each bootstrap context must be assigned a work manager. Workmanager - Contains the list of configured work managers. New work managers can be added, removed, and their thread pools configured here. Each work manager can have one short-running thread pool and an optional long-running thread pool. The thread pool attributes can be configured by clicking Thread Pools on the selected work manager. Configure jca subsystem settings from the management CLI The jca subsystem can be configured from the management CLI from the /subsystem=jca address. In a managed domain, you must precede the command with /profile= PROFILE_NAME . Note Attribute names in the tables in the following sections are listed as they appear in the management model, for example, when using the management CLI. See the schema definition file located at EAP_HOME /docs/schema/wildfly-jca_5_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. 
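For example, you can inspect the current jca configuration from the management CLI by reading the subsystem resource recursively. The second command is a sketch for a managed domain and assumes a typical profile name such as full-ha:
/subsystem=jca:read-resource(recursive=true)
/profile=full-ha/subsystem=jca:read-resource(recursive=true)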
Archive Validation This determines whether archive validation will be performed on the deployment units. The following table describes the attributes you can set for archive validation. Table 16.1. Archive Validation Attributes Attribute Default Value Description enabled true Specifies whether archive validation is enabled. fail-on-error true Specifies whether an archive validation error report fails the deployment. fail-on-warn false Specifies whether an archive validation warning report fails the deployment. If an archive does not implement the Jakarta Connectors specification correctly and archive validation is enabled, an error message will display during deployment describing the problem. For example: If archive validation is not specified, it is considered present and the enabled attribute defaults to true . Bean Validation This setting determines whether bean validation is performed. For information about the specification, see in Jakarta Bean Validation specification . The following table describes the attributes you can set for bean validation. Table 16.2. Bean Validation Attributes Attribute Default Value Description enabled true Specifies whether bean validation is enabled. If bean validation is not specified, it is considered present and the enabled attribute defaults to true . Work Managers There are two types of work managers: Default work manager The default work manager and its thread pools. Custom work manager A custom work manager definition and its thread pools. The following table describes the attributes you can set for work managers. Table 16.3. Work Manager Attributes Attribute Description name Specifies the name of the work manager. elytron-enabled This attribute enables Elytron security for the workmanager . A work manager also has the following child elements. Table 16.4. Work Manager Child Elements Child Element Description short-running-threads Thread pool for standard Work instances. Each work manager has one short-running thread pool. long-running-threads Thread pool for Jakarta Connectors 1.7 Work instances that set the LONG_RUNNING hint. Each work manager can have one optional long-running thread pool. The following table describes the attributes you can set for work manager thread pools. Table 16.5. Thread Pool Attributes Attribute Description allow-core-timeout Boolean setting that determines whether core threads may time out. The default value is false . core-threads The core thread pool size. This must be equal to or smaller than the maximum thread pool size. handoff-executor An executor to delegate tasks to in the event that a task cannot be accepted. If not specified, tasks that cannot be accepted will be silently discarded. keepalive-time Specifies the amount of time that pool threads should be kept after doing work. max-threads The maximum thread pool size. name Specifies the name of the thread pool. queue-length The maximum queue length. thread-factory Reference to the thread factory. Distributed Work Managers A distributed work manager is a work manager instance that is able to reschedule work execution on another work manager instance. The following example management CLI commands configure a distributed work manager. Note that you must use a configuration that provides high availability capabilities, such as the standalone-ha.xml or standalone-full-ha.xml configuration file for a standalone server. 
Example: Configure a Distributed Work Manager Note The name of the short-running-threads element must be the same as the name of the distributed-workmanager element. The following table describes the attributes you can configure for distributed work managers. Table 16.6. Distributed Work Manager Attributes Attribute Description elytron-enabled Enables Elytron security for the work manager. name The name of the distributed work manager. policy The policy decides when to redistribute a work instance. Allowed values are: NEVER - Never distribute the work instance to another node. ALWAYS - Always distribute the work instance to another node. WATERMARK - Distribute the work instance to another node based on how many free worker threads the current node has available. policy-options List of the policy's key/value pair options. If you use the WATERMARK policy, then you can use the watermark policy option to specify at what number of free threads that work should be distributed. For example: selector The selector decides to which nodes in the network to redistribute the work instance. Allowed values are: FIRST_AVAILABLE - Select the first available node in the list. PING_TIME - Select the node with the lowest ping time. MAX_FREE_THREADS - Select the node with highest number of free worker threads. selector-options List of the selector's key/value pair options. A distributed work manager also has the following child elements. Table 16.7. Distributed Work Manager Child Elements Child Element Description long-running-threads The thread pool for work instances that set the LONG_RUNNING hint. Each distributed work manager can optionally have a long-running thread pool. short-running-threads The thread pool for standard work instances. Each distributed work manager must have a short-running thread pool. Bootstrap Contexts This is used to define custom bootstrap contexts. The following table describes the attributes you can set for bootstrap contexts. Table 16.8. Bootstrap Context Attributes Attribute Description name Specifies the name of the bootstrap context. workmanager Specifies the name of the work manager to use for this context. Cached Connection Manager This is used for debugging connections and supporting lazy enlistment of a connection in a transaction, tracking whether they are used and released properly by the application. The following table describes the attributes you can set for the cached connection manager. Table 16.9. Cached Connection Manager Attributes Attribute Default Value Description debug false Outputs warning on failure to explicitly close connections. error false Throws exception on failure to explicitly close connections. ignore-unknown-connections false Specifies that unknown connections will not be cached. install false Enable or disable the cached connection manager valve and interceptor. 16.4. Configuring Resource Adapters 16.4.1. Deploy a Resource Adapter Resource adapters can be deployed just like other deployments using the management CLI or the management console. When running a standalone server, you can also copy the archive to the deployments directory to be picked up by the deployment scanner. Deploy a Resource Adapter using the Management CLI To deploy the resource adapter to a standalone server, enter the following management CLI command. To deploy the resource adapter to all server groups in a managed domain, enter the following management CLI command. 
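The management CLI commands referenced in the two preceding paragraphs appear in the command listing at the end of this chapter. For convenience, they are repeated here with a placeholder archive path:
deploy /path/to/resource-adapter.rar
deploy /path/to/resource-adapter.rar --all-server-groups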
Deploy a Resource Adapter using the Management Console Log in to the management console and click on the Deployments tab. Click the Add ( + ) button. In a managed domain, you will first need to select Content Repository . Choose the Upload Deployment option. Browse to the resource adapter archive and click . Verify the upload, then click Finish . In a managed domain, deploy the deployment to the appropriate server groups and enable the deployment. Deploy a Resource Adapter Using the Deployment Scanner To deploy a resource adapter manually to a standalone server, copy the resource adapter archive to the server deployments directory, for example, EAP_HOME /standalone/deployments/ . This will be picked up and deployed by the deployment scanner. Note This option is not available for managed domains. You must use either the management console or the management CLI to deploy the resource adapter to server groups. 16.4.2. Configure a Resource Adapter You can configure resource adapters using the management interfaces. The below example shows how to configure a resource adapter using the management CLI. See your resource adapter vendor's documentation for supported properties and other important information. Add the Resource Adapter Configuration Add the resource adapter configuration. Configure the Resource Adapter Settings Configure any of the following settings as necessary. Configure config-properties . Add the server configuration property. Add the port configuration property. Configure admin-objects . Add an admin object. Configure an admin object configuration property. Configure connection-definitions . Add a connection definition for a managed connection factory. Configure a managed connection factory configuration property. Configure whether to record the enlistment trace. You can enable the recording of enlistment traces by setting the enlistment-trace attribute to true . Warning Enabling enlistment tracing makes tracking down errors during transaction enlistment easier, but comes with a performance impact. See Resource Adapter Attributes for all available configuration options for resource adapters. Activate the Resource Adapter Activate the resource adapter. Note You can also define capacity policies for resource adapters. For more details, see the Capacity Policies section. 16.4.3. Configure Resource Adapters to Use the Elytron Subsystem Two types of communications occur between the server and the resource adapter in IronJacamar. One of them is when the server opens a resource adapter connection. As defined by the specifications, this can be secured by container-managed sign-on, which requires propagation of a JAAS subject with principal and credentials to the resource adapter when opening the connection. This sign-on can be delegated to Elytron. IronJacamar supports security inflow. This mechanism enables a resource adapter to establish security information when submitting a work to the work manager, and when delivering messages to endpoints residing in the same JBoss EAP instance. Container-managed Sign-On In order to achieve container-managed sign-on with Elytron, the elytron-enabled attribute needs to be set to true . This will result in all connections to the resource adapter to be secured by Elytron. The elytron-enabled attribute can be configured using the management CLI by setting the elytron-enabled attribute, in the resource-adapters subsystems, to true . By default this attribute is set to false . 
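As a sketch of that setting, using the eis.rar archive and cfName connection definition names that appear in this chapter's command listing:
/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName:write-attribute(name=elytron-enabled,value=true)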
The authentication-context attribute defines the name of the Elytron authentication context that will be used for performing sign-on. An Elytron authentication-context attribute can contain one or more authentication-configuration elements, which in turn contains the credentials you want to use. If the authentication-context attribute is not set, JBoss EAP will use the current authentication-context , which is the authentication-context used by the caller code that is opening the connection. Example: Creating an authentication-configuration Example: Creating the authentication-context Using the above Configuration Security Inflow It is also possible for the resource manager to inflow security credentials when submitting the work which is to be executed by the work manager. Security inflow allows the work to authenticate itself before execution. If authentication succeeds, the submitted work will be executed under the resulting authentication context. If it fails, the work execution will be rejected. To enable Elytron security inflow, set the wm-elytron-security-domain attribute when configuring the resource adapter work manager. Elytron will perform the authentication based on the specified domain. Note When the resource adapter work manager is configured to use the Elytron security domain, wm-elytron-security-domain , the referenced work manager should have the elytron-enabled attribute set to true . Note If instead of wm-elytron-security-domain the wm-security-domain attribute is used, the security inflow will be performed by the legacy security subsystem. In the example configuration of jca subsystem below, we can see the configuration of a resource adapter called ra-with-elytron-security-domain . This resource adapter configures work manager security to use the Elytron security domain's wm-realm . <subsystem xmlns="urn:jboss:domain:jca:5.0"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </long-running-threads> </default-workmanager> <workmanager name="customWM"> <elytron-enabled>true</elytron-enabled> <short-running-threads> <core-threads count="20"/> <queue-length count="20"/> <max-threads count="20"/> </short-running-threads> </workmanager> <bootstrap-contexts> <bootstrap-context name="customContext" workmanager="customWM"/> </bootstrap-contexts> <cached-connection-manager/> </subsystem> The work manager is then referenced using the boostrap context from the resource-adapter subsystem. 
<subsystem xmlns="urn:jboss:domain:resource-adapters:5.0"> <resource-adapters> <resource-adapter id="ra-with-elytron-security-domain"> <archive> ra-with-elytron-security-domain.rar </archive> <bootstrap-context>customContext</bootstrap-context> <transaction-support>NoTransaction</transaction-support> <workmanager> <security> <elytron-security-domain>wm-realm</elytron-security-domain> <default-principal>wm-default-principal</default-principal> <default-groups> <group> wm-default-group </group> </default-groups> </security> </workmanager> </resource-adapter> </resource-adapters> </subsystem> Example: Configuration of the Security Domain The Work class is responsible for providing the credentials for Elytron's authentication under the specified domain. For that, it must implement javax.resource.spi.work.WorkContextProvider . public interface WorkContextProvider { /** * Gets an instance of <code>WorkContexts</code> that needs to be used * by the <code>WorkManager</code> to set up the execution context while * executing a <code>Work</code> instance. * * @return an <code>List</code> of <code>WorkContext</code> instances. */ List<WorkContext> getWorkContexts(); } This interface allows the Work class to use the WorkContext to configure some aspects of the context in which the work will be executed. One of those aspects is the security inflow. For that, the List<WorkContext> getWorkContexts method must provide a javax.resource.spi.work.SecurityContext . This context will use javax.security.auth.callback.Callback objects as defined by Jakarta Authentication. For more information about the specification, see Jakarta Authentication specification . Example: Creation of Callbacks Using Context public class ExampleWork implements Work, WorkContextProvider { private final String username; private final String role; public MyWork(TestBean bean, String username, String role) { this.principals = null; this.roles = null; this.bean = bean; this.username = username; this.role = role; } public List<WorkContext> getWorkContexts() { List<WorkContext> l = new ArrayList<>(1); l.add(new MySecurityContext(username, role)); return l; } public void run() { ... } public void release() { ... } public class ExampleSecurityContext extends SecurityContext { public void setupSecurityContext(CallbackHandler handler, Subject executionSubject, Subject serviceSubject) { try { List<javax.security.auth.callback.Callback> cbs = new ArrayList<>(); cbs.add(new CallerPrincipalCallback(executionSubject, new SimplePrincipal(username))); cbs.add(new GroupPrincipalCallback(executionSubject, new String[]{role})); handler.handle(cbs.toArray(new javax.security.auth.callback.Callback[cbs.size()])); } catch (Throwable t) { throw new RuntimeException(t); } } } In the above example, ExampleWork implements the WorkContextProvider interface to provide ExampleSecurityContext . That context will create the callbacks necessary to provide the security information that will be authenticated by Elytron upon work execution. 16.4.4. Deploy and Configure the IBM MQ Resource Adapter You can find the instructions for deploying the IBM MQ resource adapter in Configuring Messaging for JBoss EAP. 16.4.5. Deploy and Configure the Generic Jakarta Messaging Resource Adapter You can find the instructions for configuring the generic Jakarta Messaging resource adapter in Configuring Messaging for JBoss EAP. 16.5. Configure Managed Connection Pools JBoss EAP provides three implementations of the ManagedConnectionPool interface. 
org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreConcurrentLinkedQueueManagedConnectionPool This is the default connection pool in JBoss EAP 7 and provides the best out-of-the-box performance. org.jboss.jca.core.connectionmanager.pool.mcp.SemaphoreArrayListManagedConnectionPool This was the default connection pool in previous JBoss EAP versions. org.jboss.jca.core.connectionmanager.pool.mcp.LeakDumperManagedConnectionPool This connection pool is used for debugging purposes only and will report any leaks upon shutdown or when the pool is flushed. You can set the managed connection pool implementation for a datasource using the following management CLI command. You can set the managed connection pool implementation for a resource adapter using the following management CLI command. You can set the managed connection pool implementation for a messaging server using the following management CLI command. 16.6. View Connection Statistics You can read statistics for a defined connection from the /deployment= NAME .rar subtree. This is so that you can access statistics for any RAR, even if it is not defined in a standalone.xml or domain.xml configuration. Note Be sure to specify the include-runtime=true argument, as all statistics are runtime-only information. See Resource Adapter Statistics for information on the available statistics. 16.7. Flushing Resource Adapter Connections You can flush resource adapter connections using the following management CLI commands. Note In a managed domain, you must precede these commands with /host= HOST_NAME /server= SERVER_NAME . Flush all connections in the pool. Gracefully flush all connections in the pool. The server will wait until connections become idle before flushing them. Flush all idle connections in the pool. Flush all invalid connections in the pool. The server will flush all connections that it determines to be invalid. 16.8. Tuning the Resource Adapters Subsystem For tips on monitoring and optimizing performance for the resource-adapters subsystem, see the Datasource and Resource Adapter Tuning section of the Performance Tuning Guide .
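As an illustration of the commands referenced in sections 16.5 through 16.7, the following sketch switches a hypothetical datasource named ExampleDS to the leak-detecting pool, reads the runtime statistics of a deployed eis.rar connection definition, and then flushes its idle connections. The deployment and JNDI names are examples taken from earlier in this chapter; adjust them to your environment.
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=mcp,value=org.jboss.jca.core.connectionmanager.pool.mcp.LeakDumperManagedConnectionPool)
/deployment=eis.rar/subsystem=resource-adapters/statistics=statistics/connection-definitions=java\:\/eis\/AcmeConnectionFactory:read-resource(include-runtime=true)
/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName:flush-idle-connection-in-pool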
[ "Severity: ERROR Section: 19.4.2 Description: A ResourceAdapter must implement a \"public int hashCode()\" method. Code: com.mycompany.myproject.ResourceAdapterImpl Severity: ERROR Section: 19.4.2 Description: A ResourceAdapter must implement a \"public boolean equals(Object)\" method. Code: com.mycompany.myproject.ResourceAdapterImpl", "batch /subsystem=jca/distributed-workmanager=myDistWorkMgr:add(name=myDistWorkMgr) /subsystem=jca/distributed-workmanager=myDistWorkMgr/short-running-threads=myDistWorkMgr:add(queue-length=10,max-threads=10) /subsystem=jca/bootstrap-context=myCustomContext:add(name=myCustomContext,workmanager=myDistWorkMgr) run-batch", "/subsystem=jca/distributed-workmanager=myDistWorkMgr:write-attribute(name=policy-options,value={watermark=3})", "deploy /path/to /resource-adapter.rar", "deploy /path/to /resource-adapter.rar --all-server-groups", "/subsystem=resource-adapters/resource-adapter=eis.rar:add(archive=eis.rar, transaction-support=XATransaction)", "/subsystem=resource-adapters/resource-adapter=eis.rar/config-properties=server:add(value=localhost)", "/subsystem=resource-adapters/resource-adapter=eis.rar/config-properties=port:add(value=9000)", "/subsystem=resource-adapters/resource-adapter=eis.rar/admin-objects=aoName:add(class-name=com.acme.eis.ra.EISAdminObjectImpl, jndi-name=java:/eis/AcmeAdminObject)", "/subsystem=resource-adapters/resource-adapter=eis.rar/admin-objects=aoName/config-properties=threshold:add(value=10)", "/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName:add(class-name=com.acme.eis.ra.EISManagedConnectionFactory, jndi-name=java:/eis/AcmeConnectionFactory)", "/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName/config-properties=name:add(value=Acme Inc)", "/subsystem=resource-adapters/resource-adapter=eis.rar/connection-definitions=cfName:write-attribute(name=enlistment-trace,value=true)", "/subsystem=resource-adapters/resource-adapter=eis.rar:activate", "/subsystem=resource-adapters/resource-adapter= RAR_NAME /connection-definitions= FACTORY_NAME :write-attribute(name=elytron-enabled,value=true)", "/subsystem=elytron/authentication-configuration=exampleAuthConfig:add(authentication-name=sa,credential-reference={clear-text=sa})", "/subsystem=elytron/authentication-context=exampleAuthContext:add(match-rules=[{authentication-configuration=exampleAuthConfig}])", "/subsystem=jca/workmanager=customWM:add(name=customWM, elytron-enabled=true)", "<subsystem xmlns=\"urn:jboss:domain:jca:5.0\"> <archive-validation enabled=\"true\" fail-on-error=\"true\" fail-on-warn=\"false\"/> <bean-validation enabled=\"true\"/> <default-workmanager> <short-running-threads> <core-threads count=\"50\"/> <queue-length count=\"50\"/> <max-threads count=\"50\"/> <keepalive-time time=\"10\" unit=\"seconds\"/> </short-running-threads> <long-running-threads> <core-threads count=\"50\"/> <queue-length count=\"50\"/> <max-threads count=\"50\"/> <keepalive-time time=\"10\" unit=\"seconds\"/> </long-running-threads> </default-workmanager> <workmanager name=\"customWM\"> <elytron-enabled>true</elytron-enabled> <short-running-threads> <core-threads count=\"20\"/> <queue-length count=\"20\"/> <max-threads count=\"20\"/> </short-running-threads> </workmanager> <bootstrap-contexts> <bootstrap-context name=\"customContext\" workmanager=\"customWM\"/> </bootstrap-contexts> <cached-connection-manager/> </subsystem>", "<subsystem xmlns=\"urn:jboss:domain:resource-adapters:5.0\"> <resource-adapters> <resource-adapter 
id=\"ra-with-elytron-security-domain\"> <archive> ra-with-elytron-security-domain.rar </archive> <bootstrap-context>customContext</bootstrap-context> <transaction-support>NoTransaction</transaction-support> <workmanager> <security> <elytron-security-domain>wm-realm</elytron-security-domain> <default-principal>wm-default-principal</default-principal> <default-groups> <group> wm-default-group </group> </default-groups> </security> </workmanager> </resource-adapter> </resource-adapters> </subsystem>", "/subsystem=elytron/properties-realm=wm-properties-realm:add(users-properties={path=/security-dir/users.properties, plain-text=true}, groups-properties={path=/security-dir/groups.properties}) /subsystem=elytron/simple-role-decoder=wm-role-decoder:add(attribute=groups) /subsystem=elytron/constant-permission-mapper=wm-permission-mapper:add(permissions=[{class-name=\"org.wildfly.security.auth.permission.LoginPermission\"}]) /subsystem=elytron/security-domain=wm-realm:add(default-realm=wm-properties-realm, permission-mapper=wm-permission-mapper, realms=[{role-decoder=wm-role-decoder, realm=wm-properties-realm}])", "public interface WorkContextProvider { /** * Gets an instance of <code>WorkContexts</code> that needs to be used * by the <code>WorkManager</code> to set up the execution context while * executing a <code>Work</code> instance. * * @return an <code>List</code> of <code>WorkContext</code> instances. */ List<WorkContext> getWorkContexts(); }", "public class ExampleWork implements Work, WorkContextProvider { private final String username; private final String role; public MyWork(TestBean bean, String username, String role) { this.principals = null; this.roles = null; this.bean = bean; this.username = username; this.role = role; } public List<WorkContext> getWorkContexts() { List<WorkContext> l = new ArrayList<>(1); l.add(new MySecurityContext(username, role)); return l; } public void run() { } public void release() { } public class ExampleSecurityContext extends SecurityContext { public void setupSecurityContext(CallbackHandler handler, Subject executionSubject, Subject serviceSubject) { try { List<javax.security.auth.callback.Callback> cbs = new ArrayList<>(); cbs.add(new CallerPrincipalCallback(executionSubject, new SimplePrincipal(username))); cbs.add(new GroupPrincipalCallback(executionSubject, new String[]{role})); handler.handle(cbs.toArray(new javax.security.auth.callback.Callback[cbs.size()])); } catch (Throwable t) { throw new RuntimeException(t); } } }", "/subsystem=datasources/data-source= DATA_SOURCE :write-attribute(name=mcp,value= MCP_CLASS )", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :write-attribute(name=mcp,value= MCP_CLASS )", "/subsystem=messaging-activemq/server= SERVER /pooled-connection-factory= CONNECTION_FACTORY :write-attribute(name=managed-connection-pool,value= MCP_CLASS )", "/deployment= NAME .rar/subsystem=resource-adapters/statistics=statistics/connection-definitions=java\\:\\/testMe:read-resource(include-runtime=true)", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-all-connection-in-pool", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-gracefully-connection-in-pool", "/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-idle-connection-in-pool", 
"/subsystem=resource-adapters/resource-adapter= RESOURCE_ADAPTER /connection-definitions= CONNECTION_DEFINITION :flush-invalid-connection-in-pool" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/jakarta_connectors_management
Chapter 3. Important Changes to External Kernel Parameters
Chapter 3. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 7.6. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. Kernel parameters hardened_usercopy = [KNL] This parameter specifies whether hardening is enabled (the default) or disabled for this boot. Hardened usercopy checking is used to protect the kernel from reading or writing beyond known memory allocation boundaries as a proactive defense against bounds-checking flaws in the kernel's copy_to_user() / copy_from_user() interface. The valid settings are: on , off . on - Perform hardened usercopy checks (default). off - Disable hardened usercopy checks. no-vmw-sched-clock [X86,PV_OPS] Disables the paravirtualized VMware scheduler clock and uses the default one. rdt = [HW,X86,RDT] Turns on or off individual RDT features. Available features are: cmt , mbmtotal , mbmlocal , l3cat , l3cdp , l2cat , l2cdp , mba . For example, to turn on cmt and turn off mba , use: nospec_store_bypass_disable [HW] Disables all mitigations for the Speculative Store Bypass vulnerability. For more in-depth information about the Speculative Store Bypass (SSB) vulnerability, see Kernel Side-Channel Attack using Speculative Store Bypass - CVE-2018-3639 . spec_store_bypass_disable = [HW] Certain CPUs are vulnerable to an exploit against a common industry-wide performance optimization known as Speculative Store Bypass. In such cases, recent stores to the same memory location cannot always be observed by later loads during speculative execution. However, such stores are unlikely and thus they can be detected prior to instruction retirement at the end of a particular speculation execution window. In vulnerable processors, the speculatively forwarded store can be used in a cache side channel attack. An example of this is reading memory to which the attacker does not directly have access, for example from inside sandboxed code. This parameter controls whether the Speculative Store Bypass (SSB) optimization is disabled in order to mitigate the SSB vulnerability. Possible values are: on - Unconditionally disable SSB. off - Unconditionally enable SSB. auto - Kernel detects whether the CPU model contains an implementation of SSB and selects the most appropriate mitigation. prctl - Controls SSB for a thread using prctl. SSB is enabled for a process by default. The state of the control is inherited on fork. Not specifying this option is equivalent to spec_store_bypass_disable=auto . For more in-depth information about the Speculative Store Bypass (SSB) vulnerability, see Kernel Side-Channel Attack using Speculative Store Bypass - CVE-2018-3639 . nmi_watchdog = [KNL,BUGS=X86] These settings can now be accessed at runtime with the use of the nmi_watchdog and hardlockup_panic sysctls. New and updated /proc/sys/kernel/ entries hardlockup_panic This parameter controls whether the kernel panics if a hard lockup is detected. Possible values are: 0 - Do not panic on hard lockup. 1 - Panic on hard lockup. This can also be set using the nmi_watchdog kernel parameter. perf_event_mlock_kb Controls the size of the per-CPU ring buffer that is not counted against the mlock limit. The default value is 512 + 1 page. perf_event_paranoid Controls use of the performance events system by unprivileged users (without CAP_SYS_ADMIN ). The default value is 2 .
Possible values are: -1 - Allow use of the majority of events by all users. >=0 - Disallow ftrace function tracepoint and raw tracepoint access by users without CAP_SYS_ADMIN . >=1 - Disallow CPU event access by users without CAP_SYS_ADMIN . >=2 - Disallow kernel profiling by users without CAP_SYS_ADMIN . New /proc/sys/net/core entries bpf_jit_harden Enables hardening for the Berkeley Packet Filter (BPF) Just in Time (JIT) compiler. Supported are Extended Berkeley Packet Filter (eBPF) JIT backends. Enabling hardening trades off performance, but can mitigate JIT spraying. Possible values are: 0 - Disable JIT hardening (default value). 1 - Enable JIT hardening for unprivileged users only. 2 - Enable JIT hardening for all users.
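As a short, hedged example of how these parameters are typically applied on a running system (the values shown are illustrative, not recommendations from this chapter): boot-time options such as spec_store_bypass_disable=prctl or hardened_usercopy=off are appended to the kernel command line, while the /proc/sys entries listed above can be inspected and adjusted at runtime with sysctl, for example:
sysctl kernel.hardlockup_panic
sysctl -w kernel.perf_event_paranoid=2
sysctl -w net.core.bpf_jit_harden=1
Settings changed with sysctl -w last only until the next reboot; values that must persist are normally placed in a configuration file under /etc/sysctl.d/.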
[ "rdt=cmt,!mba" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/chap-red_hat_enterprise_linux-7.6_release_notes-kernel_parameters_changes
Chapter 2. AMQP
Chapter 2. AMQP Since Camel 1.2 Both producer and consumer are supported The AMQP component supports the AMQP 1.0 protocol using the JMS Client API of the Qpid project. 2.1. Dependencies When using camel-amqp with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-amqp-starter</artifactId> </dependency> 2.2. URI format amqp:[queue:|topic:]destinationName[?options] 2.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 2.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 2.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 2.4. Component Options The AMQP component supports 100 options, which are listed below. Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. The clientId option is compulsory with JMS 1.1 durable topic subscriptions, because the client ID is used to control which client messages have to be stored for. With JMS 2.0 clients, clientId may be omitted, which creates a 'global' subscription. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured for a JMS 1.1 durable subscription, and may be configured for JMS 2.0, to create a private durable subscription. String includeAmqpAnnotations (common) Whether to include AMQP annotations when mapping from AMQP to Camel Message. 
Setting this to true maps AMQP message annotations that contain a JMS_AMQP_MA_ prefix to message headers. Due to limitations in Apache Qpid JMS API, currently delivery annotations are ignored. false boolean jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 
When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id, if the client ID is configured. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. 
false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToConsumerType (consumer (advanced)) The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. 
-1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. 
However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. 
false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowAutoWiredConnectionFactory (advanced) Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true boolean allowAutoWiredDestinationResolver (advanced) Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) To use a shared JMS configuration. JmsConfiguration destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. 
MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues. QueueBrowseStrategy receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. 
Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 2.5. Endpoint Options The AMQP endpoint is configured using URI syntax: with the following path and query parameters: 2.5.1. 
Path Parameters (2 parameters) Name Description Default Type destinationType (common) The kind of destination to use. Enum values: queue topic temp-queue temp-topic queue String destinationName (common) Required Name of the queue or topic to use as destination. String 2.5.2. Query Parameters (96 parameters) Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1. The clientId option is compulsory with JMS 1.1 durable topic subscriptions, because the client ID is used to control which client messages have to be stored for. With JMS 2.0 clients, clientId may be omitted, which creates a 'global' subscription. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured for a JMS 1.1 durable subscription, and may be configured for JMS 2.0, to create a private durable subscription. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. 
Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id, if the client ID is configured. Default is the class name of the specified message listener. 
Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. 
Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToConsumerType (consumer (advanced)) The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. 
false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. 
true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). 
String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. 
Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. 
false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. 
Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 2.6. Usage As the AMQP component is inherited from the JMS component, its usage is almost identical to that of the JMS component: Using AMQP component // Consuming from AMQP queue from("amqp:queue:incoming"). to(...); // Sending message to the AMQP topic from(...). to("amqp:topic:notify"); 2.7. Configuring AMQP component Creating AMQP 1.0 component AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672"); AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent("amqp://localhost:5672", "user", "password"); You can also add an instance of org.apache.camel.component.amqp.AMQPConnectionDetails to the registry in order to automatically configure the AMQP component. For example, for Spring Boot you just have to define a bean: AMQP connection details auto-configuration @Bean AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails("amqp://localhost:5672"); } @Bean AMQPConnectionDetails securedAmqpConnection() { return new AMQPConnectionDetails("amqp://localhost:5672", "username", "password"); } Likewise, you can also use CDI producer methods when using Camel-CDI: AMQP connection details auto-configuration for CDI @Produces AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails("amqp://localhost:5672"); } You can also rely on Camel properties to read the AMQP connection details. Factory method AMQPConnectionDetails.discoverAMQP() attempts to read Camel properties in a Kubernetes-like convention, as demonstrated in the snippet below: AMQP connection details auto-configuration export AMQP_SERVICE_HOST = "mybroker.com" export AMQP_SERVICE_PORT = "6666" export AMQP_SERVICE_USERNAME = "username" export AMQP_SERVICE_PASSWORD = "password" ... @Bean AMQPConnectionDetails amqpConnection() { return AMQPConnectionDetails.discoverAMQP(); } Enabling AMQP specific options If you, for example, need to enable amqp.traceFrames you can do that by appending the option to your URI, like the following example: AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672?amqp.traceFrames=true"); For reference, see the QPID JMS client configuration.
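Before moving on to topics, here is a short sketch of how some of the producer options from the tables above can be combined on an endpoint URI. Only the option names come from the tables; the queue and topic names, the 5 second timeout and the QoS values are invented for illustration and are not taken from the official documentation.

import org.apache.camel.builder.RouteBuilder;

public class AmqpOptionRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Request/reply over AMQP: when the exchange is InOut (for example, when sent with
        // ProducerTemplate.requestBody), the producer waits up to 5 seconds for the reply
        // on the fixed queue "quoteReplies", using an exclusive reply consumer.
        from("direct:quote")
            .to("amqp:queue:quoteRequests?requestTimeout=5000&replyTo=quoteReplies&replyToType=Exclusive");

        // Fire-and-forget producer with explicit QoS: non-persistent delivery, priority 7
        // and a 30 second time-to-live; explicitQosEnabled must be true for these values to apply.
        from("direct:audit")
            .to("amqp:topic:notify?explicitQosEnabled=true&deliveryPersistent=false&priority=7&timeToLive=30000");
    }
}

Because these are plain endpoint options, the same URIs work whether the component was created with AMQPComponent.amqpComponent() or auto-configured from AMQPConnectionDetails.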
2.8. Using topics To get topics working with camel-amqp, you need to configure the component to use topic:// as the topic prefix, as shown below: <bean id="amqp" class="org.apache.camel.component.amqp.AmqpComponent"> <property name="connectionFactory"> <bean class="org.apache.qpid.jms.JmsConnectionFactory" factory-method="createFromURL"> <property name="remoteURI" value="amqp://localhost:5672" /> <property name="topicPrefix" value="topic://" /> <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 --> </bean> </property> </bean> Keep in mind that both AMQPComponent#amqpComponent() methods and AMQPConnectionDetails pre-configure the component with the topic prefix, so you don't have to configure it explicitly. 2.9. Spring Boot Auto-Configuration The component supports 101 options, which are listed below; a Java configuration sketch follows the table. Name Description Default Type camel.component.amqp.accept-messages-while-stopping Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option. false Boolean camel.component.amqp.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. AUTO_ACKNOWLEDGE String camel.component.amqp.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String camel.component.amqp.allow-auto-wired-connection-factory Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true Boolean camel.component.amqp.allow-auto-wired-destination-resolver Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true Boolean camel.component.amqp.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then a JMSException is thrown. true Boolean camel.component.amqp.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false Boolean camel.component.amqp.allow-serialized-headers Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable.
Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.amqp.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false Boolean camel.component.amqp.artemis-consumer-priority Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). Integer camel.component.amqp.artemis-streaming-enabled Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false Boolean camel.component.amqp.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer picks up the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false Boolean camel.component.amqp.async-start-listener Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false Boolean camel.component.amqp.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.amqp.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.amqp.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.amqp.cache-level Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. Integer camel.component.amqp.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.amqp.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions with JMS 1.1. String camel.component.amqp.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 Integer camel.component.amqp.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. JmsConfiguration camel.component.amqp.connection-factory The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. ConnectionFactory camel.component.amqp.consumer-type The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.amqp.correlation-property When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String camel.component.amqp.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutorType camel.component.amqp.delivery-delay Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 Long camel.component.amqp.delivery-mode Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.amqp.delivery-persistent Specifies whether persistent delivery is used by default. 
true Boolean camel.component.amqp.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. DestinationResolver camel.component.amqp.disable-reply-to Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route messages from one system to another. false Boolean camel.component.amqp.disable-time-to-live Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to achieve. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false Boolean camel.component.amqp.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String camel.component.amqp.eager-loading-of-properties Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false Boolean camel.component.amqp.eager-poison-body If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to ${exception.message} String camel.component.amqp.enabled Whether to enable auto configuration of the amqp component. This is enabled by default. Boolean camel.component.amqp.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. ErrorHandler camel.component.amqp.error-handler-log-stack-trace Allows to control whether stacktraces should be logged or not, by the default errorHandler. true Boolean camel.component.amqp.error-handler-logging-level Allows to configure the default errorHandler logging level for logging uncaught exceptions.
LoggingLevel camel.component.amqp.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. ExceptionListener camel.component.amqp.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.amqp.expose-listener-session Specifies whether the listener session should be exposed when consuming messages. false Boolean camel.component.amqp.force-send-original-message When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.amqp.format-date-headers-to-iso8601 Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false Boolean camel.component.amqp.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.amqp.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 Integer camel.component.amqp.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.amqp.include-all-jmsx-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.amqp.include-amqp-annotations Whether to include AMQP annotations when mapping from AMQP to Camel Message. Setting this to true maps AMQP message annotations that contain a JMS_AMQP_MA_ prefix to message headers. Due to limitations in Apache Qpid JMS API, currently delivery annotations are ignored. false Boolean camel.component.amqp.include-sent-jms-message-id Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.amqp.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. 
You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy camel.component.amqp.jms-message-type Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType camel.component.amqp.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.amqp.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.amqp.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.amqp.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. Integer camel.component.amqp.max-messages-per-task The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 Integer camel.component.amqp.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. MessageConverter camel.component.amqp.message-created-strategy To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. MessageCreatedStrategy camel.component.amqp.message-id-enabled When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true Boolean camel.component.amqp.message-listener-container-factory Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. 
MessageListenerContainerFactory camel.component.amqp.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true Boolean camel.component.amqp.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.amqp.preserve-message-qos Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false Boolean camel.component.amqp.priority Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.amqp.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.amqp.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. QueueBrowseStrategy camel.component.amqp.receive-timeout The timeout for receiving messages (in milliseconds). The option is a long type. 1000 Long camel.component.amqp.recovery-interval Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. 5000 Long camel.component.amqp.reply-to Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String camel.component.amqp.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.amqp.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.amqp.reply-to-consumer-type The consumer type of the reply consumer (when doing request/reply), which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. 
When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.amqp.reply-to-delivery-persistent Specifies whether to use persistent delivery by default for replies. true Boolean camel.component.amqp.reply-to-destination-selector-name Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String camel.component.amqp.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. Integer camel.component.amqp.reply-to-on-timeout-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 Integer camel.component.amqp.reply-to-override Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String camel.component.amqp.reply-to-same-destination-allowed Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false Boolean camel.component.amqp.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. ReplyToType camel.component.amqp.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. 20000 Long camel.component.amqp.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. 1000 Long camel.component.amqp.selector Sets the JMS selector to use. String camel.component.amqp.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. 
By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false Boolean camel.component.amqp.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.amqp.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.amqp.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false Boolean camel.component.amqp.synchronous Sets whether synchronous processing should be strictly used. false Boolean camel.component.amqp.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. TaskExecutor camel.component.amqp.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false Boolean camel.component.amqp.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds). -1 Long camel.component.amqp.transacted Specifies whether to use transacted mode. false Boolean camel.component.amqp.transacted-in-out Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. 
Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false Boolean camel.component.amqp.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.amqp.transaction-name The name of the transaction to use. String camel.component.amqp.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. -1 Integer camel.component.amqp.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false Boolean camel.component.amqp.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false Boolean camel.component.amqp.use-message-id-as-correlation-id Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false Boolean camel.component.amqp.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.amqp.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.amqp.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. 100 Long
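The camel.component.amqp.* properties above configure the auto-created component from application properties. If you prefer to configure the component in Java, the following Spring Boot sketch builds a setup equivalent to the XML example in the Using topics section. It is an illustrative sketch only: the broker URL is a placeholder, and it assumes that the camel-amqp-starter and the Qpid JMS client are on the classpath.

import org.apache.camel.component.amqp.AMQPComponent;
import org.apache.qpid.jms.JmsConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AmqpComponentConfig {

    // Registers the component under the "amqp" scheme so that Camel Spring Boot picks it up
    // from the registry instead of creating a default one.
    @Bean("amqp")
    public AMQPComponent amqpComponent() {
        // Placeholder broker URL; the topic prefix mirrors the XML example and is only
        // needed for brokers such as ActiveMQ over AMQP 1.0.
        JmsConnectionFactory connectionFactory = new JmsConnectionFactory("amqp://localhost:5672");
        connectionFactory.setTopicPrefix("topic://");

        AMQPComponent amqp = new AMQPComponent();
        amqp.setConnectionFactory(connectionFactory);
        return amqp;
    }
}

With this bean in place, routes can keep using amqp: URIs exactly as in the earlier examples.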
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-amqp-starter</artifactId> </dependency>", "amqp:[queue:|topic:]destinationName[?options]", "amqp:destinationType:destinationName", "// Consuming from AMQP queue from(\"amqp:queue:incoming\"). to(...); // Sending message to the AMQP topic from(...). to(\"amqp:topic:notify\");", "AMQPComponent amqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672\"); AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672\", \"user\", \"password\");", "@Bean AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\"); } @Bean AMQPConnectionDetails securedAmqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\", \"username\", \"password\"); }", "@Produces AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\"); }", "export AMQP_SERVICE_HOST = \"mybroker.com\" export AMQP_SERVICE_PORT = \"6666\" export AMQP_SERVICE_USERNAME = \"username\" export AMQP_SERVICE_PASSWORD = \"password\" @Bean AMQPConnectionDetails amqpConnection() { return AMQPConnectionDetails.discoverAMQP(); }", "AMQPComponent amqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672?amqp.traceFrames=true\");", "<bean id=\"amqp\" class=\"org.apache.camel.component.amqp.AmqpComponent\"> <property name=\"connectionFactory\"> <bean class=\"org.apache.qpid.jms.JmsConnectionFactory\" factory-method=\"createFromURL\"> <property name=\"remoteURI\" value=\"amqp://localhost:5672\" /> <property name=\"topicPrefix\" value=\"topic://\" /> <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 --> </bean> </property> </bean>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-amqp-component-starter
Chapter 7. Adding certification components
Chapter 7. Adding certification components After creating the new product listing, add the certification components to it. You can configure the following options for the newly added components: Note The component configurations differ for different product categories. Section 7.1, "Certification" Section 7.2, "Component Details" Section 7.3, "Contact Information" Section 7.4, "Associated products" To configure the component options, go to the Components tab and click on any of the existing components. 7.1. Certification Verify the functionality of your product on Red Hat Enterprise Linux Verify the functionality of your product on Red Hat Enterprise Linux by using the Certification tab. This feature allows you to perform the following functions: Run the Red Hat Certification Tool locally Download the test plan Share the test results with the Red Hat certification team Interact with the certification team, if required. To verify the functionality of your product, perform the following steps: If you are a new partner, click Request a partner subscription . When your request is approved, you get active subscriptions added to your account. When you have active partner subscriptions, then click Start certification . Click Go to Red Hat certification tool . A new certification case gets created on the Red Hat Certification portal , and you are redirected to the appropriate component portal page. The certification team will contact you to start the certification testing process and will follow up with you in case of a problem. After successful verification, a green check mark is displayed with the validate complete message. To review the product details, click Review . 7.2. Component Details Enter the required project details in the following fields: Project name - Enter the project name. This name is not published and is only for internal use. Red Hat Enterprise Linux (RHEL) Version - Specifies the RHEL version on which you wish to certify your non-containerized product component. Note You cannot change the RHEL version after you have created the component. 7.3. Contact Information Note Providing information for this tab is optional. In the Contact Information tab, enter the primary technical contact details of your product component. Optional: In the Technical contact email address field, enter the email address of the image maintainer. Optional: To add additional contacts for your component, click + Add new contact . Click Save . 7.4. Associated products The Associated Product tab provides the list of products that are associated with your product component along with the following information: Product Name Type - Traditional application Visibility - Published or Not Published Last Activity - number of days before you ran the test To add products to your component, perform the following: If you want to find a product by its name, enter the product name in the Search by name text box and click the search icon. If you are not sure of the product name, click Find a product . From the Add product dialog, select the required product from the Available products list box and click the forward arrow. The selected product is added to the Chosen products list box. Click Update attached products . Added products are listed in the Associated product list. Note All the fields marked with an asterisk * are required and must be completed before you can proceed with the certification.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/adding-certification-components_openshift-sw-cert-workflow-onboarding-certification-partners
Preface
Preface You can install Red Hat Developer Hub on OpenShift Container Platform by using one of the following installers: The Red Hat Developer Hub Operator Ready for immediate use in OpenShift Container Platform after an administrator installs it with OperatorHub Uses Operator Lifecycle Management (OLM) to manage automated subscription updates on OpenShift Container Platform Requires preinstallation of Operator Lifecycle Management (OLM) to manage automated subscription updates on Kubernetes The Red Hat Developer Hub Helm chart Ready for immediate use in both OpenShift Container Platform and Kubernetes Requires manual installation and management Use the installation method that best meets your needs and preferences. Additional resources For more information about choosing an installation method, see Helm Charts vs. Operators For more information about the Operator method, see Understanding Operators . For more information about the Helm chart method, see Understanding Helm .
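For readers who want a concrete starting point with the Helm chart method, the following is a minimal sketch only. The repository alias, chart name, release name, and namespace shown here are illustrative placeholders, not values confirmed by this document; consult the Helm chart documentation linked above for the actual chart coordinates.

```
# Minimal sketch, assuming a Developer Hub chart is published in a repository you
# have already added under the alias "developer-hub-charts"; all names below are
# placeholders to adapt to your environment.
helm repo update
helm install my-developer-hub developer-hub-charts/developer-hub \
    --namespace rhdh --create-namespace
helm status my-developer-hub --namespace rhdh
```

The Operator method follows the usual OperatorHub/OLM subscription flow in the OpenShift web console instead of the commands above.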
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_openshift_container_platform/pr01
probe::nfs.proc.commit
probe::nfs.proc.commit Name probe::nfs.proc.commit - NFS client committing data on server Synopsis nfs.proc.commit Values size read bytes in this execution prot transfer protocol version NFS version server_ip IP address of server bitmask1 V4 bitmask representing the set of attributes supported on this filesystem offset the file offset bitmask0 V4 bitmask representing the set of attributes supported on this filesystem Description All the nfs.proc.commit kernel functions were removed in kernel commit 200baa in December 2006, so these probes do not exist on Linux 2.6.21 and newer kernels. Fires when the client commits buffered data to disk on the server. The buffered data was written asynchronously by the client earlier; the commit operation itself is synchronous. This probe point does not exist in NFSv2.
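As a worked illustration of how this probe point could be used, the short SystemTap one-liner below prints the documented probe values (size, offset, version) for every commit the client issues. This is a sketch only: as noted in the description, the underlying kernel functions were removed long ago, so it applies only to kernels old enough to still provide them.

```
# Sketch: trace NFS client commits on a kernel that still provides these functions.
stap -e 'probe nfs.proc.commit {
    printf("%s: NFSv%d commit of %d bytes at offset %d\n",
           execname(), version, size, offset)
}'
```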
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-proc-commit
Chapter 6. Getting Started with nftables
Chapter 6. Getting Started with nftables The nftables framework provides packet classification facilities and it is the designated successor to the iptables , ip6tables , arptables , ebtables , and ipset tools. It offers numerous improvements in convenience, features, and performance over packet-filtering tools, most notably: built-in lookup tables instead of linear processing a single framework for both the IPv4 and IPv6 protocols rules all applied atomically instead of fetching, updating, and storing a complete rule set support for debugging and tracing in the rule set ( nftrace ) and monitoring trace events (in the nft tool) more consistent and compact syntax, no protocol-specific extensions a Netlink API for third-party applications Similarly to iptables , nftables use tables for storing chains. The chains contain individual rules for performing actions. The nft tool replaces all tools from the packet-filtering frameworks. The libnftnl library can be used for low-level interaction with nftables Netlink API over the libmnl library. To display the effect of rule set changes, use the nft list ruleset command. Since these tools add tables, chains, rules, sets, and other objects to the nftables rule set, be aware that nftables rule-set operations, such as the nft flush ruleset command, might affect rule sets installed using the formerly separate legacy commands. When to use firewalld or nftables firewalld : Use the firewalld utility for simple firewall use cases. The utility is easy to use and covers the typical use cases for these scenarios. nftables : Use the nftables utility to set up complex and performance critical firewalls, such as for a whole network. Important To prevent the different firewall services from influencing each other, run only one of them on a RHEL host, and disable the other services. 6.1. Writing and executing nftables scripts The nftables framework provides a native scripting environment that brings a major benefit over using shell scripts to maintain firewall rules: the execution of scripts is atomic. This means that the system either applies the whole script or prevents the execution if an error occurs. This guarantees that the firewall is always in a consistent state. Additionally, the nftables script environment enables administrators to: add comments define variables include other rule set files This section explains how to use these features, as well as how to create and execute nftables scripts. When you install the nftables package, Red Hat Enterprise Linux automatically creates *.nft scripts in the /etc/nftables/ directory. These scripts contain commands that create tables and empty chains for different purposes. 6.1.1. Supported nftables script formats The nftables scripting environment supports scripts in the following formats: You can write a script in the same format as the nft list ruleset command displays the rule set: You can use the same syntax for commands as in nft commands: 6.1.2. Running nftables scripts You can run an nftables script either by passing it to the nft utility or by executing the script directly. Prerequisites The procedure of this section assumes that you stored an nftables script in the /etc/nftables/example_firewall.nft file. Procedure 6.1. Running nftables scripts using the nft utility To run an nftables script by passing it to the nft utility, enter: Procedure 6.2.
Running the nftables script directly: Steps that are required only once: Ensure that the script starts with the following shebang sequence: Important If you omit the -f parameter, the nft utility does not read the script and displays: Error: syntax error, unexpected newline, expecting string. Optional: Set the owner of the script to root : Make the script executable for the owner: Run the script: If no output is displayed, the system executed the script successfully. Important Even if nft executes the script successfully, incorrectly placed rules, missing parameters, or other problems in the script can cause the firewall to behave unexpectedly. Additional resources For details about setting the owner of a file, see the chown(1) man page. For details about setting permissions of a file, see the chmod(1) man page. For more information about loading nftables rules at system boot, see Section 6.1.6, "Automatically loading nftables rules when the system boots" 6.1.3. Using comments in nftables scripts The nftables scripting environment interprets everything to the right of a # character as a comment. Example 6.1. Comments in an nftables script Comments can start at the beginning of a line, as well as next to a command: 6.1.4. Using variables in an nftables script To define a variable in an nftables script, use the define keyword. You can store single values and anonymous sets in a variable. For more complex scenarios, use named sets or verdict maps. Variables with a single value The following example defines a variable named INET_DEV with the value enp1s0 : You can use the variable in the script by writing the $ sign followed by the variable name: Variables that contain an anonymous set The following example defines a variable that contains an anonymous set: You can use the variable in the script by writing the $ sign followed by the variable name: Note Note that curly braces have special semantics when you use them in a rule because they indicate that the variable represents a set. Additional resources For more information about sets, see Section 6.4, "Using sets in nftables commands" . For more information about verdict maps, see Section 6.5, "Using verdict maps in nftables commands" . 6.1.5. Including files in an nftables script The nftables scripting environment enables administrators to include other scripts by using the include statement. If you specify only a file name without an absolute or relative path, nftables includes files from the default search path, which is set to /etc on Red Hat Enterprise Linux. Example 6.2. Including files from the default search directory To include a file from the default search directory: Example 6.3. Including all *.nft files from a directory To include all files ending in *.nft that are stored in the /etc/nftables/rulesets/ directory: Note that the include statement does not match files beginning with a dot. Additional resources For further details, see the Include files section in the nft(8) man page. 6.1.6. Automatically loading nftables rules when the system boots The nftables systemd service loads firewall scripts that are included in the /etc/sysconfig/nftables.conf file. This section explains how to load firewall rules when the system boots. Prerequisites The nftables scripts are stored in the /etc/nftables/ directory. Procedure 6.3. Automatically loading nftables rules when the system boots Edit the /etc/sysconfig/nftables.conf file.
If you enhance *.nft scripts created in /etc/nftables/ when you installed the nftables package, uncomment the include statement for these scripts. If you write scripts from scratch, add include statements to include these scripts. For example, to load the /etc/nftables/example.nft script when the nftables service starts, add: Optionally, start the nftables service to load the firewall rules without rebooting the system: Enable the nftables service. Additional resources For more information, see Section 6.1.1, "Supported nftables script formats"
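The commands quoted below cover the individual steps; as a quick orientation, the following condensed sketch strings the main steps of this chapter together for the example script path used above: apply the script once, verify the result, then include it in /etc/sysconfig/nftables.conf and enable the service so it loads at boot.

```
# Condensed sketch of the workflow in this section; adjust the script path to your setup.
nft -f /etc/nftables/example_firewall.nft          # apply the script atomically
nft list ruleset                                   # display the resulting rule set
echo 'include "/etc/nftables/example_firewall.nft"' >> /etc/sysconfig/nftables.conf
systemctl start nftables                           # optional: load the rules now
systemctl enable nftables                          # load the rules at every boot
```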
[ "#!/usr/sbin/nft -f Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } }", "#!/usr/sbin/nft -f Flush the rule set flush ruleset Create a table add table inet example_table Create a chain for incoming packets that drops all packets that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept", "nft -f /etc/nftables/example_firewall.nft", "#!/usr/sbin/nft -f", "chown root /etc/nftables/ example_firewall.nft", "chmod u+x /etc/nftables/ example_firewall.nft", "/etc/nftables/ example_firewall.nft", "Flush the rule set flush ruleset add table inet example_table # Create a table", "define INET_DEV = enp1s0", "add rule inet example_table example_chain iifname USDINET_DEV tcp dport ssh accept", "define DNS_SERVERS = { 192.0.2.1, 192.0.2.2 }", "add rule inet example_table example_chain ip daddr USDDNS_SERVERS accept", "include \"example.nft\"", "include \"/etc/nftables/rulesets/*.nft\"", "include \"/etc/nftables/example.nft\"", "systemctl start nftables", "systemctl enable nftables" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-getting_started_with_nftables
Chapter 4. Installing Hosts for Red Hat Virtualization
Chapter 4. Installing Hosts for Red Hat Virtualization Red Hat Virtualization supports two types of hosts: Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts . Depending on your environment, you may want to use one type only, or both. At least two hosts are required for features such as migration and high availability. See Section 4.3, "Recommended Practices for Configuring Host Networks" for networking information. Important SELinux is in enforcing mode upon installation. To verify, run getenforce . SELinux must be in enforcing mode on all hosts and Managers for your Red Hat Virtualization environment to be supported. Table 4.1. Host Types Host Type Other Names Description Red Hat Virtualization Host RHVH, thin host This is a minimal operating system based on Red Hat Enterprise Linux. It is distributed as an ISO file from the Customer Portal and contains only the packages required for the machine to act as a host. Red Hat Enterprise Linux host RHEL host, thick host Red Hat Enterprise Linux systems with the appropriate subscriptions attached can be used as hosts. Host Compatibility When you create a new data center, you can set the compatibility version. Select the compatibility version that suits all the hosts in the data center. Once set, version regression is not allowed. For a fresh Red Hat Virtualization installation, the latest compatibility version is set in the default data center and default cluster; to use an earlier compatibility version, you must create additional data centers and clusters. For more information about compatibility versions see Red Hat Virtualization Manager Compatibility in Red Hat Virtualization Life Cycle . 4.1. Red Hat Virtualization Hosts 4.1.1. Installing Red Hat Virtualization Hosts Red Hat Virtualization Host (RHVH) is a minimal operating system based on Red Hat Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in a Red Hat Virtualization environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See http://cockpit-project.org/running.html for the minimum browser requirements. RHVH supports NIST 800-53 partitioning requirements to improve security. RHVH uses a NIST 800-53 partition layout by default. The host must meet the minimum host requirements . Procedure Download the RHVH ISO image from the Customer Portal: Log in to the Customer Portal at https://access.redhat.com . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and click Download Now . Create a bootable media device. See Making Media in the Red Hat Enterprise Linux Installation Guide for more information. Start the machine on which you are installing RHVH, booting from the prepared installation media. From the boot menu, select Install RHVH 4.3 and press Enter . Note You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu. Select a language, and click Continue . Select a time zone from the Date & Time screen and click Done . Select a keyboard layout from the Keyboard screen and click Done .
Select the device on which to install RHVH from the Installation Destination screen. Optionally, enable encryption. Click Done . Important Red Hat strongly recommends using the Automatically configure partitioning option. Select a network from the Network & Host Name screen and click Configure... to configure the connection details. Note To use the connection every time the system boots, select the Automatically connect to this network when it is available check box. For more information, see Edit Network Connections in the Red Hat Enterprise Linux 7 Installation Guide . Enter a host name in the Host name field, and click Done . Optionally configure Language Support , Security Policy , and Kdump . See Installing Using Anaconda in the Red Hat Enterprise Linux 7 Installation Guide for more information on each of the sections in the Installation Summary screen. Click Begin Installation . Set a root password and, optionally, create an additional user while RHVH installs. Warning Red Hat strongly recommends not creating untrusted users on RHVH, as this can lead to exploitation of local security vulnerabilities. Click Reboot to complete the installation. Note When RHVH restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information. The service is enabled by default. 4.1.2. Enabling the Red Hat Virtualization Host Repository Register the system to receive updates. Red Hat Virtualization Host only requires one repository. This section provides instructions for registering RHVH with the Content Delivery Network , or with Red Hat Satellite 6 . Registering RHVH with the Content Delivery Network Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Navigate to Subscriptions , click Register System , and enter your Customer Portal user name and password. The Red Hat Virtualization Host subscription is automatically attached to the system. Click Terminal . Enable the Red Hat Virtualization Host 7 repository to allow later updates to the Red Hat Virtualization Host: Registering RHVH with Red Hat Satellite 6 Log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . Click Terminal . Register RHVH with Red Hat Satellite 6: 4.1.3. Advanced Installation 4.1.3.1. Custom Partitioning Custom partitioning on Red Hat Virtualization Host (RHVH) is not recommended. Red Hat strongly recommends using the Automatically configure partitioning option in the Installation Destination window. If your installation requires custom partitioning, select the I will configure partitioning option during the installation, and note that the following restrictions apply: Ensure the default LVM Thin Provisioning option is selected in the Manual Partitioning window. The following directories are required and must be on thin provisioned logical volumes: root ( / ) /home /tmp /var /var/crash /var/log /var/log/audit Important Do not create a separate partition for /usr . Doing so will cause the installation to fail. /usr must be on a logical volume that is able to change versions along with RHVH, and therefore should be left on root ( / ). For information about the required storage sizes for each partition, see Section 2.2.3, "Storage Requirements" . The /boot directory should be defined as a standard partition. The /var directory must be on a separate volume or disk. Only XFS or Ext4 file systems are supported. 
Configuring Manual Partitioning in a Kickstart File The following example demonstrates how to configure manual partitioning in a Kickstart file. Note If you use logvol --thinpool --grow , you must also include volgroup --reserved-space or volgroup --reserved-percent to reserve space in the volume group for the thin pool to grow. 4.1.3.2. Automating Red Hat Virtualization Host Deployment You can install Red Hat Virtualization Host (RHVH) without a physical media device by booting from a PXE server over the network with a Kickstart file that contains the answers to the installation questions. General instructions for installing from a PXE server with a Kickstart file are available in the Red Hat Enterprise Linux Installation Guide , as RHVH is installed in much the same way as Red Hat Enterprise Linux. RHVH-specific instructions, with examples for deploying RHVH with Red Hat Satellite, are described below. The automated RHVH deployment has 3 stages: Section 4.1.3.2.1, "Preparing the Installation Environment" Section 4.1.3.2.2, "Configuring the PXE Server and the Boot Loader" Section 4.1.3.2.3, "Creating and Running a Kickstart File" 4.1.3.2.1. Preparing the Installation Environment Log in to the Customer Portal . Click Downloads in the menu bar. Click Red Hat Virtualization . Scroll up and click Download Latest to access the product download page. Go to Hypervisor Image for RHV 4.3 and click Download Now . Make the RHVH ISO image available over the network. See Installation Source on a Network in the Red Hat Enterprise Linux Installation Guide . Extract the squashfs.img hypervisor image file from the RHVH ISO: Note This squashfs.img file, located in the /tmp/usr/share/redhat-virtualization-host/image/ directory, is called redhat-virtualization-host- version_number _version.squashfs.img . It contains the hypervisor image for installation on the physical machine. It should not be confused with the /LiveOS/squashfs.img file, which is used by the Anaconda inst.stage2 option. 4.1.3.2.2. Configuring the PXE Server and the Boot Loader Configure the PXE server. See Preparing for a Network Installation in the Red Hat Enterprise Linux Installation Guide . Copy the RHVH boot images to the /tftpboot directory: Create a rhvh label specifying the RHVH boot images in the boot loader configuration: RHVH Boot Loader Configuration Example for Red Hat Satellite If you are using information from Red Hat Satellite to provision the host, you must create a global or host group level parameter called rhvh_image and populate it with the directory URL where the ISO is mounted or extracted: Make the content of the RHVH ISO locally available and export it to the network, for example, using an HTTPD server: 4.1.3.2.3. Creating and Running a Kickstart File Create a Kickstart file and make it available over the network. See Kickstart Installations in the Red Hat Enterprise Linux Installation Guide . Ensure that the Kickstart file meets the following RHV-specific requirements: The %packages section is not required for RHVH. Instead, use the liveimg option and specify the redhat-virtualization-host- version_number _version.squashfs.img file from the RHVH ISO image: Autopartitioning is highly recommended: Note Thin provisioning must be used with autopartitioning. The --no-home option does not work in RHVH because /home is a required directory.
If your installation requires manual partitioning, see Section 4.1.3.1, "Custom Partitioning" for a list of limitations that apply to partitions and an example of manual partitioning in a Kickstart file. A %post section that calls the nodectl init command is required: Kickstart Example for Deploying RHVH on Its Own This Kickstart example shows you how to deploy RHVH. You can include additional commands and options as required. Kickstart Example for Deploying RHVH with Registration and Network Configuration from Satellite This Kickstart example uses information from Red Hat Satellite to configure the host network and register the host to the Satellite server. You must create a global or host group level parameter called rhvh_image and populate it with the directory URL to the squashfs.img file. ntp_server1 is also a global or host group level variable. Add the Kickstart file location to the boot loader configuration file on the PXE server: Install RHVH following the instructions in Booting from the Network Using PXE in the Red Hat Enterprise Linux Installation Guide . 4.2. Red Hat Enterprise Linux hosts 4.2.1. Installing Red Hat Enterprise Linux hosts A Red Hat Enterprise Linux host is based on a standard basic installation of Red Hat Enterprise Linux 7 on a physical server, with the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions attached. For detailed installation instructions, see Performing a standard Red Hat Enterprise Linux installation . The host must meet the minimum host requirements . Important Virtualization must be enabled in your host's BIOS settings. For information on changing your host's BIOS settings, refer to your host's hardware documentation. Important Third-party watchdogs should not be installed on Red Hat Enterprise Linux hosts, as they can interfere with the watchdog daemon provided by VDSM. 4.2.2. Enabling the Red Hat Enterprise Linux host Repositories To use a Red Hat Enterprise Linux machine as a host, you must register the system with the Content Delivery Network, attach the Red Hat Enterprise Linux Server and Red Hat Virtualization subscriptions, and enable the host repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and record the pool IDs: Use the pool IDs to attach the subscriptions to the system: Note To view currently attached subscriptions: To list all enabled repositories: Configure the repositories: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER8 hardware: For Red Hat Enterprise Linux 7 hosts, little endian, on IBM POWER9 hardware: Ensure that all packages currently installed are up to date: Reboot the machine. 4.2.3. Installing Cockpit on Red Hat Enterprise Linux hosts You can install Cockpit for monitoring the host's resources and performing administrative tasks. Procedure Install the dashboard packages: Enable and start the cockpit.socket service: Check if Cockpit is an active service in the firewall: You should see cockpit listed. If it is not, enter the following with root permissions to add cockpit as a service to your firewall: The --permanent option keeps the cockpit service active after rebooting. You can log in to the Cockpit web interface at https:// HostFQDNorIP :9090 . 4.3.
Recommended Practices for Configuring Host Networks If your network environment is complex, you may need to configure a host network manually before adding the host to the Red Hat Virtualization Manager. Red Hat recommends the following practices for configuring a host network: Configure the network with Cockpit. Alternatively, you can use nmtui or nmcli . If a network is not required for a self-hosted engine deployment or for adding a host to the Manager, configure the network in the Administration Portal after adding the host to the Manager. See Creating a New Logical Network in a Data Center or Cluster . Use the following naming conventions: VLAN devices: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD VLAN interfaces: physical_device . VLAN_ID (for example, eth0.23 , eth1.128 , enp3s0.50 ) Bond interfaces: bond number (for example, bond0 , bond1 ) VLANs on bond interfaces: bond number . VLAN_ID (for example, bond0.50 , bond1.128 ) Use network bonding . Networking teaming is not supported in Red Hat Virtualization and will cause errors if the host is used to deploy a self-hosted engine or added to the Manager. Use recommended bonding modes: If the ovirtmgmt network is not used by virtual machines, the network may use any supported bonding mode. If the ovirtmgmt network is used by virtual machines, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to? . Red Hat Virtualization's default bonding mode is (Mode 4) Dynamic Link Aggregation . If your switch does not support Link Aggregation Control Protocol (LACP), use (Mode 1) Active-Backup . See Bonding Modes for details. Configure a VLAN on a physical NIC as in the following example (although nmcli is used, you can use any tool): Configure a VLAN on a bond as in the following example (although nmcli is used, you can use any tool): Do not disable firewalld . Customize the firewall rules in the Administration Portal after adding the host to the Manager. See Configuring Host Firewall Rules . Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. 4.4. Adding Standard Hosts to the Red Hat Virtualization Manager Adding a host to your Red Hat Virtualization environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge. Important When creating a management bridge that uses a static IPv6 address, disable network manager control in its interface configuration (ifcfg) file before adding a host. See https://access.redhat.com/solutions/3981311 for more information. Procedure From the Administration Portal, click Compute Hosts . Click New . Use the drop-down list to select the Data Center and Host Cluster for the new host. Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field. Select an authentication method to use for the Manager to access the host. Enter the root user's password to use password authentication. Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication. Optionally, click the Advanced Parameters button to change the following advanced host settings: Disable automatic firewall configuration. Add a host SSH fingerprint to increase security. 
You can add it manually, or fetch it automatically. Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide . Click OK . The new host displays in the list of hosts with a status of Installing , and you can view the progress of the installation in the Events section of the Notification Drawer ( ). After a brief delay the host status changes to Up .
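Before adding a host in the Administration Portal, it can help to run a quick sanity check on the host itself. The following sketch only combines checks already mentioned in this chapter, plus one assumption: /proc/net/bonding/bond0 is the kernel's standard bonding status file and is relevant only if you configured a bond as described in Section 4.3.

```
# Optional pre-checks on a prepared host (sketch; run as root).
getenforce                     # must report "Enforcing" for a supported environment
firewall-cmd --list-services   # firewalld stays enabled; rules are customized later
nodectl check                  # RHVH only: node health check described above
cat /proc/net/bonding/bond0    # if bonded: confirm the bonding mode, for example 802.3ad
```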
[ "subscription-manager repos --enable=rhel-7-server-rhvh-4-rpms", "rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm # subscription-manager register --org=\" org_id \" # subscription-manager list --available # subscription-manager attach --pool= pool_id # subscription-manager repos --disable='*' --enable=rhel-7-server-rhvh-4-rpms", "clearpart --all part /boot --fstype xfs --size=1000 --ondisk=sda part pv.01 --size=42000 --grow volgroup HostVG pv.01 --reserved-percent=20 logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow logvol / --vgname=HostVG --name=root --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=6000 --grow logvol /var --vgname=HostVG --name=var --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=15000 logvol /var/crash --vgname=HostVG --name=var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=10000 logvol /var/log --vgname=HostVG --name=var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=8000 logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=2000 logvol /home --vgname=HostVG --name=home --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000 logvol /tmp --vgname=HostVG --name=tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions=\"defaults,discard\" --size=1000", "mount -o loop /path/to/RHVH-ISO /mnt/rhvh cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp cd /tmp rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv", "cp mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/", "LABEL rhvh MENU LABEL Install Red Hat Virtualization Host KERNEL /var/lib/tftpboot/pxelinux/vmlinuz APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO", "<%# kind: PXELinux name: RHVH PXELinux %> Created for booting new hosts # DEFAULT rhvh LABEL rhvh KERNEL <%= @kernel %> APPEND initrd=<%= @initrd %> inst.ks=<%= foreman_url(\"provision\") %> inst.stage2=<%= @host.params[\"rhvh_image\"] %> intel_iommu=on console=tty0 console=ttyS1,115200n8 ssh_pwauth=1 local_boot_trigger=<%= foreman_url(\"built\") %> IPAPPEND 2", "cp -a /mnt/rhvh/ /var/www/html/rhvh-install curl URL/to/RHVH-ISO /rhvh-install", "liveimg --url= example.com /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img", "autopart --type=thinp", "%post nodectl init %end", "liveimg --url=http:// FQDN /tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host- version_number _version.squashfs.img clearpart --all autopart --type=thinp rootpw --plaintext ovirt timezone --utc America/Phoenix zerombr text reboot %post --erroronfail nodectl init %end", "<%# kind: provision name: RHVH Kickstart default oses: - RHVH %> install liveimg --url=<%= @host.params['rhvh_image'] %>squashfs.img network --bootproto static --ip=<%= @host.ip %> --netmask=<%= @host.subnet.mask %> --gateway=<%= @host.subnet.gateway %> --nameserver=<%= @host.subnet.dns_primary %> --hostname <%= @host.name %> zerombr clearpart --all autopart --type=thinp rootpw --iscrypted <%= root_pass %> installation answers lang en_US.UTF-8 timezone <%= @host.params['time-zone'] || 'UTC' %> keyboard us firewall --service=ssh services --enabled=sshd text reboot %post 
--log=/root/ks.post.log --erroronfail nodectl init <%= snippet 'subscription_manager_registration' %> <%= snippet 'kickstart_networking_setup' %> /usr/sbin/ntpdate -sub <%= @host.params['ntp_server1'] || '0.fedora.pool.ntp.org' %> /usr/sbin/hwclock --systohc /usr/bin/curl <%= foreman_url('built') %> sync systemctl reboot %end", "APPEND initrd=/var/tftpboot/pxelinux/initrd.img inst.stage2= URL/to/RHVH-ISO inst.ks= URL/to/RHVH-ks .cfg", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= poolid", "subscription-manager list --consumed", "yum repolist", "subscription-manager repos --disable='*' --enable=rhel-7-server-rpms --enable=rhel-7-server-rhv-4-mgmt-agent-rpms --enable=rhel-7-server-ansible-2.9-rpms", "subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-le-rpms --enable=rhel-7-for-power-le-rpms", "subscription-manager repos --disable='*' --enable=rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms --enable=rhel-7-for-power-9-rpms", "yum update", "yum install cockpit-ovirt-dashboard", "systemctl enable cockpit.socket systemctl start cockpit.socket", "firewall-cmd --list-services", "firewall-cmd --permanent --add-service=cockpit", "nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ivp4.gateway 123.123 .0.254", "nmcli connection add type bond con-name bond0 ifname bond0 bond.options \"mode=active-backup,miimon=100\" ipv4.method disabled ipv6.method ignore nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50 nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123 .0.1/24 +ivp4.gateway 123.123 .0.254" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Installing_Hosts_for_RHV_SM_localDB_deploy
10.3.9. Configuring Connection Settings
10.3.9. Configuring Connection Settings 10.3.9.1. Configuring 802.1X Security 802.1X security is the name of the IEEE standard for port-based Network Access Control (PNAC). Simply put, 802.1X security is a way of defining a logical network out of a physical one. All clients who want to join the logical network must authenticate with the server (a router, for example) using the correct 802.1X authentication method. 802.1X security is most often associated with securing wireless networks (WLANs), but can also be used to prevent intruders with physical access to the network (LAN) from gaining entry. In the past, DHCP servers were configured not to lease IP addresses to unauthorized users, but for various reasons this practice is both impractical and insecure, and thus is no longer recommended. Instead, 802.1X security is used to ensure a logically-secure network through port-based authentication. 802.1X provides a framework for WLAN and LAN access control and serves as an envelope for carrying one of the Extensible Authentication Protocol (EAP) types. An EAP type is a protocol that defines how WLAN security is achieved on the network. You can configure 802.1X security for a wired or wireless connection type by opening the Network Connections window (see Section 10.2.2, "Configuring New and Editing Existing Connections" ) and following the applicable procedure: Procedure 10.15. For a wired connection... Either click Add , select a new network connection for which you want to configure 802.1X security and then click Create , or select an existing connection and click Edit . Then select the 802.1X Security tab and check the Use 802.1X security for this connection check box to enable settings configuration. Proceed to Section 10.3.9.1.1, "Configuring TLS (Transport Layer Security) Settings" Procedure 10.16. For a wireless connection... Either click on Add , select a new network connection for which you want to configure 802.1X security and then click Create , or select an existing connection and click Edit . Select the Wireless Security tab. Then click the Security dropdown and choose one of the following security methods: LEAP , Dynamic WEP (802.1X) , or WPA & WPA2 Enterprise . See Section 10.3.9.1.1, "Configuring TLS (Transport Layer Security) Settings" for descriptions of which EAP types correspond to your selection in the Security dropdown. 10.3.9.1.1. Configuring TLS (Transport Layer Security) Settings With Transport Layer Security, the client and server mutually authenticate using the TLS protocol. The server demonstrates that it holds a digital certificate, the client proves its own identity using its client-side certificate, and key information is exchanged. Once authentication is complete, the TLS tunnel is no longer used. Instead, the client and server use the exchanged keys to encrypt data using AES, TKIP or WEP. The fact that certificates must be distributed to all clients who want to authenticate means that the EAP-TLS authentication method is very strong, but also more complicated to set up. Using TLS security requires the overhead of a public key infrastructure (PKI) to manage certificates. The benefit of using TLS security is that a compromised password does not allow access to the (W)LAN: an intruder must also have access to the authenticating client's private key. NetworkManager does not determine the version of TLS supported. NetworkManager gathers the parameters entered by the user and passes them to the daemon, wpa_supplicant , that handles the procedure. 
It in turn uses OpenSSL to establish the TLS tunnel. OpenSSL itself negotiates the SSL/TLS protocol version. It uses the highest version both ends support. Identity Identity string for EAP authentication methods, such as a user name or login name. User certificate Click to browse for, and select, a user's certificate. CA certificate Click to browse for, and select, a Certificate Authority's certificate. Private key Click to browse for, and select, a user's private key file. Note that the key must be password protected. Private key password Enter the user password corresponding to the user's private key. 10.3.9.1.2. Configuring Tunneled TLS Settings Anonymous identity This value is used as the unencrypted identity. CA certificate Click to browse for, and select, a Certificate Authority's certificate. Inner authentication PAP - Password Authentication Protocol. MSCHAP - Challenge Handshake Authentication Protocol. MSCHAPv2 - Microsoft Challenge Handshake Authentication Protocol version 2. CHAP - Challenge Handshake Authentication Protocol. Username Enter the user name to be used in the authentication process. Password Enter the password to be used in the authentication process. 10.3.9.1.3. Configuring Protected EAP (PEAP) Settings Anonymous Identity This value is used as the unencrypted identity. CA certificate Click to browse for, and select, a Certificate Authority's certificate. PEAP version The version of Protected EAP to use. Automatic, 0 or 1. Inner authentication MSCHAPv2 - Microsoft Challenge Handshake Authentication Protocol version 2. MD5 - Message Digest 5, a cryptographic hash function. GTC - Generic Token Card. Username Enter the user name to be used in the authentication process. Password Enter the password to be used in the authentication process. 10.3.9.2. Configuring Wireless Security Security None - Do not encrypt the Wi-Fi connection. WEP 40/128-bit Key - Wired Equivalent Privacy (WEP), from the IEEE 802.11 standard. Uses a single pre-shared key (PSK). WEP 128-bit Passphrase - An MD5 hash of the passphrase will be used to derive a WEP key. LEAP - Lightweight Extensible Authentication Protocol, from Cisco Systems. Dynamic WEP (802.1X) - WEP keys are changed dynamically. WPA & WPA2 Personal - Wi-Fi Protected Access (WPA), from the draft IEEE 802.11i standard. A replacement for WEP. Wi-Fi Protected Access II (WPA2), from the 802.11i-2004 standard. Personal mode uses a pre-shared key (WPA-PSK). WPA & WPA2 Enterprise - WPA for use with a RADIUS authentication server to provide IEEE 802.1X network access control. Password Enter the password to be used in the authentication process. Note In the case of WPA and WPA2 (Personal and Enterprise), an option to select between Auto, WPA and WPA2 has been added. This option is intended for use with an access point that is offering both WPA and WPA2. Select one of the protocols if you would like to prevent roaming between the two protocols. Roaming between WPA and WPA2 on the same access point can cause loss of service. Figure 10.16. Editing the Wireless Security tab and selecting the WPA protocol 10.3.9.3. Configuring PPP (Point-to-Point) Settings Configure Methods Use point-to-point encryption (MPPE) Microsoft Point-To-Point Encryption protocol (RFC 3078). Allow BSD data compression PPP BSD Compression Protocol (RFC 1977). Allow Deflate data compression PPP Deflate Protocol (RFC 1979). Use TCP header compression Compressing TCP/IP Headers for Low-Speed Serial Links (RFC 1144). 
Send PPP echo packets LCP Echo-Request and Echo-Reply Codes for loopback tests (RFC 1661). 10.3.9.4. Configuring IPv4 Settings Figure 10.17. Editing the IPv4 Settings Tab The IPv4 Settings tab allows you to configure the method by which you connect to the Internet and enter IP address, route, and DNS information as required. The IPv4 Settings tab is available when you create and modify one of the following connection types: wired, wireless, mobile broadband, VPN or DSL. If you are using DHCP to obtain a dynamic IP address from a DHCP server, you can set Method to Automatic (DHCP) . Setting the Method Available IPv4 Methods by Connection Type When you click the Method dropdown menu, depending on the type of connection you are configuring, you are able to select one of the following IPv4 connection methods. All of the methods are listed here according to which connection type or types they are associated with. Method Automatic (DHCP) - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. You do not need to fill in the DHCP client ID field. Automatic (DHCP) addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. Link-Local Only - Choose this option if the network you are connecting to does not have a DHCP server and you do not want to assign IP addresses manually. Random addresses will be selected as per RFC 3927. Shared to other computers - Choose this option if the interface you are configuring is for sharing an Internet or WAN connection. Wired, Wireless and DSL Connection Methods Manual - Choose this option if the network you are connecting to does not have a DHCP server and you want to assign IP addresses manually. Mobile Broadband Connection Methods Automatic (PPP) - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. Automatic (PPP) addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. VPN Connection Methods Automatic (VPN) - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. Automatic (VPN) addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. DSL Connection Methods Automatic (PPPoE) - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. Automatic (PPPoE) addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. PPPoE Specific Configuration Steps If more than one NIC is installed, and PPPoE will only be run over one NIC but not the other, then for correct PPPoE operation it is also necessary to lock the connection to the specific Ethernet device PPPoE is supposed to be run over. To lock the connection to one specific NIC, do one of the following: Enter the MAC address in nm-connection-editor for that connection. Optionally select Connect automatically and Available to all users to make the connection come up without requiring user login after system start. 
Set the hardware-address in the [802-3-ethernet] section in the appropriate file for that connection in /etc/NetworkManager/system-connections/ as follows: Mere presence of the file in /etc/NetworkManager/system-connections/ means that it is " available to all users " . Ensure that autoconnect=true appears in the [connection] section for the connection to be brought up without requiring user login after system start. For information on configuring static routes for the network connection, go to Section 10.3.9.6, "Configuring Routes" . 10.3.9.5. Configuring IPv6 Settings Method Ignore - Choose this option if you want to disable IPv6 settings. Automatic - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses. Automatic, addresses only - Choose this option if the network you are connecting to uses a DHCP server to assign IP addresses but you want to assign DNS servers manually. Manual - Choose this option if the network you are connecting to does not have a DHCP server and you want to assign IP addresses manually. Link-Local Only - Choose this option if the network you are connecting to does not have a DHCP server and you do not want to assign IP addresses manually. Random addresses will be selected as per RFC 4862. Shared to other computers - Choose this option if the interface you are configuring is for sharing an Internet or WAN connection. Addresses DNS servers - Enter a comma separated list of DNS servers. Search domains - Enter a comma separated list of DNS search domains. For information on configuring static routes for the network connection, go to Section 10.3.9.6, "Configuring Routes" . 10.3.9.6. Configuring Routes A host's routing table will be automatically populated with routes to directly connected networks. The routes are learned by observing the network interfaces when they are " up " . This section is for entering static routes to networks or hosts which can be reached by traversing an intermediate network or connection, such as a VPN or leased line. Figure 10.18. Configuring static network routes Addresses Address - The IP address of a network, sub-net or host. Netmask - The netmask or prefix length of the IP address just entered. Gateway - The IP address of the gateway leading to the network, sub-net or host. Metric - A network cost, that is to say a preference value to give to this route. Lower values will be preferred over higher values. Ignore automatically obtained routes Select this check box to only use manually entered routes for this connection. Use this connection only for resources on its network Select this check box to prevent the connection from becoming the default route. Typical examples are where a connection is a VPN or a leased line to a head office and you do not want any Internet bound traffic to pass over the connection. Selecting this option means that only traffic specifically destined for routes learned automatically over the connection or entered here manually will be routed over the connection.
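Putting the two requirements for locking PPPoE to one NIC together, a connection file in /etc/NetworkManager/system-connections/ might contain at least the following sections. Only the mac-address key and the autoconnect=true key come from this section; the connection name is a placeholder, and other keys that a complete connection profile needs (such as the connection type) are omitted from this sketch.

```
# /etc/NetworkManager/system-connections/dsl-connection   (partial sketch)
[connection]
id=dsl-connection
autoconnect=true

[802-3-ethernet]
mac-address=00:11:22:33:44:55
```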
[ "[802-3-ethernet] mac-address=00:11:22:33:44:55" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Configuring_Connection_Settings
Chapter 10. Configuring certificate mapping rules in Identity Management
Chapter 10. Configuring certificate mapping rules in Identity Management Certificate mapping rules are a convenient way of allowing users to authenticate using certificates in scenarios when the Identity Management (IdM) administrator does not have access to certain users' certificates. This is typically because the certificates have been issued by an external certificate authority. 10.1. Certificate mapping rules for configuring authentication You might need to configure certificate mapping rules in the following scenarios: Certificates have been issued by the Certificate System of the Active Directory (AD) with which the IdM domain is in a trust relationship. Certificates have been issued by an external certificate authority. The IdM environment is large with many users using smart cards. In this case, adding full certificates can be complicated. The subject and issuer are predictable in most scenarios and therefore easier to add ahead of time than the full certificate. As a system administrator, you can create a certificate mapping rule and add certificate mapping data to a user entry even before a certificate is issued to a particular user. Once the certificate is issued, the user can log in using the certificate even though the full certificate has not yet been uploaded to the user entry. In addition, as certificates are renewed at regular intervals, certificate mapping rules reduce administrative overhead. When a user's certificate is renewed, the administrator does not have to update the user entry. For example, if the mapping is based on the Subject and Issuer values, and if the new certificate has the same subject and issuer as the old one, the mapping still applies. If, in contrast, the full certificate was used, then the administrator would have to upload the new certificate to the user entry to replace the old one. To set up certificate mapping: An administrator has to load the certificate mapping data or the full certificate into a user account. An administrator has to create a certificate mapping rule to allow successful logging into IdM for a user whose account contains a certificate mapping data entry that matches the information on the certificate. Once the certificate mapping rules have been created, when the end-user presents the certificate, stored either on a filesystem or a smart card , authentication is successful. Note The Key Distribution Center (KDC) has a cache for certificate mapping rules. The cache is populated on the first certauth request and it has a hard-coded timeout of 300 seconds. KDC will not see any changes to certificate mapping rules unless it is restarted or the cache expires. For details on the individual components that make up a mapping rule and how to obtain and use them, see Components of an identity mapping rule in IdM and Obtaining the issuer from a certificate for use in a matching rule . Note Your certificate mapping rules can depend on the use case for which you are using the certificate. For example, if you are using SSH with certificates, you must have the full certificate to extract the public key from the certificate. 10.2. Components of an identity mapping rule in IdM You configure different components when creating an identity mapping rule in IdM. Each component has a default value that you can override. You can define the components in either the web UI or the CLI. In the CLI, the identity mapping rule is created using the ipa certmaprule-add command. 
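As a quick preview of the command, the sketch below shows roughly what the simple_rule example discussed later in this chapter looks like on the command line. The issuer string and any priority you add are values to adapt to your own CA, and the exact mapping template should be checked against your IdM version; the individual options are explained in the component descriptions that follow.

```
# Sketch of a certificate mapping rule for certificates issued by the Smart Card CA
# of EXAMPLE.ORG, mapping on the certmapdata entries of IdM user accounts.
ipa certmaprule-add simple_rule \
    --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' \
    --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'
```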
Mapping rule The mapping rule component associates (or maps ) a certificate with one or more user accounts. The rule defines an LDAP search filter that associates a certificate with the intended user account. Certificates issued by different certificate authorities (CAs) might have different properties and might be used in different domains. Therefore, IdM does not apply mapping rules unconditionally, but only to the appropriate certificates. The appropriate certificates are defined using matching rules . Note that if you leave the mapping rule option empty, the certificates are searched in the userCertificate attribute as a DER encoded binary file. Define the mapping rule in the CLI using the --maprule option. Matching rule The matching rule component selects a certificate to which you want to apply the mapping rule. The default matching rule matches certificates with the digitalSignature key usage and clientAuth extended key usage. Define the matching rule in the CLI using the --matchrule option. Domain list The domain list specifies the identity domains in which you want IdM to search the users when processing identity mapping rules. If you leave the option unspecified, IdM searches the users only in the local domain to which the IdM client belongs. Define the domain in the CLI using the --domain option. Priority When multiple rules are applicable to a certificate, the rule with the highest priority takes precedence. All other rules are ignored. The lower the numerical value, the higher the priority of the identity mapping rule. For example, a rule with a priority 1 has higher priority than a rule with a priority 2. If a rule has no priority value defined, it has the lowest priority. Define the mapping rule priority in the CLI using the --priority option. Certificate mapping rule example To define, using the CLI, a certificate mapping rule called simple_rule that allows authentication for a certificate issued by the Smart Card CA of the EXAMPLE.ORG organization if the Subject on that certificate matches a certmapdata entry in a user account in IdM: 10.3. Obtaining data from a certificate for use in a matching rule This procedure describes how to obtain data from a certificate so that you can copy and paste it into the matching rule of a certificate mapping rule. To get data required by a matching rule, use the sssctl cert-show or sssctl cert-eval-rule commands. Prerequisites You have the user certificate in PEM format. Procedure Create a variable pointing to your certificate that also ensures it is correctly encoded so you can retrieve the required data. Use the sssctl cert-eval-rule to determine the matching data. In the following example the certificate serial number is used. In this case, add everything after altSecurityIdentities= to the altSecurityIdentities attribute in AD for the user. If using SKI mapping, use --map='LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u})' . Optional: To create a new mapping rule in the CLI based on a matching rule which specifies that the certificate issuer must match adcs19-WIN1-CA of the ad.example.com domain and the serial number of the certificate must match the altSecurityIdentities entry in a user account: 10.4. 
Configuring certificate mapping for users stored in IdM To enable certificate mapping in IdM if the user for whom certificate authentication is being configured is stored in IdM, a system administrator must complete the following tasks: Set up a certificate mapping rule so that IdM users with certificates that match the conditions specified in the mapping rule and in their certificate mapping data entries can authenticate to IdM. Enter certificate mapping data to an IdM user entry so that the user can authenticate using multiple certificates provided that they all contain the values specified in the certificate mapping data entry. Prerequisites The user has an account in IdM. The administrator has either the whole certificate or the certificate mapping data to add to the user entry. 10.4.1. Adding a certificate mapping rule in the IdM web UI Log in to the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 10.1. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. For example, to make IdM search for the Issuer and Subject entries in any certificate presented to them, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. For example, to only allow certificates issued by the Smart Card CA of the EXAMPLE.ORG organization to authenticate users to IdM: Figure 10.2. Entering the details for a certificate mapping rule in the IdM web UI Click Add at the bottom of the dialog box to add the rule and close the box. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 10.4.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make IdM search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate, recognizing only certificates issued by the Smart Card CA of the EXAMPLE.ORG organization: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: Now you have a certificate mapping rule set up that compares the type of data specified in the mapping rule that it finds on a smart card certificate with the certificate mapping data in your IdM user entries. Once it finds a match, it authenticates the matching user. 10.4.3. Adding certificate mapping data to a user entry in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Users Active users idm_user . Find the Certificate mapping data option and click Add . Choose one of the following options: If you have the certificate of idm_user : On the command line, display the certificate using the cat utility or a text editor: Copy the certificate. In the IdM web UI, click Add to Certificate and paste the certificate into the window that opens up. Figure 10.3. 
Adding a user's certificate mapping data: certificate If you do not have the certificate of idm_user at your disposal but know the Issuer and the Subject of the certificate, check the radio button of Issuer and subject and enter the values in the two respective boxes. Figure 10.4. Adding a user's certificate mapping data: issuer and subject Click Add . Verification If you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of idm_user in the SSSD cache and force a reload of the idm_user information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to idm_user and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as idm_user . 10.4.4. Adding certificate mapping data to a user entry in the IdM CLI Obtain the administrator's credentials: Choose one of the following options: If you have the certificate of idm_user , add the certificate to the user account using the ipa user-add-cert command: If you do not have the certificate of idm_user but know the Issuer and the Subject of the user's certificate: Verification If you have access to the whole certificate in the .pem format, verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of idm_user in the SSSD cache and force a reload of the idm_user information: Run the ipa certmap-match command with the name of the file containing the certificate of the IdM user: The output confirms that now you have certificate mapping data added to idm_user and that a corresponding mapping rule exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as idm_user . 10.5. Certificate mapping rules for trusts with Active Directory domains Different certificate mapping use cases are possible if an IdM deployment is in a trust relationship with an Active Directory (AD) domain. Depending on the AD configuration, the following scenarios are possible: If the certificate is issued by AD Certificate System but the user and the certificate are stored in IdM, the mapping and the whole processing of the authentication request takes place on the IdM side. For details of configuring this scenario, see Configuring certificate mapping for users stored in IdM If the user is stored in AD, the processing of the authentication request takes place in AD. There are three different subcases: The AD user entry contains the whole certificate. For details how to configure IdM in this scenario, see Configuring certificate mapping for users whose AD user entry contains the whole certificate . AD is configured to map user certificates to user accounts. In this case, the AD user entry does not contain the whole certificate but instead contains an attribute called altSecurityIdentities . For details how to configure IdM in this scenario, see Configuring certificate mapping if AD is configured to map user certificates to user accounts . The AD user entry contains neither the whole certificate nor the mapping data. 
In this case, there are two options: If the user certificate is issued by AD Certificate System, the certificate either contains the user principal name as the Subject Alternative Name (SAN) or, if the latest updates are applied to AD, the SID of the user in the SID extension of the certificate. Both of these can be used to map the certificate to the user. If the user certificate is on a smart card, to enable SSH with smart cards, SSSD must derive the public SSH key from the certificate and therefore the full certificate is required. The only solution is to use the ipa idoverrideuser-add command to add the whole certificate to the AD user's ID override in IdM. For details, see Configuring certificate mapping if AD user entry contains no certificate or mapping data . AD domain administrators can manually map certificates to a user in AD using the altSecurityIdentities attribute. There are six supported values for this attribute, though three mappings are considered insecure. As part of the May 10, 2022 security update, once it is installed, all devices are in compatibility mode and if a certificate is weakly mapped to a user, authentication occurs as expected. However, warning messages are logged identifying any certificates that are not compatible with full enforcement mode. As of November 14, 2023 or later, all devices will be updated to full enforcement mode and if a certificate fails the strong mapping criteria, authentication will be denied. For example, when an AD user requests an IdM Kerberos ticket with a certificate (PKINIT), AD needs to map the certificate to a user internally and uses the new mapping rules for this. However, in IdM, the rules continue to work if IdM is used to map a certificate to a user on an IdM client. IdM supports the new mapping templates, making it easier for an AD administrator to use the new rules and not maintain both. The new mapping templates added to Active Directory include: Serial Number: LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur}) Subject Key Id: LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u}) User SID: LDAPU1:(objectsid={sid}) If you do not want to reissue certificates with the new SID extension, you can create a manual mapping by adding the appropriate mapping string to a user's altSecurityIdentities attribute in AD. 10.6. Configuring certificate mapping for users whose AD user entry contains the whole certificate This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains the whole certificate. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains a certificate. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. Note To ensure PKINIT works for a user, one of the following conditions must apply: The certificate in the user entry includes the user principal name or the SID extension for the user. The user entry in AD has a suitable entry in the altSecurityIdentities attribute. 10.6.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 10.5. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule.
To have the whole certificate that is presented to IdM for authentication compared to what is available in AD: Note If you are mapping using the full certificate and you renew the certificate, you must ensure that you add the new certificate to the AD user object. Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Figure 10.6. Certificate mapping rule for a user with a certificate stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 10.6.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. To have the whole certificate that is presented for authentication compared to what is available in AD, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Note If you are mapping using the full certificate and you renew the certificate, you must ensure that you add the new certificate to the AD user object. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 10.7. Configuring certificate mapping if AD is configured to map user certificates to user accounts This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD, and the user entry in AD contains certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has access to data on which the IdM certificate mapping rule can be based. 10.7.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 10.7. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. For example, to make AD DC search for the Issuer and Subject entries in any certificate presented, and base its decision to authenticate or not on the information found in these two entries of the presented certificate: Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate users to IdM: Enter the domain: Figure 10.8. Certificate mapping rule if AD is configured for mapping Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 10.7.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. For example, to make AD search for the Issuer and Subject entries in any certificate presented, and only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain: The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 10.7.3.
Checking certificate mapping data on the AD side The altSecurityIdentities attribute is the Active Directory (AD) equivalent of certmapdata user attribute in IdM. When configuring certificate mapping in IdM in the scenario when a trusted AD domain is configured to map user certificates to user accounts, the IdM system administrator needs to check that the altSecurityIdentities attribute is set correctly in the user entries in AD. Prerequisites The user account must have user administration access. Procedure To check that AD contains the right information for the user stored in AD, use the ldapsearch command. For example, enter the command below to check with the adserver.ad.example.com server that the following conditions apply: The altSecurityIdentities attribute is set in the user entry of ad_user . The matchrule stipulates that the following conditions apply: The certificate that ad_user uses to authenticate to AD was issued by AD-ROOT-CA of the ad.example.com domain. The subject is <S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user : 10.8. Configuring certificate mapping if AD user entry contains no certificate or mapping data This user story describes the steps necessary for enabling certificate mapping in IdM if the IdM deployment is in trust with Active Directory (AD), the user is stored in AD and the user entry in AD contains neither the whole certificate nor certificate mapping data. Prerequisites The user does not have an account in IdM. The user has an account in AD which contains neither the whole certificate nor the altSecurityIdentities attribute, the AD equivalent of the IdM certmapdata attribute. The IdM administrator has done one of the following: Added the whole AD user certificate to the AD user's user ID override in IdM. Created a certificate mapping rule that maps to an alternative field in the certificate, such as Subject Alternative Name or the SID of the user. 10.8.1. Adding a certificate mapping rule in the IdM web UI Log into the IdM web UI as an administrator. Navigate to Authentication Certificate Identity Mapping Rules Certificate Identity Mapping Rules . Click Add . Figure 10.9. Adding a new certificate mapping rule in the IdM web UI Enter the rule name. Enter the mapping rule. To have the whole certificate that is presented to IdM for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM: Note As the certificate also contains the user principal name as the SAN, or with the latest updates, the SID of the user in the SID extension of the certificate, you can also use these fields to map the certificate to the user. For example, if using the SID of the user, replace this mapping rule with LDAPU1:(objectsid={sid}) . For more information on certificate mapping, see the sss-certmap man page on your system. Enter the matching rule. For example, to only allow certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Enter the domain name. For example, to search for users in the ad.example.com domain: Figure 10.10. Certificate mapping rule for a user with no certificate or mapping data stored in AD Click Add . The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD in the CLI: 10.8.2. Adding a certificate mapping rule in the IdM CLI Obtain the administrator's credentials: Enter the mapping rule and the matching rule the mapping rule is based on. 
To have the whole certificate that is presented for authentication compared to the certificate stored in the user ID override entry of the AD user entry in IdM, only allowing certificates issued by the AD-ROOT-CA of the AD.EXAMPLE.COM domain to authenticate: Note As the certificate also contains the user principal name as the SAN, or with the latest updates, the SID of the user in the SID extension of the certificate, you can also use these fields to map the certificate to the user. For example, if using the SID of the user, replace this mapping rule with LDAPU1:(objectsid={sid}) . For more information on certificate mapping, see the sss-certmap man page on your system. The System Security Services Daemon (SSSD) periodically re-reads the certificate mapping rules. To force the newly-created rule to be loaded immediately, restart SSSD: 10.8.3. Adding a certificate to an AD user's ID override in the IdM web UI Navigate to Identity ID Views Default Trust View . Click Add . Figure 10.11. Adding a new user ID override in the IdM web UI In the User to override field, enter [email protected] . Copy and paste the certificate of ad_user into the Certificate field. Figure 10.12. Configuring the User ID override for an AD user Click Add . Verification Verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of [email protected] in the SSSD cache and force a reload of the [email protected] information: Run the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule defined in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . Additional resources Using ID views for Active Directory users 10.8.4. Adding a certificate to an AD user's ID override in the IdM CLI Obtain the administrator's credentials: Store the certificate blob in a new variable called CERT : Add the certificate of [email protected] to the user account using the ipa idoverrideuser-add-cert command: Verification Verify that the user and certificate are linked: Use the sss_cache utility to invalidate the record of [email protected] in the SSSD cache and force a reload of the [email protected] information: Run the ipa certmap-match command with the name of the file containing the certificate of the AD user: The output confirms that you have certificate mapping data added to [email protected] and that a corresponding mapping rule defined in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data exists. This means that you can use any certificate that matches the defined certificate mapping data to authenticate as [email protected] . Additional resources Using ID views for Active Directory users 10.9. 
Combining several identity mapping rules into one To combine several identity mapping rules into one combined rule, use the | (or) character to precede the individual mapping rules, and separate them using () brackets, for example: Certificate mapping filter example 1 In the above example, the filter definition in the --maprule option includes these criteria: ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Adding a certificate mapping rule in IdM altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Adding a certificate mapping rule if the trusted AD domain is configured to map user certificates The addition of the --domain=ad.example.com option means that users mapped to a given certificate are not only searched in the local idm.example.com domain but also in the ad.example.com domain The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. Certificate mapping filter example 2 In the above example, the filter definition in the --maprule option includes these criteria: userCertificate;binary={cert!bin} is a filter that returns user entries that include the whole certificate. For AD users, creating this type of filter is described in detail in Adding a certificate mapping rule if the AD user entry contains no certificate or mapping data . ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the ipacertmapdata attribute in an IdM user account, as described in Adding a certificate mapping rule in IdM . altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500} is a filter that links the subject and issuer from a smart card certificate to the value of the altSecurityIdentities attribute in an AD user account, as described in Adding a certificate mapping rule if the trusted AD domain is configured to map user certificates . The filter definition in the --maprule option accepts the logical operator | (or), so that you can specify multiple criteria. In this case, the rule maps all user accounts that meet at least one of the criteria. 10.10. Additional resources sss-certmap(5) man page on your system
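Before adding a combined rule such as the examples above, it can be useful to preview how SSSD would evaluate it against a real certificate. The following command sequence is a minimal sketch rather than part of the official procedure: it assumes a PEM-encoded certificate at ./cert.pem (an illustrative path) and reuses the matching rule and the combined mapping filter from Certificate mapping filter example 1, together with the sssctl cert-eval-rule options shown earlier in Obtaining data from a certificate for use in a matching rule.
# Encode the certificate the same way as in the matching-rule procedure.
# The path ./cert.pem is an assumption for illustration.
CERT=$(openssl x509 -in ./cert.pem -outform der | base64 -w0)
# Preview the combined rule: sssctl prints whether the certificate matches
# and the LDAP mapping filter that would be used to search for the user.
sssctl cert-eval-rule "$CERT" \
  --match='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' \
  --map='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))'
If the printed filter looks correct, create the rule with ipa certmaprule-add as shown in the examples above.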
[ "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'", "CERT=USD(openssl x509 -in /path/to/certificate -outform der|base64 -w0)", "sssctl cert-eval-rule USDCERT --match='<ISSUER>CN=adcs19-WIN1-CA,DC=AD,DC=EXAMPLE,DC=COM' --map='LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur})' Certificate matches rule. Mapping filter: (altSecurityIdentities=X509:<I>DC=com,DC=example,DC=ad,CN=adcs19-WIN1-CA<SR>0F0000000000DB8852DD7B246C9C0F0000003B)", "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=adcs19-WIN1-CA,DC=AD,DC=EXAMPLE,DC=COM' --maprule 'LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur})'", "(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})", "<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add rule_name --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})' ------------------------------------------------------- Added Certificate Identity Mapping Rule \"rule_name\" ------------------------------------------------------- Rule name: rule_name Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500}) Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG Enabled: TRUE", "systemctl restart sssd", "cat idm_user_certificate.pem -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgIBEjANBgkqhkiG9w0BAQsFADA6MRgwFgYDVQQKDA9JRE0u RVhBTVBMRS5DT00xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0x ODA5MDIxODE1MzlaFw0yMDA5MDIxODE1MzlaMCwxGDAWBgNVBAoMD0lETS5FWEFN [...output truncated...]", "sss_cache -u idm_user", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=USD(openssl x509 -in idm_user_cert.pem -outform der|base64 -w0) ipa user-add-certmapdata idm_user --certificate USDCERT", "ipa user-add-certmapdata idm_user --subject \"O=EXAMPLE.ORG,CN=test\" --issuer \"CN=Smart Card CA,O=EXAMPLE.ORG\" -------------------------------------------- Added certificate mappings to user \"idm_user\" -------------------------------------------- User login: idm_user Certificate mapping data: X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG", "sss_cache -u idm_user", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", 
"ad.example.com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add ad_configured_for_mapping_rule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})' --domain=ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"ad_configured_for_mapping_rule\" ------------------------------------------------------- Rule name: ad_configured_for_mapping_rule Mapping rule: (altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "ldapsearch -o ldif-wrap=no -LLL -h adserver.ad.example.com -p 389 -D cn=Administrator,cn=users,dc=ad,dc=example,dc=com -W -b cn=users,dc=ad,dc=example,dc=com \"(cn=ad_user)\" altSecurityIdentities Enter LDAP Password: dn: CN=ad_user,CN=Users,DC=ad,DC=example,DC=com altSecurityIdentities: X509:<I>DC=com,DC=example,DC=ad,CN=AD-ROOT-CA<S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=USD(openssl x509 -in /path/to/certificate -outform der|base64 -w0)", "ipa idoverrideuser-add-cert [email protected] --certificate USDCERT", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "ipa certmaprule-add ad_cert_for_ipa_and_ad_users --maprule='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --domain=ad.example.com", "ipa certmaprule-add ipa_cert_for_ad_users --maprule='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --domain=idm.example.com --domain=ad.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/conf-certmap-idm_managing-certificates-in-idm
3.2. Backups and Migration
3.2. Backups and Migration 3.2.1. Backing Up and Restoring the Red Hat Virtualization Manager 3.2.1.1. Backing up Red Hat Virtualization Manager - Overview Use the engine-backup tool to take regular backups of the Red Hat Virtualization Manager. The tool backs up the engine database and configuration files into a single file and can be run without interrupting the ovirt-engine service. 3.2.1.2. Syntax for the engine-backup Command The engine-backup command works in one of two basic modes: # engine-backup --mode=backup # engine-backup --mode=restore These two modes are further extended by a set of options that allow you to specify the scope of the backup and different credentials for the engine database. Run engine-backup --help for a full list of options and their function. Basic Options --mode Specifies whether the command performs a backup operation or a restore operation. The available options are: backup (set by default), restore , and verify . You must define the mode option for verify or restore operations. --file Specifies the path and name of a file (for example, file_name .backup) into which backups are saved in backup mode, and to be read as backup data in restore mode. The path is defined by default as /var/lib/ovirt-engine-backup/ . --log Specifies the path and name of a file (for example, log_file_name ) into which logs of the backup or restore operation are written. The path is defined by default as /var/log/ovirt-engine-backup/ . --scope Specifies the scope of the backup or restore operation. There are four options: all , to back up or restore all databases and configuration data (set by default); files , to back up or restore only files on the system; db , to back up or restore only the Manager database; and dwhdb , to back up or restore only the Data Warehouse database. The --scope option can be specified multiple times in the same engine-backup command. Manager Database Options The following options are only available when using the engine-backup command in restore mode. The option syntax below applies to restoring the Manager database. The same options exist for restoring the Data Warehouse database. See engine-backup --help for the Data Warehouse option syntax. --provision-db Creates a PostgreSQL database for the Manager database backup to be restored to. This is a required option when restoring a backup on a remote host or fresh installation that does not have a PostgreSQL database already configured. When this option is used in restore mode, the --restore-permissions option is added by default. --provision-all-databases Creates databases for all database dumps included in the archive. When enabled, this is the default. --change-db-credentials Allows you to specify alternate credentials for restoring the Manager database using credentials other than those stored in the backup itself. See engine-backup --help for the additional parameters required by this option. --restore-permissions or --no-restore-permissions Restores or does not restore the permissions of database users. One of these options is required when restoring a backup. When the --provision-* option is used in restore mode, --restore-permissions is applied by default. Note If a backup contains grants for extra database users, restoring the backup with the --restore-permissions and --provision-db (or --provision-dwh-db ) options creates the extra users with random passwords. You must change these passwords manually if the extra users require access to the restored system. See How to grant access to an extra database user after restoring Red Hat Virtualization from a backup .
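As a quick illustration of how the options above combine, the following sketch creates a full backup and then restores that file on a fresh machine that has no PostgreSQL database configured. The file and log names are illustrative only; run the commands as root.
# Full backup (equivalent to the defaults --mode=backup --scope=all):
engine-backup --mode=backup \
  --file=/var/lib/ovirt-engine-backup/full-manager.backup \
  --log=/var/log/ovirt-engine-backup/full-manager-backup.log
# Restore the same file on a fresh machine, provisioning the Manager
# database; --restore-permissions is then applied by default. Add
# --provision-dwh-db if the backup also contains the Data Warehouse database.
engine-backup --mode=restore \
  --file=/var/lib/ovirt-engine-backup/full-manager.backup \
  --log=/var/log/ovirt-engine-backup/full-manager-restore.log \
  --provision-db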
3.2.1.3. Creating a backup with the engine-backup command You can back up the Red Hat Virtualization Manager with the engine-backup command while the Manager is active. Append one of the following values to the --scope option to specify what you want to back up: all A full backup of all databases and configuration files on the Manager. This is the default setting for the --scope option. files A backup of only the files on the system db A backup of only the Manager database dwhdb A backup of only the Data Warehouse database cinderlibdb A backup of only the Cinderlib database grafanadb A backup of only the Grafana database You can specify the --scope option more than once. You can also configure the engine-backup command to back up additional files. It restores everything that it backs up. Important To restore a database to a fresh installation of Red Hat Virtualization Manager, a database backup alone is not sufficient. The Manager also requires access to the configuration files. If you specify a scope other than all , you must also include --scope=files , or back up the file system. For a complete explanation of the engine-backup command, enter engine-backup --help on the Manager machine. Procedure Log on to the Manager machine. Create a backup: # engine-backup The following settings are applied by default: --scope=all --mode=backup The command generates the backup in /var/lib/ovirt-engine-backup/ file_name .backup , and a log file in /var/log/ovirt-engine-backup/ log_file_name . Use file_name .backup to restore the environment. The following examples demonstrate several different backup scenarios. Example 3.1. Full backup # engine-backup Example 3.2. Manager database backup # engine-backup --scope=files --scope=db Example 3.3. Data Warehouse database backup # engine-backup --scope=files --scope=dwhdb Example 3.4. Adding specific files to the backup Make a directory to store configuration customizations for the engine-backup command: # mkdir -p /etc/ovirt-engine-backup/engine-backup-config.d Create a text file in the new directory named ntp-chrony.sh with the following contents: BACKUP_PATHS="${BACKUP_PATHS} /etc/chrony.conf /etc/ntp.conf /etc/ovirt-engine-backup" When you run the engine-backup command, use --scope=files . The backup and restore include /etc/chrony.conf , /etc/ntp.conf , and /etc/ovirt-engine-backup . 3.2.1.4. Restoring a Backup with the engine-backup Command Restoring a backup using the engine-backup command involves more steps than creating a backup does, depending on the restoration destination. For example, the engine-backup command can be used to restore backups to fresh installations of Red Hat Virtualization, on top of existing installations of Red Hat Virtualization, and using local or remote databases. Important The version of the Red Hat Virtualization Manager (such as 4.4.8) used to restore a backup must be later than or equal to the Red Hat Virtualization Manager version (such as 4.4.7) used to create the backup. Starting with Red Hat Virtualization 4.4.7, this policy is strictly enforced by the engine-backup command. To view the version of Red Hat Virtualization contained in a backup file, unpack the backup file and read the value in the version file located in the root directory of the unpacked files. 3.2.1.5.
Restoring a Backup to a Fresh Installation The engine-backup command can be used to restore a backup to a fresh installation of the Red Hat Virtualization Manager. The following procedure must be performed on a machine on which the base operating system has been installed and the required packages for the Red Hat Virtualization Manager have been installed, but the engine-setup command has not yet been run. This procedure assumes that the backup file or files can be accessed from the machine on which the backup is to be restored. Procedure Log on to the Manager machine. If you are restoring the engine database to a remote host, you will need to log on to and perform the relevant actions on that host. Likewise, if also restoring the Data Warehouse to a remote host, you will need to log on to and perform the relevant actions on that host. Restore a complete backup or a database-only backup. Restore a complete backup: # engine-backup --mode=restore --file= file_name --log= log_file_name --provision-db When the --provision-* option is used in restore mode, --restore-permissions is applied by default. If Data Warehouse is also being restored as part of the complete backup, provision the additional database: engine-backup --mode=restore --file= file_name --log= log_file_name --provision-db --provision-dwh-db Restore a database-only backup by restoring the configuration files and database backup: # engine-backup --mode=restore --scope=files --scope=db --file= file_name --log= log_file_name --provision-db The example above restores a backup of the Manager database. # engine-backup --mode=restore --scope=files --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db The example above restores a backup of the Data Warehouse database. If successful, the following output displays: You should now run engine-setup. Done. Run the following command and follow the prompts to configure the restored Manager: # engine-setup The Red Hat Virtualization Manager has been restored to the version preserved in the backup. To change the fully qualified domain name of the new Red Hat Virtualization system, see The oVirt Engine Rename Tool . 3.2.1.6. Restoring a Backup to Overwrite an Existing Installation The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup. Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes. Procedure Log in to the Manager machine. Remove the configuration files and clean the database associated with the Manager: # engine-cleanup The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist. 
Restore a full backup: # engine-backup --mode=restore --file= file_name --log= log_file_name --restore-permissions Restore a database-only backup by restoring the configuration files and the database backup: # engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file= file_name --log= log_file_name --restore-permissions Note To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter. If successful, the following output displays: You should now run engine-setup. Done. Reconfigure the Manager: # engine-setup 3.2.1.7. Restoring a Backup with Different Credentials The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up, but the credentials of the database in the backup are different to those of the database on the machine on which the backup is to be restored. This is useful when you have taken a backup of an installation and want to restore the installation from the backup to a different system. Important When restoring a backup to overwrite an existing installation, you must run the engine-cleanup command to clean up the existing installation before using the engine-backup command. The engine-cleanup command only cleans the engine database, and does not drop the database or delete the user that owns that database. So you do not need to create a new database or specify the database credentials. However, if the credentials for the owner of the engine database are not known, you must change them before you can restore the backup. Procedure Log in to the Red Hat Virtualization Manager machine. Run the following command and follow the prompts to remove the Manager's configuration files and to clean the Manager's database: # engine-cleanup Change the password for the owner of the engine database if the credentials of that user are not known: Enter the postgresql command line: Change the password of the user that owns the engine database: postgres=# alter role user_name encrypted password ' new_password '; Repeat this for the user that owns the ovirt_engine_history database if necessary. Restore a complete backup or a database-only backup with the --change-db-credentials parameter to pass the credentials of the new database. The database_location for a database local to the Manager is localhost . Note The following examples use a --*password option for each database without specifying a password, which prompts for a password for each database. Alternatively, you can use --*passfile= password_file options for each database to securely pass the passwords to the engine-backup tool without the need for interactive prompts. 
Restore a complete backup: # engine-backup --mode=restore --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --no-restore-permissions If Data Warehouse is also being restored as part of the complete backup, include the revised credentials for the additional database: engine-backup --mode=restore --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host= database_location --dwh-db-name= database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions Restore a database-only backup by restoring the configuration files and the database backup: # engine-backup --mode=restore --scope=files --scope=db --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --no-restore-permissions The example above restores a backup of the Manager database. # engine-backup --mode=restore --scope=files --scope=dwhdb --file= file_name --log= log_file_name --change-dwh-db-credentials --dwh-db-host= database_location --dwh-db-name= database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions The example above restores a backup of the Data Warehouse database. If successful, the following output displays: You should now run engine-setup. Done. Run the following command and follow the prompts to reconfigure the firewall and ensure the ovirt-engine service is correctly configured: # engine-setup 3.2.1.8. Backing up and Restoring a Self-Hosted Engine You can back up a self-hosted engine and restore it in a new self-hosted environment. Use this procedure for tasks such as migrating the environment to a new self-hosted engine storage domain with a different storage type. When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment. The backup and restore operation involves the following key actions: Back up the original Manager using the engine-backup tool. Deploy a new self-hosted engine and restore the backup. Enable the Manager repositories on the new Manager virtual machine. Reinstall the self-hosted engine nodes to update their configuration. Remove the old self-hosted engine storage domain. This procedure assumes that you have access and can make changes to the original Manager. Prerequisites A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager. The original Manager must be updated to the latest minor version. 
The version of the Red Hat Virtualization Manager (such as 4.4.8) used to restore a backup must be later than or equal to the Red Hat Virtualization Manager version (such as 4.4.7) used to create the backup. Starting with Red Hat Virtualization 4.4.7, this policy is strictly enforced by the engine-backup command. See Updating the Red Hat Virtualization Manager in the Upgrade Guide . Note If you need to restore a backup, but do not have a new appliance, the restore process will pause, and you can log into the temporary Manager machine via SSH, register, subscribe, or configure channels as needed, and upgrade the Manager packages before resuming the restore process. The data center compatibility level must be set to the latest version to ensure compatibility with the updated storage version. There must be at least one regular host in the environment. This host (and any other regular hosts) will remain active to host the SPM role and any running virtual machines. If a regular host is not already the SPM, move the SPM role before creating the backup by selecting a regular host and clicking Management Select as SPM . If no regular hosts are available, there are two ways to add one: Remove the self-hosted engine configuration from a node (but do not remove the node from the environment). See Removing a Host from a Self-Hosted Engine Environment . Add a new regular host. See Adding standard hosts to the Manager host tasks . 3.2.1.8.1. Backing up the Original Manager Back up the original Manager using the engine-backup command, and copy the backup file to a separate location so that it can be accessed at any point during the process. For more information about engine-backup --mode=backup options, see Backing Up and Restoring the Red Hat Virtualization Manager in the Administration Guide . Procedure Log in to one of the self-hosted engine nodes and move the environment to global maintenance mode: Log in to the original Manager and stop the ovirt-engine service: Note Though stopping the original Manager from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Manager and the new Manager from simultaneously managing existing resources. Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log: # engine-backup --mode=backup --file= file_name --log= log_file_name Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path. # scp -p file_name log_file_name storage.example.com:/backup/ If you do not require the Manager machine for other purposes, unregister it from Red Hat Subscription Manager: # subscription-manager unregister Log in to one of the self-hosted engine nodes and shut down the original Manager virtual machine: # hosted-engine --vm-shutdown After backing up the Manager, deploy a new self-hosted engine and restore the backup on the new virtual machine. 3.2.1.8.2. Restoring the Backup on a New Self-Hosted Engine Run the hosted-engine script on a new host, and use the --restore-from-file= path/to/file_name option to restore the Manager backup during the deployment. 
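The command sequence below is a condensed sketch of the copy, version check, and deployment steps that the following procedure walks through in detail. It reuses the placeholders host.example.com, /backup/, and file_name from those steps; unpacking the backup with tar to read the version file is an assumption based on the version prerequisite above, and the iSCSI caveat in the Important note below still applies before you start the deployment.
# Copy the backup from its storage location to the new host:
scp -p file_name host.example.com:/backup/
# On the new host, optionally confirm which RHV version created the backup
# (the version file is in the root of the unpacked archive):
mkdir -p /tmp/backup-check
tar -xf /backup/file_name -C /tmp/backup-check
cat /tmp/backup-check/version
# Install the deployment tooling (already present on RHVH), then run the
# deployment inside tmux so a dropped session does not abort it:
dnf install -y ovirt-hosted-engine-setup tmux
tmux
hosted-engine --deploy --restore-from-file=/backup/file_name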
Important If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator's ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment: If you are redeploying on an existing host, you must update the host's iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi . The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable. If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host. Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target). Procedure Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path. # scp -p file_name host.example.com:/backup/ Log in to the new host. If you are deploying on Red Hat Virtualization Host, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Red Hat Enterprise Linux, install the ovirt-hosted-engine-setup package: # dnf install ovirt-hosted-engine-setup Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run tmux : # dnf -y install tmux # tmux Run the hosted-engine script, specifying the path to the backup file: # hosted-engine --deploy --restore-from-file=backup/ file_name To escape the script at any time, use CTRL + D to abort deployment. Select Yes to begin the deployment. Configure the network. The script detects possible NICs to use as a management bridge for the environment. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance. Enter the root password for the Manager. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user. Enter the virtual machine's CPU and memory configuration. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you. Enter the virtual machine's networking details. If you specify Static , enter the IP address of the Manager. Important The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine's IP must be in the same subnet range (10.1.1.1-254/24). Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications: Enter a password for the admin@internal user to access the Administration Portal. The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed. 
Note If the host becomes non operational, due to a missing required network or a similar problem, the deployment pauses and a message such as the following is displayed: Pausing the process allows you to: Connect to the Administration Portal using the provided URL. Assess the situation, find out why the host is non operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks. Once everything looks OK, and the host status is Up , remove the lock file presented in the message above. The deployment continues. Select the type of storage to use: For NFS, enter the version, full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group. Note To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options. For Gluster storage, enter the full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. Important Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows: gluster volume set VOLUME_NAME group virt gluster volume set VOLUME_NAME performance.strict-o-direct on gluster volume set VOLUME_NAME network.remote-dio off gluster volume set VOLUME_NAME storage.owner-uid 36 gluster volume set VOLUME_NAME storage.owner-gid 36 gluster volume set VOLUME_NAME network.ping-timeout 30 For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide . Enter the Manager disk size. The script continues until the deployment is complete. The deployment process changes the Manager's SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager's entry from the .ssh/known_hosts file on any client machines that accessed the original Manager. When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories. 3.2.1.8.3. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. 
Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node. 3.2.1.8.4. Reinstalling Hosts Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Prerequisites If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low. Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance. Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks. Procedure Click Compute Hosts and select the host. Click Management Maintenance and OK . Click Installation Reinstall . This opens the Install Host window. Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK to reinstall the host. After a host has been reinstalled and its status returns to Up , you can migrate virtual machines back to the host. Important After you register a Red Hat Virtualization Host to the Red Hat Virtualization Manager and reinstall it, the Administration Portal may erroneously display its status as Install Failed . Click Management Activate , and the host will change to an Up status and be ready for use. 
After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes: During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain. 3.2.1.8.5. Removing a Storage Domain You have a storage domain in your data center that you want to remove from the virtualized environment. Procedure Click Storage Domains . Move the storage domain to maintenance mode and detach it: Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain. Click OK . The storage domain is permanently removed from the environment. 3.2.1.9. Recovering a Self-Hosted Engine from an Existing Backup If a self-hosted engine is unavailable due to problems that cannot be repaired, you can restore it in a new self-hosted environment using a backup taken before the problem began, if one is available. When you specify a backup file during deployment, the backup is restored on a new Manager virtual machine, with a new self-hosted engine storage domain. The old Manager is removed, and the old self-hosted engine storage domain is renamed and can be manually removed after you confirm that the new environment is working correctly. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment. Restoring a self-hosted engine involves the following key actions: Deploy a new self-hosted engine and restore the backup. Enable the Manager repositories on the new Manager virtual machine. Reinstall the self-hosted engine nodes to update their configuration. Remove the old self-hosted engine storage domain. This procedure assumes that you do not have access to the original Manager, and that the new host can access the backup file. Prerequisites A fully qualified domain name prepared for your Manager and the host. Forward and reverse lookup records must both be set in the DNS. The new Manager must have the same fully qualified domain name as the original Manager. 3.2.1.9.1. Restoring the Backup on a New Self-Hosted Engine Run the hosted-engine script on a new host, and use the --restore-from-file= path/to/file_name option to restore the Manager backup during the deployment. Important If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator's ACL, the deployment may fail with a STORAGE_DOMAIN_UNREACHABLE error. To prevent this, you must update your iSCSI configuration before beginning the self-hosted engine deployment: If you are redeploying on an existing host, you must update the host's iSCSI initiator settings in /etc/iscsi/initiatorname.iscsi . The initiator IQN must be the same as was previously mapped on the iSCSI target, or updated to a new IQN, if applicable. 
If you are deploying on a fresh host, you must update the iSCSI target configuration to accept connections from that host. Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target). Procedure Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path. # scp -p file_name host.example.com:/backup/ Log in to the new host. If you are deploying on Red Hat Virtualization Host, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Red Hat Enterprise Linux, install the ovirt-hosted-engine-setup package: # dnf install ovirt-hosted-engine-setup Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption. Install and run tmux : # dnf -y install tmux # tmux Run the hosted-engine script, specifying the path to the backup file: # hosted-engine --deploy --restore-from-file=backup/ file_name To escape the script at any time, use CTRL + D to abort deployment. Select Yes to begin the deployment. Configure the network. The script detects possible NICs to use as a management bridge for the environment. If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the RHV-M Appliance. Enter the root password for the Manager. Enter an SSH public key that will allow you to log in to the Manager as the root user, and specify whether to enable SSH access for the root user. Enter the virtual machine's CPU and memory configuration. Enter a MAC address for the Manager virtual machine, or accept a randomly generated one. If you want to provide the Manager virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you. Enter the virtual machine's networking details. If you specify Static , enter the IP address of the Manager. Important The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Manager virtual machine's IP must be in the same subnet range (10.1.1.1-254/24). Specify whether to add entries for the Manager virtual machine and the base host to the virtual machine's /etc/hosts file. You must ensure that the host names are resolvable. Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications: Enter a password for the admin@internal user to access the Administration Portal. The script creates the virtual machine. This can take some time if the RHV-M Appliance needs to be installed. Note If the host becomes non operational, due to a missing required network or a similar problem, the deployment pauses and a message such as the following is displayed: Pausing the process allows you to: Connect to the Administration Portal using the provided URL. Assess the situation, find out why the host is non operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks. Once everything looks OK, and the host status is Up , remove the lock file presented in the message above. The deployment continues. 
Select the type of storage to use: For NFS, enter the version, full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group. Note To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Red Hat Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options. For Gluster storage, enter the full address and path to the storage, and any mount options. Warning Do not use the old self-hosted engine storage domain's mount point for the new storage domain, as you risk losing virtual machine data. Important Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows: gluster volume set VOLUME_NAME group virt gluster volume set VOLUME_NAME performance.strict-o-direct on gluster volume set VOLUME_NAME network.remote-dio off gluster volume set VOLUME_NAME storage.owner-uid 36 gluster volume set VOLUME_NAME storage.owner-gid 36 gluster volume set VOLUME_NAME network.ping-timeout 30 For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide . Enter the Manager disk size. The script continues until the deployment is complete. The deployment process changes the Manager's SSH keys. To allow client machines to access the new Manager without SSH errors, remove the original Manager's entry from the .ssh/known_hosts file on any client machines that accessed the original Manager. When the deployment is complete, log in to the new Manager virtual machine and enable the required repositories. 3.2.1.9.2. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. 
Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream The Manager and its resources are now running in the new self-hosted environment. The self-hosted engine nodes must be reinstalled in the Manager to update their self-hosted engine configuration. Standard hosts are not affected. Perform the following procedure for each self-hosted engine node. 3.2.1.9.3. Reinstalling Hosts Reinstall Red Hat Virtualization Hosts (RHVH) and Red Hat Enterprise Linux hosts from the Administration Portal. The procedure includes stopping and restarting the host. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. Prerequisites If the cluster has migration enabled, virtual machines can automatically migrate to another host in the cluster. Therefore, reinstall a host while its usage is relatively low. Ensure that the cluster has sufficient memory for its hosts to perform maintenance. If a cluster lacks memory, migration of virtual machines will hang and then fail. To reduce memory usage, shut down some or all of the virtual machines before moving the host to maintenance. Ensure that the cluster contains more than one host before performing a reinstall. Do not attempt to reinstall all the hosts at the same time. One host must remain available to perform Storage Pool Manager (SPM) tasks. Procedure Click Compute Hosts and select the host. Click Management Maintenance and OK . Click Installation Reinstall . This opens the Install Host window. Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK to reinstall the host. After a host has been reinstalled and its status returns to Up , you can migrate virtual machines back to the host. Important After you register a Red Hat Virtualization Host to the Red Hat Virtualization Manager and reinstall it, the Administration Portal may erroneously display its status as Install Failed . Click Management Activate , and the host will change to an Up status and be ready for use. 
After reinstalling the self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes: # hosted-engine --vm-status During the restoration, the old self-hosted engine storage domain was renamed, but was not removed from the new environment in case the restoration was faulty. After confirming that the environment is running normally, you can remove the old self-hosted engine storage domain. 3.2.1.9.4. Removing a Storage Domain You have a storage domain in your data center that you want to remove from the virtualized environment. Procedure Click Storage Domains . Move the storage domain to maintenance mode and detach it: Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain. Click OK . The storage domain is permanently removed from the environment. 3.2.1.10. Overwriting a Self-Hosted Engine from an Existing Backup If a self-hosted engine is accessible, but is experiencing an issue such as database corruption, or a configuration error that is difficult to roll back, you can restore the environment to a previous state using a backup taken before the problem began, if one is available. Restoring a self-hosted engine's state involves the following steps: Place the environment in global maintenance mode. Restore the backup on the Manager virtual machine. Disable global maintenance mode. For more information about engine-backup --mode=restore options, see Backing Up and Restoring the Manager . 3.2.1.10.1. Enabling global maintenance mode You must place the self-hosted engine environment in global maintenance mode before performing any setup or upgrade tasks on the Manager virtual machine. Procedure Log in to one of the self-hosted engine nodes and enable global maintenance mode: # hosted-engine --set-maintenance --mode=global Confirm that the environment is in global maintenance mode before proceeding: # hosted-engine --vm-status You should see a message indicating that the cluster is in global maintenance mode. 3.2.1.10.2. Restoring a Backup to Overwrite an Existing Installation The engine-backup command can restore a backup to a machine on which the Red Hat Virtualization Manager has already been installed and set up. This is useful when you have taken a backup of an environment, performed changes on that environment, and then want to undo the changes by restoring the environment from the backup. Changes made to the environment since the backup was taken, such as adding or removing a host, will not appear in the restored environment. You must redo these changes. Procedure Log in to the Manager machine. Remove the configuration files and clean the database associated with the Manager: # engine-cleanup The engine-cleanup command only cleans the Manager database; it does not drop the database or delete the user that owns that database. Restore a full backup or a database-only backup. You do not need to create a new database or specify the database credentials because the user and database already exist. 
Restore a full backup: # engine-backup --mode=restore --file= file_name --log= log_file_name --restore-permissions Restore a database-only backup by restoring the configuration files and the database backup: # engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file= file_name --log= log_file_name --restore-permissions Note To restore only the Manager database (for example, if the Data Warehouse database is located on another machine), you can omit the --scope=dwhdb parameter. If successful, the following output displays: You should now run engine-setup. Done. Reconfigure the Manager: # engine-setup 3.2.1.10.3. Disabling global maintenance mode Procedure Log in to the Manager virtual machine and shut it down. Log in to one of the self-hosted engine nodes and disable global maintenance mode: # hosted-engine --set-maintenance --mode=none When you exit global maintenance mode, ovirt-ha-agent starts the Manager virtual machine, and then the Manager automatically starts. It can take up to ten minutes for the Manager to start. Confirm that the environment is running: # hosted-engine --vm-status The listed information includes Engine Status . The value for Engine status should be: Note When the virtual machine is still booting and the Manager hasn't started yet, the Engine status is: If this happens, wait a few minutes and try again. When the environment is running again, you can start any virtual machines that were stopped, and check that the resources in the environment are behaving as expected. 3.2.2. Migrating the Data Warehouse to a Separate Machine This section describes how to migrate the Data Warehouse database and service from the Red Hat Virtualization Manager machine to a separate machine. Hosting the Data Warehouse service on a separate machine reduces the load on each individual machine, and avoids potential conflicts caused by sharing CPU and memory resources with other processes. Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. You have the following migration options: You can migrate the Data Warehouse service away from the Manager machine and connect it with the existing Data Warehouse database ( ovirt_engine_history ). You can migrate the Data Warehouse database away from the Manager machine and then migrate the Data Warehouse service. 3.2.2.1. Migrating the Data Warehouse Database to a Separate Machine Migrate the Data Warehouse database ( ovirt_engine_history ) before you migrate the Data Warehouse service. Use engine-backup to create a database backup and restore it on the new database machine. For more information on engine-backup , run engine-backup --help . Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. The new database server must have Red Hat Enterprise Linux 8 installed. Enable the required repositories on the new database server. 3.2.2.1.1. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Data Warehouse machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. 
Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream 3.2.2.1.2. Migrating the Data Warehouse Database to a Separate Machine Procedure Create a backup of the Data Warehouse database and configuration files on the Manager: # engine-backup --mode=backup --scope=grafanadb --scope=dwhdb --scope=files --file= file_name --log= log_file_name Copy the backup file from the Manager to the new machine: # scp /tmp/file_name [email protected]:/tmp Install engine-backup on the new machine: # dnf install ovirt-engine-tools-backup Install the PostgreSQL server package: # dnf install postgresql-server postgresql-contrib Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot: Restore the Data Warehouse database on the new machine. file_name is the backup file copied from the Manager. # engine-backup --mode=restore --scope=files --scope=grafanadb --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db When the --provision-* option is used in restore mode, --restore-permissions is applied by default. The Data Warehouse database is now hosted on a separate machine from that on which the Manager is hosted. After successfully restoring the Data Warehouse database, a prompt instructs you to run the engine-setup command. Before running this command, migrate the Data Warehouse service. 3.2.2.2. Migrating the Data Warehouse Service to a Separate Machine You can migrate the Data Warehouse service installed and configured on the Red Hat Virtualization Manager to a separate machine. Hosting the Data Warehouse service on a separate machine helps to reduce the load on the Manager machine. Notice that this procedure migrates the Data Warehouse service only. To migrate the Data Warehouse database ( ovirt_engine_history ) prior to migrating the Data Warehouse service, see Migrating the Data Warehouse Database to a Separate Machine . 
Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. Prerequisites You must have installed and configured the Manager and Data Warehouse on the same machine. To set up the new Data Warehouse machine, you must have the following: The password from the Manager's /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file. Allowed access from the Data Warehouse machine to the Manager database machine's TCP port 5432. The username and password for the Data Warehouse database from the Manager's /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file. If you migrated the ovirt_engine_history database using the procedures described in Migrating the Data Warehouse Database to a Separate Machine , the backup includes these credentials, which you defined during the database setup on that machine. Installing this scenario requires four steps: Setting up the New Data Warehouse Machine Stopping the Data Warehouse service on the Manager machine Configuring the new Data Warehouse machine Disabling the Data Warehouse package on the Manager machine 3.2.2.2.1. Setting up the New Data Warehouse Machine Enable the Red Hat Virtualization repositories and install the Data Warehouse setup package on a Red Hat Enterprise Linux 8 machine: Enable the required repositories: Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Ensure that all packages currently installed are up to date: # dnf upgrade --nobest Install the ovirt-engine-dwh-setup package: # dnf install ovirt-engine-dwh-setup 3.2.2.2.2. Stopping the Data Warehouse Service on the Manager Machine Procedure Stop the Data Warehouse service: # systemctl stop ovirt-engine-dwhd.service If the database is hosted on a remote machine, you must manually grant access by editing the postgres.conf file. Edit the /var/lib/pgsql/data/postgresql.conf file and modify the listen_addresses line so that it matches the following: listen_addresses = '*' If the line does not exist or has been commented out, add it manually. If the database is hosted on the Manager machine and was configured during a clean setup of the Red Hat Virtualization Manager, access is granted by default. Restart the postgresql service: # systemctl restart postgresql 3.2.2.2.3. Configuring the New Data Warehouse Machine The order of the options or settings shown in this section may differ depending on your environment. If you are migrating both the ovirt_engine_history database and the Data Warehouse service to the same machine, run the following, otherwise proceed to the step. 
# sed -i '/^ENGINE_DB_/d' \ /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf # sed -i \ -e 's;^\(OVESETUP_ENGINE_CORE/enable=bool\):True;\1:False;' \ -e '/^OVESETUP_CONFIG\/fqdn/d' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf Remove the apache/grafana PKI files, so that they are regenerated by engine-setup with correct values: Run the engine-setup command to begin configuration of Data Warehouse on the machine: # engine-setup Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter : Host fully qualified DNS name of this server [ autodetected host name ]: Press Enter to automatically configure the firewall, or type No and press Enter to maintain existing settings: Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press Enter . This applies even in cases where only one option is listed. Enter the fully qualified domain name and password for the Manager. Press Enter to accept the default values in each other field: Host fully qualified DNS name of the engine server []: engine-fqdn Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: ssh port on remote engine server [22]: root password on remote engine server engine-fqdn : password Enter the FQDN and password for the Manager database machine. Press Enter to accept the default values in each other field: Engine database host []: manager-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password Confirm your installation settings: Please confirm installation settings (OK, Cancel) [OK]: The Data Warehouse service is now configured on the remote machine. Proceed to disable the Data Warehouse service on the Manager machine. 3.2.2.2.4. Disabling the Data Warehouse Service on the Manager Machine Prerequisites The Grafana service on the Manager machine is disabled: # systemctl disable --now grafana-server.service Procedure On the Manager machine, restart the Manager: # service ovirt-engine restart Run the following command to modify the file /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf and set the options to False : # sed -i \ -e 's;^\(OVESETUP_DWH_CORE/enable=bool\):True;\1:False;' \ -e 's;^\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\):True;\1:False;' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf # sed -i \ -e 's;^\(OVESETUP_GRAFANA_CORE/enable=bool\):True;\1:False;' \ /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf Disable the Data Warehouse service: # systemctl disable ovirt-engine-dwhd.service Remove the Data Warehouse files: # rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/* The Data Warehouse service is now hosted on a separate machine from the Manager. 3.2.3. 
Backing Up and Restoring Virtual Machines Using a Backup Storage Domain 3.2.3.1. Backup storage domains explained A backup storage domain is one that you can use specifically for storing and migrating virtual machines and virtual machine templates for the purpose of backing up and restoring for disaster recovery, migration, or any other backup/restore usage model. A backup domain differs from a non-backup domain in that all virtual machines on a backup domain are in a powered-down state. A virtual machine cannot run on a backup domain. You can set any data storage domain to be a backup domain. You can enable or disable this setting by selecting or deselecting a checkbox in the Manage Domain dialog box. You can enable this setting only after all virtual machines on that storage domain are stopped. You cannot start a virtual machine stored on a backup domain. The Manager blocks this and any other operation that might invalidate the backup. However, you can run a virtual machine based on a template stored on a backup domain if the virtual machine's disks are not part of a backup domain. As with other types of storage domains, you can attach or detach backup domains to or from a data center. So, in addition to storing backups, you can use backup domains to migrate virtual machines between data centers. Advantages Some reasons to use a backup domain, rather than an export domain, are listed here: You can have multiple backup storage domains in a data center, as opposed to only one export domain. You can dedicate a backup storage domain to use for backup and disaster recovery. You can transfer a backup of a virtual machine, a template, or a snapshot to a backup storage domain Migrating a large number of virtual machines, templates, or OVF files is significantly faster with backup domains than export domains. A backup domain uses disk space more efficiently than an export domain. Backup domains support both file storage (NFS, Gluster) and block storage (Fiber Channel and iSCSI). This contrasts with export domains, which only support file storage. You can dynamically enable and disable the backup setting for a storage domain, taking into account the restrictions. Restrictions Any virtual machine or template on a _backup domain must have all its disks on that same domain. All virtual machines on a storage domain must be powered down before you can set it to be a backup domain. You cannot run a virtual machine that is stored on a backup domain, because doing so might manipulate the disk's data. A backup domain cannot be the target of memory volumes because memory volumes are only supported for active virtual machines. You cannot preview a virtual machine on a backup domain. Live migration of a virtual machine to a backup domain is not possible. You cannot set a backup domain to be the master domain. You cannot set a Self-hosted engine's domain to be a backup domain. Do not use the default storage domain as a backup domain. 3.2.3.2. Setting a data storage domain to be a backup domain Prerequisites All disks belonging to a virtual machine or template on the storage domain must be on the same domain. All virtual machines on the domain must be powered down. Procedure In the Administration Portal, select Storage Domains . Create a new storage domain or select an existing storage domain and click Manage Domain . The Manage Domains dialog box opens. Under Advanced Parameters , select the Backup checkbox. The domain is now a backup domain. 3.2.3.3. 
Backing up or Restoring a Virtual Machine or Snapshot Using a Backup Domain You can back up a powered down virtual machine or snapshot. You can then store the backup on the same data center and restore it as needed, or migrate it to another data center. Procedure: Backing Up a Virtual Machine Create a backup domain. See Setting a storage domain to be a backup domain backup domain . Create a new virtual machine based on the virtual machine you want to back up: To back up a snapshot, first create a virtual machine from a snapshot. See Creating a Virtual Machine from a Snapshot in the Virtual Machine Management Guide . To back up a virtual machine, first clone the virtual machine. See Cloning a Virtual Machine in the Virtual Machine Management Guide . Make sure the clone is powered down before proceeding. Export the new virtual machine to a backup domain. See Exporting a Virtual Machine to a Data Domain in the Virtual Machine Management Guide . Procedure: Restoring a Virtual Machine Make sure that the backup storage domain that stores the virtual machine backup is attached to a data center. Import the virtual machine from the backup domain. See Importing Virtual Machines from a Data Domain . Related information Importing storage domains Migrating storage domains between data centers in same environment Migrating storage domains between data centers in different environments 3.2.4. Backing Up and Restoring Virtual Machines Using the Backup and Restore API 3.2.4.1. The Backup and Restore API The backup and restore API is a collection of functions that allows you to perform full or file-level backup and restoration of virtual machines. The API combines several components of Red Hat Virtualization, such as live snapshots and the REST API, to create and work with temporary volumes that can be attached to a virtual machine containing backup software provided by an independent software provider. For supported third-party backup vendors, consult the Red Hat Virtualization Ecosystem . 3.2.4.2. Backing Up a Virtual Machine Use the backup and restore API to back up a virtual machine. This procedure assumes you have two virtual machines: the virtual machine to back up, and a virtual machine on which the software for managing the backup is installed. Procedure Using the REST API, create a snapshot of the virtual machine to back up: POST /api/vms/ {vm:id} /snapshots/ HTTP/1.1 Accept: application/xml Content-type: application/xml <snapshot> <description>BACKUP</description> </snapshot> Note Here, replace {vm:id} with the VM ID of the virtual machine whose snapshot you are making. This ID is available from the General tab of the New Virtual Machine and Edit Virtual Machine windows in the Administration Portal and VM Portal . Taking a snapshot of a virtual machine stores its current configuration data in the data attribute of the configuration attribute in initialization under the snapshot. Important You cannot take snapshots of disks marked as shareable or based on direct LUN disks. Retrieve the configuration data of the virtual machine from the data attribute under the snapshot: GET /api/vms/ {vm:id} /snapshots/ {snapshot:id} HTTP/1.1 All-Content: true Accept: application/xml Content-type: application/xml Note Here, replace {vm:id} with the ID of the virtual machine whose snapshot you made earlier. Replace {snapshot:id} with the snapshot ID. Add the All-Content: true header to retrieve additional OVF data in the response. 
The OVF data in the XML response is located within the VM configuration element, <initialization><configuration> . Later, you will use this data to restore the virtual machine. Get the snapshot ID: GET /api/vms/{vm:id}/snapshots/ HTTP/1.1 Accept: application/xml Content-type: application/xml Identify the disk ID of the snapshot: GET /api/vms/ {vm:id} /snapshots/ {snapshot:id} /disks HTTP/1.1 Accept: application/xml Content-type: application/xml Attach the snapshot to a backup virtual machine as an active disk attachment, with the correct interface type (for example, virtio_scsi ): POST /api/vms/ {vm:id} /diskattachments/ HTTP/1.1 Accept: application/xml Content-type: application/xml <disk_attachment> <active>true</active> <interface>_virtio_scsi_</interface> <disk id=" {disk:id} "> <snapshot id=" {snapshot:id} "/> </disk> </disk_attachment> Note Here, replace {vm:id} with the ID of the backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID. Replace {snapshot:id} with the snapshot ID. Use the backup software on the backup virtual machine to back up the data on the snapshot disk. Remove the snapshot disk attachment from the backup virtual machine: DELETE /api/vms/ {vm:id} /diskattachments/ {snapshot:id} HTTP/1.1 Accept: application/xml Content-type: application/xml Note Here, replace {vm:id} with the ID of the backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {snapshot:id} with the snapshot ID. Optionally, delete the snapshot: DELETE /api/vms/ {vm:id} /snapshots/ {snapshot:id} HTTP/1.1 Accept: application/xml Content-type: application/xml Note Here, replace {vm:id} with the ID of the virtual machine whose snapshot you made earlier. Replace {snapshot:id} with the snapshot ID. You have backed up the state of a virtual machine at a fixed point in time using backup software installed on a separate virtual machine. 3.2.4.3. Restoring a Virtual Machine Restore a virtual machine that has been backed up using the backup and restore API. This procedure assumes you have a backup virtual machine on which the software used to manage the backup is installed. Procedure In the Administration Portal, create a floating disk on which to restore the backup. See Creating a Virtual Disk for details on how to create a floating disk. Attach the disk to the backup virtual machine: POST /api/vms/ {vm:id} /disks/ HTTP/1.1 Accept: application/xml Content-type: application/xml <disk id=" {disk:id} "> </disk> Note Here, replace {vm:id} with the ID of this backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID you got while backing up the virtual machine. Use the backup software to restore the backup to the disk. Detach the disk from the backup virtual machine: DELETE /api/vms/ {vm:id} /disks/ {disk:id} HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <detach>true</detach> </action> Note Here, replace {vm:id} with the ID of this backup virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID. Create a new virtual machine using the configuration data of the virtual machine being restored: Note To override any of the values in the ovf while creating the virtual machine, redefine the element before or after the initialization element. Not within the initialization element. 
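The request body for the step above (creating the new virtual machine from the retrieved OVF configuration data) is not reproduced in this section. A minimal illustrative form might look like the following; the cluster_name and restored_vm values are placeholders, and the element layout should be verified against the REST API Guide before use:
POST /api/vms/ HTTP/1.1
Accept: application/xml
Content-type: application/xml
<vm>
  <cluster>
    <name>cluster_name</name>
  </cluster>
  <name>restored_vm</name>
  <initialization>
    <configuration>
      <type>ovf</type>
      <data>
        <!-- The OVF data retrieved from the snapshot's configuration goes here -->
      </data>
    </configuration>
  </initialization>
</vm>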
Attach the disk to the new virtual machine: POST /api/vms/ {vm:id} /disks/ HTTP/1.1 Accept: application/xml Content-type: application/xml <disk id=" {disk:id} "> </disk> Note Here, replace {vm:id} with the ID of the new virtual machine, not the virtual machine whose snapshot you made earlier. Replace {disk:id} with the disk ID. You have restored a virtual machine using a backup that was created using the backup and restore API. 3.2.5. Backing Up and Restoring Virtual Machines Using the Incremental Backup and Restore API 3.2.5.1. Incremental Backup and Restore API Red Hat Virtualization provides an Incremental Backup API that you can use for full backups of QCOW2 or RAW virtual disks, or incremental backups of QCOW2 virtual disks, without any temporary snapshots. Data is backed up in RAW format, whether the virtual disk being backed up is QCOW2 or RAW. You can restore RAW guest data and either RAW or QCOW2 disks. The Incremental Backup API is part of the RHV REST API. You can back up virtual machines that are running or that are not. As a developer, you can use the API to develop a backup application. Features Backups are simpler, faster and more robust than when using the Backup and Restore API. The Incremental Backup API provides improved integration with backup applications, with new support for backing up and restoring RAW guest data, regardless of the underlying disk format. If an invalid bitmap causes a backup to fail, you can remove a specific checkpoint in the backup chain. You do not need to run a full backup. Limitations: Only disks in QCOW2 format can be backed up incrementally, not RAW format disks. The backup process saves the backed up data in RAW format. Only backed up data in RAW format can be restored. Incremental restore does not support restoring snapshots as they existed at the time of the backup. As is commonly the case with backup solutions, incremental restore restores only the data, and not the structure of volumes or images in snapshots as they existed at the time of the backup. An unclean shutdown of a virtual machine, whatever the cause, might invalidate bitmaps on the disk, which invalidates the entire backup chain. Restoring an incremental backup using an invalid bitmap leads to corrupt virtual machine data. There is no way to detect an invalid bitmap, other than starting a backup. If the disk includes any invalid bitmaps, the operation fails. The following table describes the disk configurations that support incremental backup. Note When you create a disk using the Administration portal, you set the storage type, provisioning type, and whether incremental backup is enabled or disabled. Based on these settings, the Manager determines the virtual disk format. Table 3.1. Supported disk configurations for incremental backup
Storage type | Provisioning type | When incremental backup is... | Virtual disk format is...
block | thin | enabled | qcow2
block | preallocated | enabled | qcow2 (preallocated)
file | thin | enabled | qcow2
file | preallocated | enabled | qcow2 (preallocated)
block | thin | disabled | qcow2
block | preallocated | disabled | raw (preallocated)
file | thin | disabled | raw (sparse)
file | preallocated | disabled | raw (preallocated)
network | Not applicable | disabled | raw
lun | Not applicable | disabled | raw
3.2.5.1.1. 
Incremental Backup Flow A backup application that uses the Incremental Backup API must follow this sequence to back up virtual machine disks that have already been enabled for incremental backup: The backup application uses the REST API to find virtual machine disks that should be included in the backup. Only disks in QCOW2 format are included. The backup application starts a full backup or an incremental backup . The API call specifies a virtual machine ID, an optional checkpoint ID, and a list of disks to back up. If the API call does not specify a checkpoint ID, a full backup begins, which includes all data in the specified disks, based on the current state of each disk. The engine prepares the virtual machine for backup. The virtual machine can continue running during the backup. The backup application polls the engine for the backup status, until the engine reports that the backup is ready to begin. When the backup is ready to begin, the backup application creates an image transfer object for every disk included in the backup. The backup application gets a list of changed blocks from ovirt-imageio for every image transfer . If a change list is not available, the backup application gets an error. The backup application downloads changed blocks in RAW format from ovirt-imageio and stores them in the backup media . If a list of changed blocks is not available, the backup application can fall back to copying the entire disk. The backup application finalizes all image transfers. The backup application finalizes the backup using the REST API . 3.2.5.1.2. Incremental Restore Flow A backup application that uses the Incremental Backup API must follow this sequence to restore virtual machine disks that have been backed up: The user selects a restore point based on available backups using the backup application. The backup application creates a new disk or a snapshot with an existing disk to hold the restored data. The backup application starts an upload image transfer for every disk , specifying format is raw . This enables format conversion when uploading RAW data to a QCOW2 disk. The backup application transfers the data included in this restore point to imageio using the API . The backup application finalizes the image transfers. 3.2.5.1.3. Incremental Backup and Restore API Tasks The Incremental Backup and Restore API is documented in the Red Hat Virtualization REST API Guide . The backup and restore flow requires the following actions. Enabling incremental backup on either a new or existing virtual disk: A new disk, using the Administration Portal An existing disk, using the Administration Portal A new or existing disk, using an API call Finding disks that are enabled for incremental backup Starting a full backup Starting an incremental backup Finalizing a backup Getting information about a backup Getting information about the disks in a backup Listing all checkpoints for a virtual machine Listing information for a specific virtual machine checkpoint Removing a checkpoint of a specific virtual machine Downloading an image transfer object to archive a backup Uploading an image transfer object to restore a backup Listing changed blocks Downloading and uploading changed blocks 3.2.5.1.4. Enabling Incremental Backup on a new virtual disk Enable incremental backup for a virtual disk to mark it as included in an incremental backup. When adding a disk, you can enable incremental backup for every disk, either with the REST API or using the Administration Portal. 
You can back up existing disks that are not enabled for incremental backup using full backup or in the same way you did previously. Note The Manager does not require the disk to be enabled for it to be included in an incremental backup, but you can enable it to keep track of which disks are enabled. Because incremental backup requires disks to be formatted in QCOW2, use QCOW2 format instead of RAW format. Procedure Add a new virtual disk. For more information see Creating a Virtual Disk . When configuring the disk, select the Enable Incremental Backup checkbox. Additional resources Enabling incremental backup for a disk using the API . 3.2.5.1.5. Enabling Incremental Backup on an existing RAW virtual disk Because incremental backup is not supported for disks in RAW format, a QCOW2 format layer must exist on top of any RAW format disks in order to use incremental backup. Creating a snapshot generates a QCOW2 layer, enabling incremental backup on all disks that are included in the snapshot, from the point at which the snapshot is created. Warning If the base layer of a disk uses RAW format, deleting the last snapshot and merging the top QCOW2 layer into the base layer converts the disk to RAW format, thereby disabling incremental backup if it was set. To re-enable incremental backup, you can create a new snapshot, including this disk. Procedure In the Administration Portal, click Compute Virtual Machines . Select a virtual machine and click the Disks tab. Click the Edit button. The Edit Disk dialog box opens. Select the Enable Incremental Backup checkbox. Additional resources Enabling incremental backup for a disk using the API 3.2.5.1.6. Enabling incremental backup You can use a REST API request to enable incremental backup for a virtual machine's disk. Procedure Enable incremental backup for a new disk. For example, for a new disk on a virtual machine with ID 123 , send this request: POST /ovirt-engine/api/vms/123/diskattachments The request body should include backup set to incremental as part of a disk object, like this: <disk_attachment> ... <disk> ... <backup>incremental</backup> ... </disk> </disk_attachment> The response is: <disk_attachment> ... <disk href="/ovirt-engine/api/disks/456" id="456"/> ... </disk_attachment> Additional resources DiskBackup enum in the REST API Guide for RHV 3.2.5.1.7. Finding disks that are enabled for incremental backup For the specified virtual machine, you can list the disks that are enabled for incremental backup, filtered according to the backup property. Procedure List the disks that are attached to the virtual machine. For example, for a virtual machine with the ID 123 , send this request: GET /ovirt-engine/api/vms/123/diskattachments The response includes all disk_attachment objects, each of which includes one or more disk objects. For example: <disk_attachments> <disk_attachment> ... <disk href="/ovirt-engine/api/disks/456" id="456"/> ... </disk_attachment> ... </disk_attachments> Use the disk service to see the properties of a disk from the step. For example, for the disk with the ID 456 , send this request: GET /ovirt-engine/api/disks/456 The response includes all properties for the disk. backup is set to none or incremental . For example: <disk href="/ovirt-engine/api/disks/456" id="456"> ... <backup>incremental</backup> ... </disk> Additional resources backup attribute of Disk struct DiskBackup enum 3.2.5.1.8. Starting a full backup After a full backup you can use the resulting checkpoint ID as the start point in the incremental backup. 
When taking a backup of a running virtual machine, the process creates a scratch disk on the same storage domain as the disk being backed up. The backup process creates this disk to enable new data to be written to the running virtual machine during the backup. You can see this scratch disk in the Administration Portal during the backup. It is automatically deleted when the backup finishes. Starting a full backup requires a request call with a body, and includes a response. Procedure Send a request specifying a virtual machine to back up. For example, specify a virtual machine with ID 123 like this: POST /ovirt-engine/api/vms/123/backups In the request body, specify a disk to back up. For example, to start a full backup of a disk with ID 456 , send the following request body: <backup> <disks> <disk id="456" /> ... </disks> </backup> The response body should look similar to this: <backup id="789"> <disks> <disk id="456" /> ... ... </disks> <status>initializing</status> <creation_date> </backup> The response includes the following: The backup ID. The status of the backup, indicating that the backup is initializing. Poll the backup until the status is ready . The response includes to_checkpoint_id . Note this ID and use it for from_checkpoint_id in the next incremental backup. Additional resources add method of the VmBackups service in the REST API Guide for RHV 3.2.5.1.9. Starting an incremental backup Once a full backup is completed for a given virtual disk, subsequent incremental backups of that disk contain only the changes since the last backup. Use the value of to_checkpoint_id from the most recent backup as the value for from_checkpoint_id in the request body. When taking a backup of a running virtual machine, the process creates a scratch disk on the same storage domain as the disk being backed up. The backup process creates this disk to enable new data to be written to the running virtual machine during the backup. You can see this scratch disk in the Administration Portal during the backup. It is automatically deleted when the backup finishes. Starting an incremental backup or mixed backup requires a request call with a body, and includes a response. Procedure Send a request specifying a virtual machine to back up. For example, specify a virtual machine with ID 123 like this: POST /ovirt-engine/api/vms/123/backups In the request body, specify a disk to back up. For example, to start an incremental backup of a disk with ID 456 , send the following request body: <backup> <from_checkpoint_id> -checkpoint-uuid </from_checkpoint_id> <disks> <disk id="456" /> ... </disks> </backup> Note In the request body, if you include a disk that is not included in the checkpoint, the request also runs a full backup of this disk. For example, a disk with ID 789 has not been backed up yet. To add a full backup of 789 to the above request body, send a request body like this: <backup> <from_checkpoint_id> -checkpoint-uuid </from_checkpoint_id> <disks> <disk id="456" /> <disk id="789" /> ... </disks> </backup> The response body should look similar to this: <backup id="101112"> <from_checkpoint_id> -checkpoint-uuid </from_checkpoint_id> <to_checkpoint_id> new-checkpoint-uuid </to_checkpoint_id> <disks> <disk id="456" /> <disk id="789" /> ... ... </disks> <status>initializing</status> <creation_date> </backup> The response includes the following: The backup ID. The ID of any disk that was included in the backup. The status, indicating that the backup is initializing. Poll the backup until the status is ready . 
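A status-polling request is not shown in this guide. An illustrative form, reusing the virtual machine ID 123 and the backup ID 101112 from the example response above, is:
GET /ovirt-engine/api/vms/123/backups/101112 HTTP/1.1
Accept: application/xml
Repeat the request until the response reports <status>ready</status>.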
The response includes to_checkpoint_id . Note this ID and use it for from_checkpoint_id in the next incremental backup. Additional resources add method of the VmBackups service in the REST API Guide for RHV. 3.2.5.1.10. Getting information about a backup You can get information about a backup that you can use to start a new incremental backup. The list method of the VmBackups service returns the following information about a backup: The ID of each disk that was backed up. The IDs of the start and end checkpoints of the backup. The ID of the disk image of the backup, for each disk included in the backup. The status of the backup. The date the backup was created. When the value of <status> is ready , the response includes <to_checkpoint_id> , which should be used as the <from_checkpoint_id> in the next incremental backup, and you can start downloading the disks to back up the virtual machine storage. Procedure To get information about a backup with ID 456 of a virtual machine with ID 123, send a request like this: GET /ovirt-engine/api/vms/123/backups/456 The response includes the backup with ID 456, with <from_checkpoint_id> 999 and <to_checkpoint_id> 666. The disks included in the backup are referenced in the <link> element. <backup id="456"> <from_checkpoint_id>999</from_checkpoint_id> <to_checkpoint_id>666</to_checkpoint_id> <link href="/ovirt-engine/api/vms/123/backups/456/disks" rel="disks"/> <status>ready</status> <creation_date> </backup> Additional resources list method of the VmBackups service 3.2.5.1.11. Getting information about the disks in a backup You can get information about the disks that are part of the backup, including the backup mode that was used for each disk in a backup, which helps determine the mode that you use to download the backup. The list method of the VmBackupDisks service returns the following information about a backup: The ID and name of each disk that was backed up. The ID of the disk image of the backup, for each disk included in the backup. The disk format. The backup behavior supported by the disk. The backup type that was taken for the disk (full/incremental). Procedure To get information about the disks in a backup with ID 456 of a virtual machine with ID 123, send a request like this: GET /ovirt-engine/api/vms/123/backups/456/disks The response includes the disk with ID 789, and the ID of the disk image is 555. <disks> <disk id="789"> <name>vm1_Disk1</name> <actual_size>671744</actual_size> <backup>incremental</backup> <backup_mode>full</backup_mode> <format>cow</format> <image_id>555</image_id> <qcow_version>qcow2_v3</qcow_version> <status>locked</status> <storage_type>image</storage_type> <total_size>0</total_size> </disk> </disks> Additional resources list method of the VmBackupDisks service 3.2.5.1.12. Finalizing a backup Finalizing a backup ends the backup, unlocks resources, and performs cleanups. Use the finalize backup service method. Procedure To finalize a backup with ID 456 of a virtual machine with ID 123 , send a request like this: POST /vms/123/backups/456/finalize Additional resources finalize POST in the REST API Guide . 3.2.5.1.13. Creating an image transfer object for incremental backup When the backup is ready to download, the backup application should create an imagetransfer object, which initiates a transfer for an incremental backup. Creating an image transfer object requires a request call with a body. Procedure Send a request like this: POST /ovirt-engine/api/imagetransfers In the request body, specify the following parameters: Disk ID. 
Backup ID. Direction of the disk set to download . Format of the disk set to raw . For example, to transfer a backup of a disk where the ID of the disk is 123 and the ID of the backup is 456 , send the following request body: <image_transfer> <disk id="123"/> <backup id="456"/> <direction>download</direction> <format>raw</format> </image_transfer> Additional resources add method for creating an imagetransfer object in the REST API Guide for RHV. 3.2.5.1.14. Creating an image transfer object for incremental restore To enable restoring raw data backed up using the incremental backup API to a QCOW2-formatted disk, the backup application should create an imagetransfer object. When the transfer format is raw and the underlying disk format is QCOW2, uploaded data is converted on the fly to QCOW2 format when writing to storage. Uploading data from a QCOW2 disk to a RAW disk is not supported. Creating an image transfer object requires a request call with a body. Procedure Send a request like this: POST /ovirt-engine/api/imagetransfers In the request body, specify the following parameters: Disk ID or snapshot ID. Direction of the disk set to upload . Format of the disk set to raw . For example, to transfer a backup of a disk where the ID of the disk is 123 , send the following request body: <image_transfer> <disk id="123"/> <direction>upload</direction> <format>raw</format> </image_transfer> Additional resources add method for creating an imagetransfer object in the REST API Guide for RHV. 3.2.5.1.15. Listing checkpoints for a virtual machine You can list all checkpoints for a virtual machine, including information for each checkpoint, by sending a request call. Procedure Send a request specifying a virtual machine. For example, specify a virtual machine with ID 123 like this: GET /vms/123/checkpoints/ The response includes all the virtual machine checkpoints. Each checkpoint contains the following information: The checkpoint's disks. The ID of the parent checkpoint. Creation date of the checkpoint. The virtual machine to which it belongs. For example: <parent_id>, <creation_date> and the virtual machine it belongs to <vm>: <checkpoints> <checkpoint id="456"> <link href="/ovirt-engine/api/vms/vm-uuid/checkpoints/456/disks" rel="disks"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href="/ovirt-engine/api/vms/123" id="123"/> </checkpoint> </checkpoints> Additional resources list method to list virtual machine checkpoints in the REST API Guide for RHV. 3.2.5.1.16. Listing a specific checkpoint for a virtual machine You can list information for a specific checkpoint for a virtual machine by sending a request call. Procedure Send a request specifying a virtual machine. For example, specify a virtual machine with ID 123 and checkpoint ID 456 like this: GET /vms/123/checkpoints/456 The response includes the following information for the checkpoint: The checkpoint's disks. The ID of the parent checkpoint. Creation date of the checkpoint. The virtual machine to which it belongs. For example: <checkpoint id="456"> <link href="/ovirt-engine/api/vms/vm-uuid/checkpoints/456/disks" rel="disks"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href="/ovirt-engine/api/vms/123" id="123"/> </checkpoint> Additional resources list method to list virtual machine checkpoints in the REST API Guide for RHV. 3.2.5.1.17. Removing a checkpoint You can remove a checkpoint of a virtual machine by sending a DELETE request. 
You can remove a checkpoint on a virtual machine whether it is running or not. Procedure Send a request specifying a virtual machine and a checkpoint. For example, specify a virtual machine with ID 123 and a checkpoint with ID 456 like this: DELETE /vms/123/checkpoints/456/ Additional resources remove method of VmCheckpoint 3.2.5.1.18. Using the imageio API to transfer backup data Image transfer APIs start and stop an image transfer. The result is a transfer URL. You use the imageio API to actually transfer the data from the transfer URL. For complete information on using the imageio API, see the ovirt-imageio Images API reference . Table 3.2. imageio Image API methods used in incremental backup and restore (API request, description, and the relevant section of the imageio Images API reference):
OPTIONS /images/{ticket-id} HTTP/1.1 : Gets the server options, to find out what features the server supports. See OPTIONS.
GET /images/{ticket-id}/extents : Gets information about disk image content and allocation, or about blocks that changed during an incremental backup. This information is known as extent information. See EXTENTS.
GET /images/{ticket-id}/extents?context=dirty : The program doing the image transfer needs to download the changes from the backup. These changes are known as dirty extents. To download the changes, send a request like this one. See EXTENTS -> Examples -> Request dirty extents.
PUT /images/{ticket-id} : The backup application creates a new disk or a snapshot with an existing disk to hold the restored data. See PUT.
Additional resources The Red Hat Virtualization Python SDK includes several implementation examples you can use to get started with transferring backups: ovirt-imageio Images API reference; Creating a disk; Calling imagetransfer.create_transfer(), a helper to simplify creating a transfer; Using the Red Hat Virtualization Python SDK
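For example, the data-transfer step described in the table above can be exercised with curl. This is a minimal sketch under a few assumptions: the host name and ticket ID are placeholders, 54322 is assumed to be the default ovirt-imageio daemon port, and in practice you take the full transfer URL from the transfer_url field of the imagetransfer object you created earlier.
# Ask the imageio daemon which blocks changed since the last checkpoint (dirty extents).
# -k skips TLS verification and is used here only to keep the sketch short.
curl -k "https://host.example.com:54322/images/<ticket-id>/extents?context=dirty"
# Download one of the reported ranges with an HTTP Range request and save it locally.
curl -k -H "Range: bytes=0-65535" -o changed-block.raw "https://host.example.com:54322/images/<ticket-id>"
Each entry returned by the extents call describes a start offset, a length, and whether the range is dirty; the backup application repeats the Range download for every dirty range and records the offsets so the data can be applied to the correct locations during a restore.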
[ "engine-backup --mode=backup", "engine-backup --mode=restore", "engine-backup", "engine-backup", "engine-backup --scope=files --scope=db", "engine-backup --scope=files --scope=dwhdb", "mkdir -p /etc/ovirt-engine-backup/engine-backup-config.d", "BACKUP_PATHS=\"USD{BACKUP_PATHS} /etc/chrony.conf /etc/ntp.conf /etc/ovirt-engine-backup\"", "engine-backup --mode=restore --file= file_name --log= log_file_name --provision-db", "engine-backup --mode=restore --file= file_name --log= log_file_name --provision-db --provision-dwh-db", "engine-backup --mode=restore --scope=files --scope=db --file= file_name --log= log_file_name --provision-db", "engine-backup --mode=restore --scope=files --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db", "You should now run engine-setup. Done.", "engine-setup", "engine-cleanup", "engine-backup --mode=restore --file= file_name --log= log_file_name --restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file= file_name --log= log_file_name --restore-permissions", "You should now run engine-setup. Done.", "engine-setup", "engine-cleanup", "su - postgres -c 'psql'", "postgres=# alter role user_name encrypted password ' new_password ';", "engine-backup --mode=restore --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --no-restore-permissions", "engine-backup --mode=restore --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --change-dwh-db-credentials --dwh-db-host= database_location --dwh-db-name= database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --file= file_name --log= log_file_name --change-db-credentials --db-host= database_location --db-name= database_name --db-user=engine --db-password --no-restore-permissions", "engine-backup --mode=restore --scope=files --scope=dwhdb --file= file_name --log= log_file_name --change-dwh-db-credentials --dwh-db-host= database_location --dwh-db-name= database_name --dwh-db-user=ovirt_engine_history --dwh-db-password --no-restore-permissions", "You should now run engine-setup. 
Done.", "engine-setup", "hosted-engine --set-maintenance --mode=global", "systemctl stop ovirt-engine systemctl disable ovirt-engine", "engine-backup --mode=backup --file= file_name --log= log_file_name", "scp -p file_name log_file_name storage.example.com:/backup/", "subscription-manager unregister", "hosted-engine --vm-shutdown", "scp -p file_name host.example.com:/backup/", "dnf install ovirt-hosted-engine-setup", "dnf -y install tmux tmux", "hosted-engine --deploy --restore-from-file=backup/ file_name", "[ INFO ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up' [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]", "gluster volume set VOLUME_NAME group virt gluster volume set VOLUME_NAME performance.strict-o-direct on gluster volume set VOLUME_NAME network.remote-dio off gluster volume set VOLUME_NAME storage.owner-uid 36 gluster volume set VOLUME_NAME storage.owner-gid 36 gluster volume set VOLUME_NAME network.ping-timeout 30", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "dnf repolist", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms", "subscription-manager release --set=8.6", "dnf module -y enable pki-deps", "dnf module -y enable postgresql:12", "dnf module -y enable nodejs:14", "dnf distro-sync --nobest", "hosted-engine --vm-status", "scp -p file_name host.example.com:/backup/", "dnf install ovirt-hosted-engine-setup", "dnf -y install tmux tmux", "hosted-engine --deploy --restore-from-file=backup/ file_name", "[ INFO ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up' [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]", "gluster volume set VOLUME_NAME group virt gluster volume set VOLUME_NAME performance.strict-o-direct on gluster volume set VOLUME_NAME network.remote-dio off gluster volume set VOLUME_NAME storage.owner-uid 36 gluster volume set VOLUME_NAME storage.owner-gid 36 gluster volume set VOLUME_NAME network.ping-timeout 30", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "dnf repolist", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms 
--enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms", "subscription-manager release --set=8.6", "dnf module -y enable pki-deps", "dnf module -y enable postgresql:12", "dnf module -y enable nodejs:14", "dnf distro-sync --nobest", "hosted-engine --vm-status", "hosted-engine --set-maintenance --mode=global", "hosted-engine --vm-status", "engine-cleanup", "engine-backup --mode=restore --file= file_name --log= log_file_name --restore-permissions", "engine-backup --mode=restore --scope=files --scope=db --scope=dwhdb --file= file_name --log= log_file_name --restore-permissions", "You should now run engine-setup. Done.", "engine-setup", "hosted-engine --set-maintenance --mode=none", "hosted-engine --vm-status", "{\"health\": \"good\", \"vm\": \"up\", \"detail\": \"Up\"}", "{\"reason\": \"bad vm status\", \"health\": \"bad\", \"vm\": \"up\", \"detail\": \"Powering up\"}", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "dnf repolist", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms", "subscription-manager release --set=8.6", "dnf module -y enable postgresql:12", "dnf module -y enable nodejs:14", "dnf distro-sync --nobest", "engine-backup --mode=backup --scope=grafanadb --scope=dwhdb --scope=files --file= file_name --log= log_file_name", "scp /tmp/file_name [email protected]:/tmp", "dnf install ovirt-engine-tools-backup", "dnf install postgresql-server postgresql-contrib", "su - postgres -c 'initdb' systemctl enable postgresql systemctl start postgresql", "engine-backup --mode=restore --scope=files --scope=grafanadb --scope=dwhdb --file= file_name --log= log_file_name --provision-dwh-db", "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms subscription-manager release --set=8.6", "dnf module -y enable pki-deps", "dnf upgrade --nobest", "dnf install ovirt-engine-dwh-setup", "systemctl stop ovirt-engine-dwhd.service", "listen_addresses = '*'", "systemctl restart postgresql", "sed -i '/^ENGINE_DB_/d' /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf sed -i -e 's;^\\(OVESETUP_ENGINE_CORE/enable=bool\\):True;\\1:False;' -e '/^OVESETUP_CONFIG\\/fqdn/d' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf", "rm -f /etc/pki/ovirt-engine/certs/apache.cer /etc/pki/ovirt-engine/certs/apache-grafana.cer /etc/pki/ovirt-engine/keys/apache.key.nopass /etc/pki/ovirt-engine/keys/apache-grafana.key.nopass /etc/pki/ovirt-engine/apache-ca.pem /etc/pki/ovirt-engine/apache-grafana-ca.pem", "engine-setup", "Host fully qualified DNS name of this server [ autodetected host name ]:", "Setup can automatically configure the firewall on this system. 
Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]:", "Host fully qualified DNS name of the engine server []: engine-fqdn Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action. Please choose one of the following: 1 - Access remote engine server using ssh as root 2 - Perform each action manually, use files to copy content around (1, 2) [1]: ssh port on remote engine server [22]: root password on remote engine server engine-fqdn : password", "Engine database host []: manager-db-fqdn Engine database port [5432]: Engine database secured connection (Yes, No) [No]: Engine database name [engine]: Engine database user [engine]: Engine database password: password", "Please confirm installation settings (OK, Cancel) [OK]:", "systemctl disable --now grafana-server.service", "service ovirt-engine restart", "sed -i -e 's;^\\(OVESETUP_DWH_CORE/enable=bool\\):True;\\1:False;' -e 's;^\\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\\):True;\\1:False;' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf sed -i -e 's;^\\(OVESETUP_GRAFANA_CORE/enable=bool\\):True;\\1:False;' /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf", "systemctl disable ovirt-engine-dwhd.service", "rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/*", "POST /api/vms/ {vm:id} /snapshots/ HTTP/1.1 Accept: application/xml Content-type: application/xml <snapshot> <description>BACKUP</description> </snapshot>", "GET /api/vms/ {vm:id} /snapshots/ {snapshot:id} HTTP/1.1 All-Content: true Accept: application/xml Content-type: application/xml", "GET /api/vms/{vm:id}/snapshots/ HTTP/1.1 Accept: application/xml Content-type: application/xml", "GET /api/vms/ {vm:id} /snapshots/ {snapshot:id} /disks HTTP/1.1 Accept: application/xml Content-type: application/xml", "POST /api/vms/ {vm:id} /diskattachments/ HTTP/1.1 Accept: application/xml Content-type: application/xml <disk_attachment> <active>true</active> <interface>_virtio_scsi_</interface> <disk id=\" {disk:id} \"> <snapshot id=\" {snapshot:id} \"/> </disk> </disk_attachment>", "DELETE /api/vms/ {vm:id} /diskattachments/ {snapshot:id} HTTP/1.1 Accept: application/xml Content-type: application/xml", "DELETE /api/vms/ {vm:id} /snapshots/ {snapshot:id} HTTP/1.1 Accept: application/xml Content-type: application/xml", "POST /api/vms/ {vm:id} /disks/ HTTP/1.1 Accept: application/xml Content-type: application/xml <disk id=\" {disk:id} \"> </disk>", "DELETE /api/vms/ {vm:id} /disks/ {disk:id} HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <detach>true</detach> </action>", "POST /api/vms/ HTTP/1.1 Accept: application/xml Content-type: application/xml <vm> <cluster> <name>cluster_name</name> </cluster> <name>_NAME_</name> <initialization> <configuration> <data> <!-- omitting long ovf data --> </data> <type>ovf</type> </configuration> </initialization> </vm>", "POST /api/vms/ {vm:id} /disks/ HTTP/1.1 Accept: application/xml Content-type: application/xml <disk id=\" {disk:id} \"> </disk>", "POST /ovirt-engine/api/vms/123/diskattachments", "<disk_attachment> ... <disk> ... <backup>incremental</backup> ... </disk> </disk_attachment>", "<disk_attachment> ... <disk href=\"/ovirt-engine/api/disks/456\" id=\"456\"/> ... 
</disk_attachment>", "GET /ovirt-engine/api/vms/123/diskattachments", "<disk_attachments> <disk_attachment> ... <disk href=\"/ovirt-engine/api/disks/456\" id=\"456\"/> ... </disk_attachment> ... </disk_attachments>", "GET /ovirt-engine/api/disks/456", "<disk href=\"/ovirt-engine/api/disks/456\" id=\"456\"> ... <backup>incremental</backup> ... </disk>", "POST /ovirt-engine/api/vms/123/backups", "<backup> <disks> <disk id=\"456\" /> ... </disks> </backup>", "<backup id=\"789\"> <disks> <disk id=\"456\" /> ... ... </disks> <status>initializing</status> <creation_date> </backup>", "POST /ovirt-engine/api/vms/123/backups", "<backup> <from_checkpoint_id> previous-checkpoint-uuid </from_checkpoint_id> <disks> <disk id=\"456\" /> ... </disks> </backup>", "<backup> <from_checkpoint_id> previous-checkpoint-uuid </from_checkpoint_id> <disks> <disk id=\"456\" /> <disk id=\"789\" /> ... </disks> </backup>", "<backup id=\"101112\"> <from_checkpoint_id> previous-checkpoint-uuid </from_checkpoint_id> <to_checkpoint_id> new-checkpoint-uuid </to_checkpoint_id> <disks> <disk id=\"456\" /> <disk id=\"789\" /> ... ... </disks> <status>initializing</status> <creation_date> </backup>", "GET /ovirt-engine/api/vms/456/backups/123", "<backup id=\"456\"> <from_checkpoint_id>999</from_checkpoint_id> <to_checkpoint_id>666</to_checkpoint_id> <link href=\"/ovirt-engine/api/vms/456/backups/123/disks\" rel=\"disks\"/> <status>ready</status> <creation_date> </backup>", "GET /ovirt-engine/api/vms/456/backups/123/disks", "<disks> <disk id=\"789\"> <name>vm1_Disk1</name> <actual_size>671744</actual_size> <backup>incremental</backup> <backup_mode>full</backup_mode> <format>cow</format> <image_id>555</image_id> <qcow_version>qcow2_v3</qcow_version> <status>locked</status> <storage_type>image</storage_type> <total_size>0</total_size> </disk> </disks>", "POST /vms/123/backups/456/finalize", "POST /ovirt-engine/api/imagetransfers", "<image_transfer> <disk id=\"123\"/> <backup id=\"456\"/> <direction>download</direction> <format>raw</format> </image_transfer>", "POST /ovirt-engine/api/imagetransfers", "<image_transfer> <disk id=\"123\"/> <direction>upload</direction> <format>raw</format> </image_transfer>", "GET /vms/123/checkpoints/", "<parent_id>, <creation_date> and the virtual machine it belongs to <vm>: <checkpoints> <checkpoint id=\"456\"> <link href=\"/ovirt-engine/api/vms/vm-uuid/checkpoints/456/disks\" rel=\"disks\"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </checkpoint> </checkpoints>", "GET /vms/123/checkpoints/456", "<checkpoint id=\"456\"> <link href=\"/ovirt-engine/api/vms/vm-uuid/checkpoints/456/disks\" rel=\"disks\"/> <parent_id>parent-checkpoint-uuid</parent_id> <creation_date>xxx</creation_date> <vm href=\"/ovirt-engine/api/vms/123\" id=\"123\"/> </checkpoint>", "DELETE /vms/123/checkpoints/456/" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-Backups_and_Migration
7.2. Registering the Red Hat Support Tool Using the Command Line To register the Red Hat Support Tool with the Customer Portal from the command line, run the redhat-support-tool config user and redhat-support-tool config password commands shown below, where username is the user name of your Red Hat Customer Portal account.
[ "~]# redhat-support-tool config user username", "~]# redhat-support-tool config password Please enter the password for username :" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-registering_the_red_hat_support_tool_using_the_command_line
Appendix E. Messaging Journal Configuration Elements The table below lists all of the configuration elements related to the AMQ Broker messaging journal. Table E.1. Journal configuration elements (name and description):
journal-directory : The directory where the message journal is located. The default value is <broker_instance_dir>/data/journal . For the best performance, the journal should be located on its own physical volume in order to minimize disk head movement. If the journal is on a volume that is shared with other processes that may be writing other files (for example, bindings journal, database, or transaction coordinator), then the disk head may well be moving rapidly between these files as it writes them, thus drastically reducing performance. When using a SAN, each journal instance should be given its own LUN (logical unit).
create-journal-dir : If set to true , the journal directory is automatically created at the location specified in journal-directory if it does not already exist. The default value is true .
journal-type : Valid values are NIO or ASYNCIO . If set to NIO , the broker uses the Java NIO interface to its journal. If set to ASYNCIO , the broker uses the Linux asynchronous IO journal. If you choose ASYNCIO but are not running Linux, or you do not have libaio installed, the broker detects this and automatically falls back to using NIO .
journal-sync-transactional : If set to true , the broker flushes all transaction data to disk on transaction boundaries (that is, commit, prepare, and rollback). The default value is true .
journal-sync-non-transactional : If set to true , the broker flushes non-transactional message data (sends and acknowledgements) to disk each time. The default value is true .
journal-file-size : The size of each journal file in bytes. The default value is 10485760 bytes (10MiB).
journal-min-files : The minimum number of files the broker pre-creates when starting. Files are pre-created only if there is no existing message data. Depending on how much data you expect your queues to contain at steady state, you should tune this number of files to match the total amount of data expected.
journal-pool-files : The system will create as many files as needed; however, when reclaiming files it will shrink back to journal-pool-files . The default value is -1 , meaning it will never delete files on the journal once created. The system cannot grow infinitely, however, as you are still required to use paging for destinations that can grow indefinitely.
journal-max-io : Controls the maximum number of write requests that can be in the IO queue at any one time. If the queue becomes full, writes will block until space is freed up. When using NIO, this value should always be 1 . When using AIO, the default value is 500 . The total max AIO cannot be higher than the value set at the OS level ( /proc/sys/fs/aio-max-nr ), which is usually 65536.
journal-buffer-timeout : Controls the timeout for when the buffer will be flushed. AIO can typically cope with a higher flush rate than NIO, so the system maintains different default values for NIO and AIO. The default value for NIO is 3333333 nanoseconds, or 300 times per second, and the default value for AIO is 50000 nanoseconds, or 2000 times per second. Note: By increasing the timeout value, you might be able to increase system throughput at the expense of latency, since the default values are chosen to give a reasonable balance between throughput and latency.
journal-buffer-size : The size of the timed buffer on AIO. The default value is 490KiB .
journal-compact-min-files : The minimal number of files necessary before the broker compacts the journal. The compacting algorithm will not start until you have at least journal-compact-min-files . The default value is 10 . Note: Setting the value to 0 disables compacting and could be dangerous because the journal could grow indefinitely.
journal-compact-percentage : The threshold to start compacting. Journal data will be compacted if less than journal-compact-percentage is determined to be live data. Note also that compacting will not start until you have at least journal-compact-min-files data files on the journal. The default value is 30 .
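To show how these elements fit together, here is a minimal sketch of the journal settings in a broker.xml file. It assumes the usual <broker_instance_dir>/etc/broker.xml layout and the standard configuration/core wrapper elements; the values are illustrative, not recommendations, and any element you omit falls back to the default listed above.
<configuration xmlns="urn:activemq">
  <core xmlns="urn:activemq:core">
    <!-- Keep the journal on its own physical volume for best performance -->
    <journal-directory>data/journal</journal-directory>
    <create-journal-dir>true</create-journal-dir>
    <!-- ASYNCIO requires Linux with libaio; otherwise the broker falls back to NIO -->
    <journal-type>ASYNCIO</journal-type>
    <journal-file-size>10485760</journal-file-size>
    <journal-min-files>2</journal-min-files>
    <journal-pool-files>10</journal-pool-files>
    <journal-max-io>500</journal-max-io>
    <journal-buffer-timeout>50000</journal-buffer-timeout>
    <journal-compact-min-files>10</journal-compact-min-files>
    <journal-compact-percentage>30</journal-compact-percentage>
  </core>
</configuration>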
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/configuring_message_journal
7.151. mrtg 7.151.1. RHBA-2012:1449 - mrtg bug fix update Updated mrtg packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The mrtg packages provide the Multi Router Traffic Grapher (MRTG) to monitor the traffic load on network links. MRTG generates HTML pages containing Portable Network Graphics (PNG) images, which provide a live, visual representation of this traffic. Bug Fixes BZ# 706519 Prior to this update, the MRTG tool did not handle socket6 correctly. As a consequence, MRTG reported errors when run on a system with an IPv6 network interface due to a socket conflict. This update modifies the underlying code to handle socket6 as expected. BZ# 707188 Prior to this update, changing the "kMG" keyword in the MRTG configuration could cause the labels on the y-axis to overlap the main area of the generated chart. With this update, an upstream patch has been applied to address this issue, and changing the "kMG" keyword in the configuration no longer leads to incorrect rendering of the resulting charts. BZ# 836197 Prior to this update, the wrong value was returned from the IBM Fibre Channel switch when using the ifSpeed interface. As a consequence, mrtg cfgmaker failed to use ifHighSpeed on IBM Fibre Channel switches. This update modifies the underlying code to return the correct value. All users of mrtg are advised to upgrade to these updated packages, which fix these bugs.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/mrtg
11.4. Including Functions in your VDB In order for JBoss Data Virtualization to become aware of your functions, the actual code must be deployed on your server and available to your Teiid submodule. The Teiid Designer workspace is aware of any models containing functions, along with their referenced jars and class information. When a view model containing user-defined functions is added to a VDB, the jar containing the defined function is also added to the VDB and is visible in the VDB Editor's UDF Jars tab. Figure 11.2. VDB UDF Jar Files
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/including_functions_in_your_vdb
Chapter 5. Installing a cluster on IBM Power Virtual Server into an existing VPC In OpenShift Container Platform version 4.15, you can install a cluster into an existing Virtual Private Cloud (VPC) on IBM Cloud(R). The installation program provisions the rest of the required infrastructure, which you can then further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 5.2. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster using an existing IBM(R) Virtual Private Cloud (VPC). Because the installation program cannot know what other components are in your existing subnets, it cannot choose subnet CIDRs and so forth. You must configure networking for the subnets to which you will install the cluster. 5.2.1. Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 5.2.2. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 5.2.3. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. 
During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.5. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.6. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.7. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 5.7.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.7.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. 
You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" credentialsMode: Manual publish: External 13 pullSecret: '{"auths": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 15 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin for installation. The supported value is OVNKubernetes . 11 Specify the name of an existing VPC. 12 Specify the name of the existing VPC subnet. The subnets must belong to the VPC that you specified. Specify a subnet for each availability zone in the region. 13 Specify how to publish the user-facing endpoints of your cluster. 14 Required. The installation program prompts you for this value. 15 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. 
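As an illustration of the note about lower SMT levels above, the following install-config.yaml fragment lowers smtLevel and raises the assigned cores through the processors parameter. The values are assumptions, not recommendations, and the placement of processors under platform.powervs mirrors the smtLevel parameter in the sample above; check the installation configuration parameters reference for the exact schema.
compute:
- architecture: ppc64le
  hyperthreading: Enabled
  name: worker
  platform:
    powervs:
      smtLevel: 4       # assumption: a lower SMT level than the default of 8
      processors: 1     # assumption: extra assigned cores to offset the lower SMT level
  replicas: 3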
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.7.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.8. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.13. Next steps Customize your cluster Optional: Opt out of remote health reporting
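As a quick sanity check after logging in with the exported kubeconfig, and before moving on to the next steps above, you can confirm that the nodes joined the cluster and that the cluster Operators finished rolling out. These are standard oc commands rather than anything specific to IBM Power Virtual Server:
USD oc get nodes
USD oc get clusteroperators
All nodes should report a STATUS of Ready, and every cluster Operator should eventually show AVAILABLE as True with PROGRESSING and DEGRADED as False.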
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-cluster-existing-vpc networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcRegion : vpc-region vpcName: name-of-existing-vpc 11 vpcSubnets: 12 - powervs-region-example-subnet-1 zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" credentialsMode: Manual publish: External 13 pullSecret: '{\"auths\": ...}' 14 fips: false sshKey: ssh-ed25519 AAAA... 15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_ibm_power_virtual_server/installing-ibm-powervs-vpc
Chapter 4. Red Hat Enterprise Linux Software certification
Chapter 4. Red Hat Enterprise Linux Software certification The Red Hat Enterprise Linux (RHEL) Software certification program provides best-in-class performance for your applications on a more secure and stable platform, allowing you to identify, analyze, and fine-tune your workload performance while you are building applications. Customers benefit from a trusted application and infrastructure stack that is tested and jointly supported by Red Hat and its partners. 4.1. Red Hat Enterprise Linux Software certification for Containerized products The Red Hat Enterprise Linux Software certification program for containerized products helps you to build, certify, and distribute your cloud-native products on Red Hat Enterprise Linux and the scalable container platform of Red Hat OpenShift. For an overview about container certification, see Red Hat Container certification . For more information about container image requirements, see Requirements for container images . To get started with container certification, see Working with Containers . 4.2. Red Hat Enterprise Linux Software certification for Non-containerized products The Red Hat Enterprise Linux Software certification program for traditional, non-containerized software products helps Independent Software Vendors (ISVs) to verify the deployment and operation of their application software on systems and server environments running RHEL. To know about the certification requirements, see Program Prerequisites . To get started with the certification process, see Onboarding certification partners . For a detailed procedure about performing the certification process, see Certification workflow . 4.2.1. Red Hat Enterprise Linux Software certification for RPM based products The Red Hat Enterprise Linux Software certification program for RPM based products helps Independent Software Vendors (ISVs) to build, certify, and distribute their application software packaged as RPMs for use on systems and server environments running RHEL. ISVs can use a yum repository to distribute their application software packaged as RPMs. To get started with RHEL software certification for ISVs, see Onboarding certification partners . To know the requirements for running the certification tests, see Testing requirements . For a detailed procedure about performing the certification process, see Certification workflow . 4.2.2. Red Hat Enterprise Linux Software certification for other packaging formats The Red Hat Enterprise Linux Software certification program for other packaging formats helps Independent Software Vendors (ISVs) to verify the deployment and operation of their application software on systems and server environments running RHEL, in a way that does not impact their customers' Red Hat support, security, and life-cycle management. This software certification is provided for those ISVs who choose to use a packaging or distribution method for their software product that is not formally supported by Red Hat. To get started with RHEL software certification for ISVs, see Onboarding certification partners . To know the requirements for running the certification tests, see Testing requirements . For a detailed procedure about performing the certification process, see Certification workflow . In certain instances, Red Hat requires a specific packaging or distribution method to obtain Red Hat Software certification.
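To illustrate the yum repository distribution model mentioned above, an ISV might ship its customers a repository definition similar to the following sketch; the repository ID, URLs, and package name are illustrative only and are not a Red Hat requirement: # cat > /etc/yum.repos.d/example-isv.repo <<'EOF'
[example-isv]
name=Example ISV Application for RHEL
baseurl=https://repo.example.com/rhel9/$basearch/
enabled=1
gpgcheck=1
gpgkey=https://repo.example.com/keys/RPM-GPG-KEY-example
EOF
# yum install example-app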
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_quick_start_guide/assembly_red-hat-enterprise-linux-software-certification_quick-start-guide_red-hat-openstack-certification
Chapter 13. SubjectAccessReview [authorization.k8s.io/v1]
Chapter 13. SubjectAccessReview [authorization.k8s.io/v1] Description SubjectAccessReview checks whether or not a user or group can perform an action. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 13.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 13.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 13.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 13.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. 
subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 13.1.5. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 13.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 13.2.1. /apis/authorization.k8s.io/v1/subjectaccessreviews Table 13.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SubjectAccessReview Table 13.2. Body parameters Parameter Type Description body SubjectAccessReview schema Table 13.3. HTTP responses HTTP code Reponse body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty
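For illustration, a SubjectAccessReview can be created by posting a manifest to the endpoint above, for example with the oc client; the user, group, and attributes shown are hypothetical: $ cat <<'EOF' | oc create -f - -o yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: alice
  groups:
    - developers
  resourceAttributes:
    verb: get
    resource: pods
    namespace: default
EOF
The command prints the created object, and its status.allowed field reports whether the action would be permitted.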
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/subjectaccessreview-authorization-k8s-io-v1
7.84. hwloc
7.84. hwloc 7.84.1. RHBA-2013:0331 - hwloc bug fix and enhancement update Updated hwloc packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The hwloc package provides Portable Hardware Locality, which is a portable abstraction of the hierarchical topology of current architectures. Note The hwloc packages have been upgraded to upstream version 1.5, which provides a number of bug fixes and enhancements over the previous version. (BZ# 797576 ) Users of hwloc are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
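To pick up the rebased packages and confirm that the topology tooling works, commands along these lines can be used; this is a sketch and assumes the lstopo-no-graphics utility shipped with the hwloc package: # yum update hwloc # lstopo-no-graphics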
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/hwloc
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/spine_leaf_networking/making-open-source-more-inclusive
Chapter 30. Configuring time synchronization by using RHEL system roles
Chapter 30. Configuring time synchronization by using RHEL system roles The Network Time Protocol (NTP) and Precision Time Protocol (PTP) are standards to synchronize the clock of computers over a network. An accurate time synchronization in networks is important because certain services rely on it. For example, Kerberos tolerates only a small time difference between the server and client to prevent replay attacks. You can set the time service to configure in the timesync_ntp_provider variable of a playbook. If you do not set this variable, the role determines the time service based on the following factors: On RHEL 8 and later: chronyd On RHEL 6 and 7: chronyd (default) or, if already installed ntpd . 30.1. Configuring time synchronization over NTP by using the timesync RHEL system role The Network Time Protocol (NTP) synchronizes the time of a host with an NTP server over a network. In IT networks, services rely on a correct system time, for example, for security and logging purposes. By using the timesync RHEL system role, you can automate the configuration of Red Hat Enterprise Linux NTP clients in your network and keep the time synchronized. Warning The timesync RHEL system role replaces the configuration of the specified given or detected provider service on the managed host. Consequently, all settings are lost if they are not specified in the playbook. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Managing time synchronization hosts: managed-node-01.example.com tasks: - name: Configuring NTP with an internal server (preferred) and a public server pool as fallback ansible.builtin.include_role: name: rhel-system-roles.timesync vars: timesync_ntp_servers: - hostname: time.example.com trusted: yes prefer: yes iburst: yes - hostname: 0.rhel.pool.ntp.org pool: yes iburst: yes The settings specified in the example playbook include the following: pool: <yes|no> Flags a source as an NTP pool rather than an individual host. In this case, the service expects that the name resolves to multiple IP addresses which can change over time. iburst: yes Enables fast initial synchronization. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.timesync/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the details about the time sources: If the managed node runs the chronyd service, enter: If the managed node runs the ntpd service, enter: Additional resources /usr/share/ansible/roles/rhel-system-roles.time_sync/README.md file /usr/share/doc/rhel-system-roles/time_sync/ directory Are the rhel.pool.ntp.org NTP servers supported by Red Hat? (Red Hat Knowledgebase) 30.2. Configuring time synchronization over NTP with NTS by using the timesync RHEL system role The Network Time Protocol (NTP) synchronizes the time of a host with an NTP server over a network. By using the Network Time Security (NTS) mechanism, clients establish a TLS-encrypted connection to the server and authenticate NTP packets. In IT networks, services rely on a correct system time, for example, for security and logging purposes. 
By using the timesync RHEL system role, you can automate the configuration of Red Hat Enterprise Linux NTP clients in your network and keep the time synchronized over NTS. Note that you cannot mix NTS servers with non-NTS servers. In mixed configurations, NTS servers are trusted and clients do not fall back to unauthenticated NTP sources because they can be exploited in man-in-the-middle (MITM) attacks. For further details, see the authselectmode parameter description in the chrony.conf(5) man page on your system. Warning The timesync RHEL system role replaces the configuration of the specified given or detected provider service on the managed host. Consequently, all settings are lost if they are not specified in the playbook. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes use chronyd . Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Managing time synchronization hosts: managed-node-01.example.com tasks: - name: Configuring NTP with NTS-enabled servers ansible.builtin.include_role: name: rhel-system-roles.timesync vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de nts: yes iburst: yes The settings specified in the example playbook include the following: iburst: yes Enables fast initial synchronization. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.timesync/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification If the managed node runs the chronyd service: Display the details about the time sources: For sources with NTS enabled, display information that is specific to authentication of NTP sources: Verify that the reported cookies in the Cook column is larger than 0. If the managed node runs the ntpd service, enter: Additional resources /usr/share/ansible/roles/rhel-system-roles.time_sync/README.md file /usr/share/doc/rhel-system-roles/time_sync/ directory Are the rhel.pool.ntp.org NTP servers supported by Red Hat? (Red Hat Knowledgebase)
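In addition to the source listings above, the overall synchronization state of a chronyd-based managed node can be spot-checked with an ad hoc command such as the following sketch; the exact output fields vary by chrony version: $ ansible managed-node-01.example.com -m command -a 'chronyc tracking'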
[ "--- - name: Managing time synchronization hosts: managed-node-01.example.com tasks: - name: Configuring NTP with an internal server (preferred) and a public server pool as fallback ansible.builtin.include_role: name: rhel-system-roles.timesync vars: timesync_ntp_servers: - hostname: time.example.com trusted: yes prefer: yes iburst: yes - hostname: 0.rhel.pool.ntp.org pool: yes iburst: yes", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'chronyc sources' MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^* time.example.com 1 10 377 210 +159us[ +55us] +/- 12ms ^? ntp.example.org 2 9 377 409 +1120us[+1021us] +/- 42ms ^? time.us.example.net 2 9 377 992 -329us[ -386us] +/- 15ms", "ansible managed-node-01.example.com -m command -a 'ntpq -p' remote refid st t when poll reach delay offset jitter ============================================================================== *time.example.com .PTB. 1 u 2 64 77 23.585 967.902 0.684 - ntp.example.or 192.0.2.17 2 u - 64 77 27.090 966.755 0.468 +time.us.example 198.51.100.19 2 u 65 64 37 18.497 968.463 1.588", "--- - name: Managing time synchronization hosts: managed-node-01.example.com tasks: - name: Configuring NTP with NTS-enabled servers ansible.builtin.include_role: name: rhel-system-roles.timesync vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de nts: yes iburst: yes", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'chronyc sources' MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^* ptbtime1.ptb.de 1 6 17 55 -13us[ -54us] +/- 12ms ^- ptbtime2.ptb.de 1 6 17 56 -257us[ -297us] +/- 12ms", "ansible managed-node-01.example.com -m command -a 'chronyc -N authdata' Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen ========================================================================= ptbtime1.ptb.de NTS 1 15 256 229 0 0 8 100 ptbtime2.ptb.de NTS 1 15 256 230 0 0 8 100", "ansible managed-node-01.example.com -m command -a 'ntpq -p' remote refid st t when poll reach delay offset jitter ============================================================================== *ptbtime1.ptb.de .PTB. 1 8 2 64 77 23.585 967.902 0.684 -ptbtime2.ptb.de .PTB. 1 8 30 64 78 24.653 993.937 0.765" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/configuring-time-synchronization-by-using-the-timesync-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
function::user_short
function::user_short Name function::user_short - Retrieves a short value stored in user space Synopsis Arguments addr the user space address to retrieve the short from Description Returns the short value from a given user space address. Returns zero when user space data is not accessible.
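As a minimal sketch of how this function might be used, the following one-liner prints the first two bytes of each write buffer of a traced command as a short value; it assumes the buf_uaddr convenience variable of the syscall.write probe is available in your tapset version, and that you run it as root or as a member of the stapusr and stapdev groups: $ stap -e 'probe syscall.write { if (pid() == target()) printf("first short of buf: %d\n", user_short(buf_uaddr)) }' -c 'echo hello'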
[ "user_short:long(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-short
Providing feedback on Red Hat Directory Server
Providing feedback on Red Hat Directory Server We appreciate your input on our documentation and products. Please let us know how we could make it better. To do so: For submitting feedback on the Red Hat Directory Server documentation through Jira (account required): Go to the Red Hat Issue Tracker . Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. For submitting feedback on the Red Hat Directory Server product through Jira (account required): Go to the Red Hat Issue Tracker . On the Create Issue page, click . Fill in the Summary field. Select the component in the Component field. Fill in the Description field including: The version number of the selected component. Steps to reproduce the problem or your suggestion for improvement. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_directory_attributes_and_values/proc_providing-feedback-on-red-hat-documentation_managing-directory-attributes-and-values
Chapter 4. Deprecated features
Chapter 4. Deprecated features The features deprecated in this release, and that were supported in releases of AMQ Streams, are outlined below. 4.1. Java 8 Support for Java 8 was deprecated in Kafka 3.0.0 and AMQ Streams 2.0. Java 8 will be unsupported for all AMQ Streams components, including clients, in the future. AMQ Streams supports Java 11. Use Java 11 when developing new applications. Plan to migrate any applications that currently use Java 8 to Java 11. 4.2. Kafka MirrorMaker 1 Kafka MirrorMaker replicates data between two or more active Kafka clusters, within or across data centers. Kafka MirrorMaker 1 is deprecated for Kafka 3.1.0 and will be removed in Kafka 4.0.0. MirrorMaker 2.0 will be the only version available. MirrorMaker 2.0 is based on the Kafka Connect framework, connectors managing the transfer of data between clusters. As a consequence, the AMQ Streams KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated. The KafkaMirrorMaker resource will be removed from AMQ Streams when Kafka 4.0.0 is adopted. If you are using MirrorMaker 1 (referred to as just MirrorMaker in the AMQ Streams documentation), use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . MirrorMaker 2.0 renames topics replicated to a target cluster. IdentityReplicationPolicy configuration overrides the automatic renaming. Use it to produce the same active/passive unidirectional replication as MirrorMaker 1. See Kafka MirrorMaker 2.0 cluster configuration . 4.3. Identity replication policy Identity replication policy is used with MirrorMaker 2 to override the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The AMQ Streams Identity Replication Policy class ( io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy ) is now deprecated and will be removed in the future. You can update to use Kafka's own Identity Replication Policy ( class org.apache.kafka.connect.mirror.IdentityReplicationPolicy ). See Kafka MirrorMaker 2.0 cluster configuration .
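As a sketch of switching to Kafka's own class, the replication policy is typically set through the replication.policy.class option in the connector configuration of a KafkaMirrorMaker2 resource; the cluster aliases below are illustrative:
# excerpt from a KafkaMirrorMaker2 custom resource
spec:
  mirrors:
    - sourceCluster: my-source-cluster
      targetCluster: my-target-cluster
      sourceConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
The same option is usually set in the checkpoint connector configuration as well, so that offset mappings use the same unrenamed topic names.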
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_rhel/deprecated-features-str
3.7. Configuring IP Networking from the Kernel Command line
3.7. Configuring IP Networking from the Kernel Command line When connecting to the root file system on an iSCSI target from an interface, the network settings are not configured on the installed system. To resolve this problem: Install the dracut utility. For information on using dracut , see the Red Hat Enterprise Linux System Administrator's Guide . Set the configuration using the ip option on the kernel command line: dhcp - DHCP configuration dhcp6 - DHCP IPv6 configuration auto6 - automatic IPv6 configuration on , any - any protocol available in the kernel (default) none , off - no autoconfiguration, static network configuration For example: Set the name server configuration: The dracut utility sets up a network connection and generates new ifcfg files that can be copied to the /etc/sysconfig/network-scripts/ directory.
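For instance, a static address and a name server can be combined on the kernel command line, and the grubby utility is one way to append them persistently; the addresses and host name below are examples only: # grubby --update-kernel=ALL --args="ip=192.168.180.120::192.168.180.1:255.255.255.0:myhost.example.com:enp1s0:none nameserver=192.168.180.1"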
[ "ip<client-IP-number>:[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:{dhcp|dhcp6|auto6|on|any|none|off}", "ip=192.168.180.120:192.168.180.100:192.168.180.1:255.255.255.0::enp1s0:off", "nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configuring_IP_Networking_from_the_Kernel_Command_line
A.3. Troubleshooting Firefox Kerberos Configuration
A.3. Troubleshooting Firefox Kerberos Configuration If Kerberos authentication is not working, turn on verbose logging for the authentication process. Close all instances of Firefox. In a command prompt, export values for the NSPR_LOG_* variables: Restart Firefox from that shell , and visit the website where Kerberos authentication is failing. Check the /tmp/moz.log file for error messages with nsNegotiateAuth in the message. There are several common errors that occur with Kerberos authentication. No credentials found This means that no Kerberos tickets are available (meaning that they expired or were not generated). To fix this, run kinit to generate the Kerberos ticket, and then open the website again. Server not found in Kerberos database This means that the browser is unable to contact the KDC. This is usually a Kerberos configuration problem. The correct entries must be in the [domain_realm] section of the /etc/krb5.conf file to identify the domain. For example: No errors are present in the log An HTTP proxy server could be stripping off the HTTP headers required for Kerberos authentication. Try to connect to the site using HTTPS, which allows the request to pass through unmodified.
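When the log reports that no credentials were found, the ticket can be obtained and verified before reloading the site; the principal below is an example: $ kinit user@EXAMPLE.COM $ klist The klist output should list a krbtgt ticket for the realm before you retry the website.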
[ "export NSPR_LOG_MODULES=negotiateauth:5 export NSPR_LOG_FILE=/tmp/moz.log", "-1208550944[90039d0]: entering nsNegotiateAuth::GetNextToken() -1208550944[90039d0]: gss_init_sec_context() failed: Miscellaneous failure No credentials cache found", "-1208994096[8d683d8]: entering nsAuthGSSAPI::GetNextToken() -1208994096[8d683d8]: gss_init_sec_context() failed: Miscellaneous failure Server not found in Kerberos database", ".example.com = EXAMPLE.COM example.com = EXAMPLE.COM" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/firefox-configuration-troubleshooting
Chapter 5. Machine phases and lifecycle
Chapter 5. Machine phases and lifecycle Machines move through a lifecycle that has several defined phases. Understanding the machine lifecycle and its phases can help you verify whether a procedure is complete or troubleshoot undesired behavior. In OpenShift Container Platform, the machine lifecycle is consistent across all supported cloud providers. 5.1. Machine phases As a machine moves through its lifecycle, it passes through different phases. Each phase is a basic representation of the state of the machine. Provisioning There is a request to provision a new machine. The machine does not yet exist and does not have an instance, a provider ID, or an address. Provisioned The machine exists and has a provider ID or an address. The cloud provider has created an instance for the machine. The machine has not yet become a node and the status.nodeRef section of the machine object is not yet populated. Running The machine exists and has a provider ID or address. Ignition has run successfully and the cluster machine approver has approved a certificate signing request (CSR). The machine has become a node and the status.nodeRef section of the machine object contains node details. Deleting There is a request to delete the machine. The machine object has a DeletionTimestamp field that indicates the time of the deletion request. Failed There is an unrecoverable problem with the machine. This can happen, for example, if the cloud provider deletes the instance for the machine. 5.2. The machine lifecycle The lifecycle begins with the request to provision a machine and continues until the machine no longer exists. The machine lifecycle proceeds in the following order. Interruptions due to errors or lifecycle hooks are not included in this overview. There is a request to provision a new machine for one of the following reasons: A cluster administrator scales a machine set such that it requires additional machines. An autoscaling policy scales machine set such that it requires additional machines. A machine that is managed by a machine set fails or is deleted and the machine set creates a replacement to maintain the required number of machines. The machine enters the Provisioning phase. The infrastructure provider creates an instance for the machine. The machine has a provider ID or address and enters the Provisioned phase. The Ignition configuration file is processed. The kubelet issues a certificate signing request (CSR). The cluster machine approver approves the CSR. The machine becomes a node and enters the Running phase. An existing machine is slated for deletion for one of the following reasons: A user with cluster-admin permissions uses the oc delete machine command. The machine gets a machine.openshift.io/delete-machine annotation. The machine set that manages the machine marks it for deletion to reduce the replica count as part of reconciliation. The cluster autoscaler identifies a node that is unnecessary to meet the deployment needs of the cluster. A machine health check is configured to replace an unhealthy machine. The machine enters the Deleting phase, in which it is marked for deletion but is still present in the API. The machine controller removes the instance from the infrastructure provider. The machine controller deletes the Node object. 5.3. Determining the phase of a machine You can find the phase of a machine by using the OpenShift CLI ( oc ) or by using the web console. You can use this information to verify whether a procedure is complete or to troubleshoot undesired behavior. 5.3.1. 
Determining the phase of a machine by using the CLI You can find the phase of a machine by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc CLI. Procedure List the machines on the cluster by running the following command: USD oc get machine -n openshift-machine-api Example output NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m The PHASE column of the output contains the phase of each machine. 5.3.2. Determining the phase of a machine by using the web console You can find the phase of a machine by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Log in to the web console as a user with the cluster-admin role. Navigate to Compute Machines . On the Machines page, select the name of the machine that you want to find the phase of. On the Machine details page, select the YAML tab. In the YAML block, find the value of the status.phase field. Example YAML snippet apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t # ... status: phase: Running 1 1 In this example, the phase is Running . 5.4. Additional resources Lifecycle hooks for the machine deletion phase
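If you only need the name-to-phase mapping, a JSONPath query is a compact alternative to the full table; this is a sketch, and the machine names in your cluster will differ: $ oc get machines.machine.openshift.io -n openshift-machine-api -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'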
[ "oc get machine -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_management/machine-phases-lifecycle
Chapter 1. About
Chapter 1. About 1.1. About OpenShift Virtualization Learn about OpenShift Virtualization's capabilities and support scope. 1.1.1. What you can do with OpenShift Virtualization OpenShift Virtualization provides the scalable, enterprise-grade virtualization functionality in Red Hat OpenShift. You can use it to manage virtual machines (VMs) exclusively or alongside container workloads. Note If you have a Red Hat OpenShift Virtualization Engine subscription, you can run unlimited VMs on subscribed hosts, but you cannot run application instances in containers. For more information, see the subscription guide section about Red Hat OpenShift Virtualization Engine and related products . OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include: Creating and managing Linux and Windows VMs Running pod and VM workloads alongside each other in a cluster Connecting to VMs through a variety of consoles and CLI tools Importing and cloning existing VMs Managing network interface controllers and storage disks attached to VMs Live migrating VMs between nodes You can manage your cluster and virtualization resources by using the Virtualization perspective of the OpenShift Container Platform web console, and by using the OpenShift CLI ( oc ). OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features. Important When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. You can use OpenShift Virtualization with OVN-Kubernetes or one of the other certified network plugins listed in Certified OpenShift CNI Plug-ins . You can check your OpenShift Virtualization cluster for compliance issues by installing the Compliance Operator and running a scan with the ocp4-moderate and ocp4-moderate-node profiles . The Compliance Operator uses OpenSCAP, a NIST-certified tool , to scan and enforce security policies. 1.1.2. Comparing OpenShift Virtualization to VMware vSphere If you are familiar with VMware vSphere, the following table lists OpenShift Virtualization components that you can use to accomplish similar tasks. However, because OpenShift Virtualization is conceptually different from vSphere, and much of its functionality comes from the underlying OpenShift Container Platform, OpenShift Virtualization does not have direct alternatives for all vSphere concepts or components. Table 1.1. Mapping of vSphere concepts to their closest OpenShift Virtualization counterparts vSphere concept OpenShift Virtualization Explanation Datastore Persistent volume (PV) + Persistent volume claim (PVC) Stores VM disks. A PV represents existing storage and is attached to a VM through a PVC. When created with the ReadWriteMany (RWX) access mode, PVCs can be mounted by multiple VMs simultaneously. Dynamic Resource Scheduling (DRS) Pod eviction policy + Descheduler Provides active resource balancing. A combination of pod eviction policies and a descheduler allows VMs to be live migrated to more appropriate nodes to keep node resource utilization manageable. NSX Multus + OVN-Kubernetes + Third-party container network interface (CNI) plug-ins Provides an overlay network configuration. 
There is no direct equivalent for NSX in OpenShift Virtualization, but you can use the OVN-Kubernetes network provider or install certified third-party CNI plug-ins. Storage Policy Based Management (SPBM) Storage class Provides policy-based storage selection. Storage classes represent various storage types and describe storage capabilities, such as quality of service, backup policy, reclaim policy, and whether volume expansion is allowed. A PVC can request a specific storage class to satisfy application requirements. vCenter vRealize Operations OpenShift Metrics and Monitoring Provides host and VM metrics. You can view metrics and monitor the overall health of the cluster and VMs by using the OpenShift Container Platform web console. vMotion Live migration Moves a running VM to another node without interruption. For live migration to be available, the PVC attached to the VM must have the ReadWriteMany (RWX) access mode. vSwitch DvSwitch NMState Operator + Multus Provides a physical network configuration. You can use the NMState Operator to apply state-driven network configuration and manage various network interface types, including Linux bridges and network bonds. With Multus, you can attach multiple network interfaces and connect VMs to external networks. 1.1.3. Supported cluster versions for OpenShift Virtualization OpenShift Virtualization 4.18 is supported for use on OpenShift Container Platform 4.18 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. 1.1.4. About volume and access modes for virtual machine disks If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: ReadWriteMany (RWX) access mode is required for live migration. The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes. Important You cannot live migrate virtual machines with the following configurations: Storage volume with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots. 1.1.5. Single-node OpenShift differences You can install OpenShift Virtualization on single-node OpenShift. However, you should be aware that Single-node OpenShift does not support the following features: High availability Pod disruption Live migration Virtual machines or templates that have an eviction strategy configured 1.1.6. Additional resources Glossary of common terms for OpenShift Container Platform storage About single-node OpenShift Assisted installer Pod disruption budgets About live migration Eviction strategies Tuning & Scaling Guide Supported limits for OpenShift Virtualization 4.x 1.2. Security policies Learn about OpenShift Virtualization security and authorization. 
Key points OpenShift Virtualization adheres to the restricted Kubernetes pod security standards profile, which aims to enforce the current best practices for pod security. Virtual machine (VM) workloads run as unprivileged pods. Security context constraints (SCCs) are defined for the kubevirt-controller service account. TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. 1.2.1. About workload security By default, virtual machine (VM) workloads do not run with root privileges in OpenShift Virtualization, and there are no supported OpenShift Virtualization features that require root privileges. For each VM, a virt-launcher pod runs an instance of libvirt in session mode to manage the VM process. In session mode, the libvirt daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege. 1.2.2. TLS certificates TLS certificates for OpenShift Virtualization components are renewed and rotated automatically. You are not required to refresh them manually. Automatic renewal schedules TLS certificates are automatically deleted and replaced according to the following schedule: KubeVirt certificates are renewed daily. Containerized Data Importer controller (CDI) certificates are renewed every 15 days. MAC pool certificates are renewed every year. Automatic TLS certificate rotation does not disrupt any operations. For example, the following operations continue to function without any disruption: Migrations Image uploads VNC and console connections 1.2.3. Authorization OpenShift Virtualization uses role-based access control (RBAC) to define permissions for human users and service accounts. The permissions defined for service accounts control the actions that OpenShift Virtualization components can perform. You can also use RBAC roles to manage user access to virtualization features. For example, an administrator can create an RBAC role that provides the permissions required to launch a virtual machine. The administrator can then restrict access by binding the role to specific users. 1.2.3.1. Default cluster roles for OpenShift Virtualization By using cluster role aggregation, OpenShift Virtualization extends the default OpenShift Container Platform cluster roles to include permissions for accessing virtualization objects. Table 1.2. OpenShift Virtualization cluster roles Default cluster role OpenShift Virtualization cluster role OpenShift Virtualization cluster role description view kubevirt.io:view A user that can view all OpenShift Virtualization resources in the cluster but cannot create, delete, modify, or access them. For example, the user can see that a virtual machine (VM) is running but cannot shut it down or gain access to its console. edit kubevirt.io:edit A user that can modify all OpenShift Virtualization resources in the cluster. For example, the user can create VMs, access VM consoles, and delete VMs. admin kubevirt.io:admin A user that has full permissions to all OpenShift Virtualization resources, including the ability to delete collections of resources. The user can also view and modify the OpenShift Virtualization runtime configuration, which is located in the HyperConverged custom resource in the openshift-cnv namespace. 1.2.3.2. 
RBAC roles for storage features in OpenShift Virtualization The following permissions are granted to the Containerized Data Importer (CDI), including the cdi-operator and cdi-controller service accounts. 1.2.3.2.1. Cluster-wide RBAC roles Table 1.3. Aggregated cluster roles for the cdi.kubevirt.io API group CDI cluster role Resources Verbs cdi.kubevirt.io:admin datavolumes , uploadtokenrequests * (all) datavolumes/source create cdi.kubevirt.io:edit datavolumes , uploadtokenrequests * datavolumes/source create cdi.kubevirt.io:view cdiconfigs , dataimportcrons , datasources , datavolumes , objecttransfers , storageprofiles , volumeimportsources , volumeuploadsources , volumeclonesources get , list , watch datavolumes/source create cdi.kubevirt.io:config-reader cdiconfigs , storageprofiles get , list , watch Table 1.4. Cluster-wide roles for the cdi-operator service account API group Resources Verbs rbac.authorization.k8s.io clusterrolebindings , clusterroles get , list , watch , create , update , delete security.openshift.io securitycontextconstraints get , list , watch , update , create apiextensions.k8s.io customresourcedefinitions , customresourcedefinitions/status get , list , watch , create , update , delete cdi.kubevirt.io * * upload.cdi.kubevirt.io * * admissionregistration.k8s.io validatingwebhookconfigurations , mutatingwebhookconfigurations create , list , watch admissionregistration.k8s.io validatingwebhookconfigurations Allow list: cdi-api-dataimportcron-validate, cdi-api-populator-validate, cdi-api-datavolume-validate, cdi-api-validate, objecttransfer-api-validate get , update , delete admissionregistration.k8s.io mutatingwebhookconfigurations Allow list: cdi-api-datavolume-mutate get , update , delete apiregistration.k8s.io apiservices get , list , watch , create , update , delete Table 1.5. Cluster-wide roles for the cdi-controller service account API group Resources Verbs "" (core) events create , patch "" (core) persistentvolumeclaims get , list , watch , create , update , delete , deletecollection , patch "" (core) persistentvolumes get , list , watch , update "" (core) persistentvolumeclaims/finalizers , pods/finalizers update "" (core) pods , services get , list , watch , create , delete "" (core) configmaps get , create storage.k8s.io storageclasses , csidrivers get , list , watch config.openshift.io proxies get , list , watch cdi.kubevirt.io * * snapshot.storage.k8s.io volumesnapshots , volumesnapshotclasses , volumesnapshotcontents get , list , watch , create , delete snapshot.storage.k8s.io volumesnapshots update , deletecollection apiextensions.k8s.io customresourcedefinitions get , list , watch scheduling.k8s.io priorityclasses get , list , watch image.openshift.io imagestreams get , list , watch "" (core) secrets create kubevirt.io virtualmachines/finalizers update 1.2.3.2.2. Namespaced RBAC roles Table 1.6. 
Namespaced roles for the cdi-operator service account API group Resources Verbs rbac.authorization.k8s.io rolebindings , roles get , list , watch , create , update , delete "" (core) serviceaccounts , configmaps , events , secrets , services get , list , watch , create , update , patch , delete apps deployments , deployments/finalizers get , list , watch , create , update , delete route.openshift.io routes , routes/custom-host get , list , watch , create , update config.openshift.io proxies get , list , watch monitoring.coreos.com servicemonitors , prometheusrules get , list , watch , create , delete , update , patch coordination.k8s.io leases get , create , update Table 1.7. Namespaced roles for the cdi-controller service account API group Resources Verbs "" (core) configmaps get , list , watch , create , update , delete "" (core) secrets get , list , watch batch cronjobs get , list , watch , create , update , delete batch jobs create , delete , list , watch coordination.k8s.io leases get , create , update networking.k8s.io ingresses get , list , watch route.openshift.io routes get , list , watch 1.2.3.3. Additional SCCs and permissions for the kubevirt-controller service account Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. The virt-controller is a cluster controller that creates the virt-launcher pods for virtual machines in the cluster. These pods are granted permissions by the kubevirt-controller service account. The kubevirt-controller service account is granted additional SCCs and Linux capabilities so that it can create virt-launcher pods with the appropriate permissions. These extended permissions allow virtual machines to use OpenShift Virtualization features that are beyond the scope of typical pods. The kubevirt-controller service account is granted the following SCCs: scc.AllowHostDirVolumePlugin = true This allows virtual machines to use the hostpath volume plugin. scc.AllowPrivilegedContainer = false This ensures the virt-launcher pod is not run as a privileged container. scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"} SYS_NICE allows setting the CPU affinity. NET_BIND_SERVICE allows DHCP and Slirp operations. Viewing the SCC and RBAC definitions for the kubevirt-controller You can view the SecurityContextConstraints definition for the kubevirt-controller by using the oc tool: USD oc get scc kubevirt-controller -o yaml You can view the RBAC definition for the kubevirt-controller clusterrole by using the oc tool: USD oc get clusterrole kubevirt-controller -o yaml 1.2.4. Additional resources Managing security context constraints Using RBAC to define and apply permissions Creating a cluster role Cluster role binding commands Enabling user permissions to clone data volumes across namespaces 1.3. OpenShift Virtualization Architecture The Operator Lifecycle Manager (OLM) deploys operator pods for each component of OpenShift Virtualization: Compute: virt-operator Storage: cdi-operator Network: cluster-network-addons-operator Scaling: ssp-operator OLM also deploys the hyperconverged-cluster-operator pod, which is responsible for the deployment, configuration, and life cycle of other components, and several helper pods: hco-webhook , and hyperconverged-cluster-cli-download . 
After all operator pods are successfully deployed, you should create the HyperConverged custom resource (CR). The configurations set in the HyperConverged CR serve as the single source of truth and the entrypoint for OpenShift Virtualization, and guide the behavior of the CRs. The HyperConverged CR creates corresponding CRs for the operators of all other components within its reconciliation loop. Each operator then creates resources such as daemon sets, config maps, and additional components for the OpenShift Virtualization control plane. For example, when the HyperConverged Operator (HCO) creates the KubeVirt CR, the OpenShift Virtualization Operator reconciles it and creates additional resources such as virt-controller , virt-handler , and virt-api . The OLM deploys the Hostpath Provisioner (HPP) Operator, but it is not functional until you create a hostpath-provisioner CR. Virtctl client commands 1.3.1. About the HyperConverged Operator (HCO) The HCO, hco-operator , provides a single entry point for deploying and managing OpenShift Virtualization and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators. Table 1.8. HyperConverged Operator components Component Description deployment/hco-webhook Validates the HyperConverged custom resource contents. deployment/hyperconverged-cluster-cli-download Provides the virtctl tool binaries to the cluster so that you can download them directly from the cluster. KubeVirt/kubevirt-kubevirt-hyperconverged Contains all operators, CRs, and objects needed by OpenShift Virtualization. SSP/ssp-kubevirt-hyperconverged A Scheduling, Scale, and Performance (SSP) CR. This is automatically created by the HCO. CDI/cdi-kubevirt-hyperconverged A Containerized Data Importer (CDI) CR. This is automatically created by the HCO. NetworkAddonsConfig/cluster A CR that instructs and is managed by the cluster-network-addons-operator . 1.3.2. About the Containerized Data Importer (CDI) Operator The CDI Operator, cdi-operator , manages CDI and its related resources, which imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume. Table 1.9. CDI Operator components Component Description deployment/cdi-apiserver Manages the authorization to upload VM disks into PVCs by issuing secure upload tokens. deployment/cdi-uploadproxy Directs external disk upload traffic to the appropriate upload server pod so that it can be written to the correct PVC. Requires a valid upload token. pod/cdi-importer Helper pod that imports a virtual machine image into a PVC when creating a data volume. 1.3.3. About the Cluster Network Addons Operator The Cluster Network Addons Operator, cluster-network-addons-operator , deploys networking components on a cluster and manages the related resources for extended network functionality. Table 1.10. Cluster Network Addons Operator components Component Description deployment/kubemacpool-cert-manager Manages TLS certificates of Kubemacpool's webhooks. deployment/kubemacpool-mac-controller-manager Provides a MAC address pooling service for virtual machine (VM) network interface cards (NICs). daemonset/bridge-marker Marks network bridges available on nodes as node resources. daemonset/kube-cni-linux-bridge-plugin Installs Container Network Interface (CNI) plugins on cluster nodes, enabling the attachment of VMs to Linux bridges through network attachment definitions. 1.3.4. 
About the Hostpath Provisioner (HPP) Operator The HPP Operator, hostpath-provisioner-operator , deploys and manages the multi-node HPP and related resources. Table 1.11. HPP Operator components Component Description deployment/hpp-pool-hpp-csi-pvc-block-<worker_node_name> Provides a worker for each node where the HPP is designated to run. The pods mount the specified backing storage on the node. daemonset/hostpath-provisioner-csi Implements the Container Storage Interface (CSI) driver interface of the HPP. daemonset/hostpath-provisioner Implements the legacy driver interface of the HPP. 1.3.5. About the Scheduling, Scale, and Performance (SSP) Operator The SSP Operator, ssp-operator , deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator. 1.3.6. About the OpenShift Virtualization Operator The OpenShift Virtualization Operator, virt-operator , deploys, upgrades, and manages OpenShift Virtualization without disrupting current virtual machine (VM) workloads. In addition, the OpenShift Virtualization Operator deploys the common instance types and common preferences. Table 1.12. virt-operator components Component Description deployment/virt-api HTTP API server that serves as the entry point for all virtualization-related flows. deployment/virt-controller Observes the creation of a new VM instance object and creates a corresponding pod. When the pod is scheduled on a node, virt-controller updates the VM with the node name. daemonset/virt-handler Monitors any changes to a VM and instructs virt-launcher to perform the required operations. This component is node-specific. pod/virt-launcher Contains the VM that was created by the user as implemented by libvirt and qemu .
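The HyperConverged CR described earlier in this section is what ties these operators together. The following is a minimal sketch only, assuming the default openshift-cnv namespace and the kubevirt-hyperconverged name implied by Table 1.8; leaving spec empty accepts the opinionated defaults that the HCO provides:

apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}

Applying a CR like this, for example with oc apply -f , triggers the HCO reconciliation loop that creates the KubeVirt, SSP, CDI, and NetworkAddonsConfig CRs listed earlier.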
[ "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/virtualization/about
Chapter 29. Adding columns to guided decision tables
Chapter 29. Adding columns to guided decision tables After you have created the guided decision table, you can define and add various types of columns within the guided decision tables designer. Prerequisites Any data objects that will be used for column parameters, such as Facts and Fields, have been created within the same package where the guided decision table is found, or have been imported from another package in Data Objects → New item of the guided decision tables designer. For descriptions of these column parameters, see the "Required column parameters" segments for each column type in Chapter 30, Types of columns in guided decision tables . For details about creating data objects, see Section 26.1, "Creating data objects" . Procedure In the guided decision tables designer, click Columns → Insert Column . Click Include advanced options to view the full list of column options. Figure 29.1. Add columns Select the column type that you want to add, click Next , and follow the steps in the wizard to specify the data required to add the column. For descriptions of each column type and required parameters for setup, see Chapter 30, Types of columns in guided decision tables . Click Finish to add the configured column. After all columns are added, you can begin adding rows of rules correlating to your columns to complete the decision table. For details, see Chapter 34, Adding rows and defining rules in guided decision tables . The following is an example decision table for a loan application decision service: Figure 29.2. Example of complete guided decision table
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/guided-decision-tables-columns-create-proc
2.6. Using NetworkManager with Network Scripts
2.6. Using NetworkManager with Network Scripts This section describes how to run a script and how to use custom commands in network scripts. The term network scripts refers to the script /etc/init.d/network and any other installed scripts it calls. Although NetworkManager provides the default networking service, scripts and NetworkManager can run in parallel and work together. Red Hat recommends testing them first. Running Network Script Run a network script only with the systemctl command: systemctl start|stop|restart|status network The systemctl utility clears any existing environment variables and ensures correct execution. In Red Hat Enterprise Linux 7, NetworkManager is started first, and /etc/init.d/network checks with NetworkManager to avoid tampering with NetworkManager 's connections. NetworkManager is intended to be the primary application using sysconfig configuration files, and /etc/init.d/network is intended to be secondary. The /etc/init.d/network script runs: manually - using one of the systemctl commands start|stop|restart network , or on boot and shutdown if the network service is enabled - as a result of the systemctl enable network command. It is a manual process and does not react to events that happen after boot. Users can also call the ifup and ifdown scripts manually. Note The systemctl reload network.service command does not work due to technical limitations of initscripts. To apply a new configuration for the network service, use the restart command: This brings down and brings up all the Network Interface Cards (NICs) to load the new configuration. For more information, see the Red Hat Knowledgebase solution Reload and force-reload options for network service . Using Custom Commands in Network Scripts Custom commands in the /sbin/ifup-local , ifdown-pre-local , and ifdown-local scripts are only executed if these devices are controlled by the /etc/init.d/network service. The ifup-local file does not exist by default. If required, create it under the /sbin/ directory. The ifup-local script is readable only by the initscripts and not by NetworkManager . To run a custom script using NetworkManager , create it under the dispatcher.d/ directory. See the section called "Running Dispatcher scripts" . Important Modifying any files included with the initscripts package or related rpms is not recommended. If a user modifies such files, Red Hat does not provide support. Custom tasks can run when network connections go up and down, both with the old network scripts and with NetworkManager . If NetworkManager is enabled, the ifup and ifdown scripts will ask NetworkManager whether NetworkManager manages the interface in question, which is found from the " DEVICE= " line in the ifcfg file. Devices managed by NetworkManager : calling ifup When you call ifup and the device is managed by NetworkManager , there are two options: If the device is not already connected, then ifup asks NetworkManager to start the connection. If the device is already connected, then there is nothing to do. calling ifdown When you call ifdown and the device is managed by NetworkManager : ifdown asks NetworkManager to terminate the connection. Devices unmanaged by NetworkManager : If you call either ifup or ifdown , the script starts the connection using the older, non-NetworkManager mechanism that it has used since the time before NetworkManager existed. Running Dispatcher scripts NetworkManager provides a way to run additional custom scripts to start or stop services based on the connection status.
By default, the /etc/NetworkManager/dispatcher.d/ directory exists and NetworkManager runs scripts there, in alphabetical order. Each script must be an executable file owned by root and must have write permission only for the file owner. For more information about running NetworkManager dispatcher scripts, see the Red Hat Knowledgebase solution How to write a NetworkManager dispatcher script to apply ethtool commands .
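As a sketch of what such a dispatcher script can look like, the following example restarts a service whenever a particular interface comes up. The interface name ( eth0 ), the service ( httpd ), and the file name are illustrative assumptions only:

#!/bin/bash
# /etc/NetworkManager/dispatcher.d/30-restart-httpd (example name)
# NetworkManager passes the interface name and the action as arguments.
IFACE="$1"
ACTION="$2"

if [ "$IFACE" = "eth0" ] && [ "$ACTION" = "up" ]; then
    logger "dispatcher: $IFACE is up, restarting httpd"
    systemctl restart httpd.service
fi

The script must meet the requirements described above: make it executable, owned by root, and writable only by its owner, for example with chmod 755 and chown root:root on the file.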
[ "~]# systemctl restart network.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Using_NetworkManager_with_Network_Scripts
Chapter 10. Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates
Chapter 10. Installing a cluster on user-provisioned infrastructure in GCP by using Deployment Manager templates In OpenShift Container Platform version 4.15, you can install a cluster on Google Cloud Platform (GCP) that uses infrastructure that you provide. The steps for performing a user-provided infrastructure install are outlined here. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several Deployment Manager templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 10.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . Note Be sure to also review this site list if you are configuring a proxy. 10.2. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 10.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 10.4. 
Configuring your GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 10.4.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 10.4.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 10.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 10.2. Optional API services API service Console service name Cloud Deployment Manager V2 API deploymentmanager.googleapis.com Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 10.4.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 10.4.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 10.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Networking Global 11 1 Forwarding rules Compute Global 2 0 Health checks Compute Global 2 0 Images Compute Global 1 0 Networks Networking Global 1 0 Routers Networking Global 1 0 Routes Networking Global 2 0 Subnetworks Compute Global 2 0 Target pools Networking Global 2 0 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 10.4.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. 
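If you prefer the command line to the GCP console, the following gcloud sketch creates a service account, grants it the Owner role discussed above, and generates a JSON key. The account name, key file name, and <project_id> placeholder are illustrative assumptions only:

$ gcloud iam service-accounts create ocp-installer --display-name="OpenShift installer"
$ gcloud projects add-iam-policy-binding <project_id> \
    --member="serviceAccount:ocp-installer@<project_id>.iam.gserviceaccount.com" \
    --role="roles/owner"
$ gcloud iam service-accounts keys create installer-key.json \
    --iam-account="ocp-installer@<project_id>.iam.gserviceaccount.com"

If the Owner role is too broad for your organization, grant the individual roles listed in the "Required GCP roles" section instead, or attach the service account to a GCP virtual machine rather than creating a key.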
Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 10.4.6. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin Required roles for user-provisioned GCP infrastructure Deployment Manager Editor The following roles are applied to the service accounts that the control plane and compute machines use: Table 10.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 10.4.7. Required GCP permissions for user-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the user-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 10.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 10.2. Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 10.3. 
Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list dns.resourceRecordSets.update Example 10.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 10.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 10.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 10.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 10.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 10.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 10.10. Required IAM permissions for installation iam.roles.get Example 10.11. Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 10.12. Required Images permissions for installation compute.images.create compute.images.delete compute.images.get compute.images.list Example 10.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 10.14. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 10.15. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 10.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 10.17. 
Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 10.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 10.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 10.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 10.21. Required Images permissions for deletion compute.images.delete compute.images.list Example 10.22. Required permissions to get Region related information compute.regions.get Example 10.23. Required Deployment Manager permissions deploymentmanager.deployments.create deploymentmanager.deployments.delete deploymentmanager.deployments.get deploymentmanager.deployments.list deploymentmanager.manifests.get deploymentmanager.operations.get deploymentmanager.resources.list Additional resources Optimizing storage 10.4.8. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 10.4.9. Installing and configuring CLI tools for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must install and configure the CLI tools for GCP. Prerequisites You created a project to host your cluster. You created a service account and granted it the required permissions. Procedure Install the following binaries in USDPATH : gcloud gsutil See Install the latest Cloud SDK version in the GCP documentation. Authenticate using the gcloud tool with your configured service account. See Authorizing with a service account in the GCP documentation. 10.5. 
Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 10.5.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 10.5. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 10.5.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 10.6. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). 
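One way to verify that a Google Cloud machine type satisfies these vCPU and memory minimums is to inspect it with gcloud before you choose it; the machine type and zone below are examples only:

$ gcloud compute machine-types describe n2-standard-4 --zone=us-central1-a

Compare the guestCpus and memoryMb fields in the output against the control plane and compute minimums in Table 10.6.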
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 10.5.3. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 10.24. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 10.5.4. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 10.25. Machine series for 64-bit ARM machines Tau T2A 10.5.5. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . 10.6. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 10.6.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. 
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 10.6.2. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on GCP". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 10.6.3. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 10.6.4. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . 
You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 10.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 10.6.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. 
Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Optional: If you do not want the cluster to provision compute machines, remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Important If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install. Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Optional: Adding the ingress DNS records 10.7. Exporting common variables 10.7.1. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Google Cloud Platform (GCP). The infrastructure name is also used to locate the appropriate GCP resources during an OpenShift Container Platform installation. The provided Deployment Manager templates contain references to this infrastructure name, so you must extract it. Prerequisites You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 10.7.2. Exporting common variables for Deployment Manager templates You must export a common set of variables that are used with the provided Deployment Manager templates used to assist in completing a user-provided infrastructure install on Google Cloud Platform (GCP). Note Specific Deployment Manager templates can also require additional exported variables, which are detailed in their related procedures. Procedure Export the following common variables to be used by the provided Deployment Manager templates: USD export BASE_DOMAIN='<base_domain>' USD export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' USD export NETWORK_CIDR='10.0.0.0/16' USD export MASTER_SUBNET_CIDR='10.0.0.0/17' USD export WORKER_SUBNET_CIDR='10.0.128.0/17' USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 USD export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` USD export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` USD export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` USD export REGION=`jq -r .gcp.region <installation_directory>/metadata.json` 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 10.8. Creating a VPC in GCP You must create a VPC in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements. One way to create the VPC is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the VPC section of this topic and save it as 01_vpc.py on your computer. This template describes the VPC that your cluster requires. Create a 01_vpc.yaml resource definition file: USD cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 master_subnet_cidr is the CIDR for the master subnet, for example 10.0.0.0/17 . 4 worker_subnet_cidr is the CIDR for the worker subnet, for example 10.0.128.0/17 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml 10.8.1. Deployment Manager template for the VPC You can use the following Deployment Manager template to deploy the VPC that you need for your OpenShift Container Platform cluster: Example 10.26. 
01_vpc.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources} 10.9. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 10.9.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 10.9.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 10.7. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 10.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 10.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 10.10. Creating load balancers in GCP You must configure load balancers in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for the internal load balancer section of this topic and save it as 02_lb_int.py on your computer. This template describes the internal load balancing objects that your cluster requires. For an external cluster, also copy the template from the Deployment Manager template for the external load balancer section of this topic and save it as 02_lb_ext.py on your computer. This template describes the external load balancing objects that your cluster requires. Export the variables that the deployment template uses: Export the cluster network location: USD export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`) Export the control plane subnet location: USD export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the three zones that the cluster uses: USD export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d "/" -f9`) USD export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d "/" -f9`) USD export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d "/" -f9`) Create a 02_infra.yaml resource definition file: USD cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF 1 2 Required only when deploying an external cluster. 3 infra_id is the INFRA_ID infrastructure name from the extraction step. 
4 region is the region to deploy the cluster into, for example us-central1 . 5 control_subnet is the URI to the control subnet. 6 zones are the zones to deploy the control plane instances into, like us-east1-b , us-east1-c , and us-east1-d . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml Export the cluster IP address: USD export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`) For an external cluster, also export the cluster public IP address: USD export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`) 10.10.1. Deployment Manager template for the external load balancer You can use the following Deployment Manager template to deploy the external load balancer that you need for your OpenShift Container Platform cluster: Example 10.27. 02_lb_ext.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources} 10.10.2. Deployment Manager template for the internal load balancer You can use the following Deployment Manager template to deploy the internal load balancer that you need for your OpenShift Container Platform cluster: Example 10.28. 02_lb_int.py Deployment Manager template def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' + context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': "HTTPS" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' 
+ context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources} You will need this template in addition to the 02_lb_ext.py template when you create an external cluster. 10.11. Creating a private DNS zone in GCP You must configure a private DNS zone in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create this component is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for the private DNS section of this topic and save it as 02_dns.py on your computer. This template describes the private DNS objects that your cluster requires. Create a 02_dns.yaml resource definition file: USD cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 cluster_domain is the domain for the cluster, for example openshift.example.com . 3 cluster_network is the selfLink URL to the cluster network. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml The templates do not create DNS entries due to limitations of Deployment Manager, so you must create them manually: Add the internal DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the external DNS entries: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} 10.11.1. Deployment Manager template for the private DNS You can use the following Deployment Manager template to deploy the private DNS that you need for your OpenShift Container Platform cluster: Example 10.29. 02_dns.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources} 10.12. Creating firewall rules in GCP You must create firewall rules in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Procedure Copy the template from the Deployment Manager template for firewall rules section of this topic and save it as 03_firewall.py on your computer. This template describes the security groups that your cluster requires. Create a 03_firewall.yaml resource definition file: USD cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF 1 allowed_external_cidr is the CIDR range that can access the cluster API and SSH to the bootstrap host. For an internal cluster, set this value to USD{NETWORK_CIDR} . 2 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 cluster_network is the selfLink URL to the cluster network. 4 network_cidr is the CIDR of the VPC network, for example 10.0.0.0/16 . Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml 10.12.1. Deployment Manager template for firewall rules You can use the following Deployment Manager template to deploy the firewall rules that you need for your OpenShift Container Platform cluster: Example 10.30.
03_firewall.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources} 10.13. Creating IAM roles in GCP You must create IAM roles in Google Cloud Platform (GCP) for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Deployment Manager template. 
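Before you deploy any of these templates, you can preview the resources that a template generates by calling its GenerateConfig function locally with a stub context. The following is a minimal, hypothetical sketch and is not part of the official procedure; it assumes the template file, in this case 03_iam.py, is saved in the current directory and reuses the example infrastructure name shown earlier in this document:

# preview_template.py -- hypothetical helper, not part of the official procedure.
# Loads a Deployment Manager template and prints the resources that its
# GenerateConfig function would return for a given set of properties.
import importlib.util
import json

class StubContext:
    """Mimics the context object that Deployment Manager passes to GenerateConfig."""
    def __init__(self, properties, name='cluster-iam'):
        self.properties = properties
        self.env = {'name': name}  # only some templates, such as 06_worker.py, read context.env['name']

spec = importlib.util.spec_from_file_location('iam_template', '03_iam.py')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

config = module.GenerateConfig(StubContext({'infra_id': 'openshift-vw9j6'}))
print(json.dumps(config, indent=2))

Running the sketch prints the two service account resources that the IAM template defines, which can help you confirm naming before you create the deployment.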
Note If you do not use the provided Deployment Manager template to create your GCP infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You have defined the variables in the Exporting common variables section. Procedure Copy the template from the Deployment Manager template for IAM roles section of this topic and save it as 03_iam.py on your computer. This template describes the IAM roles that your cluster requires. Create a 03_iam.yaml resource definition file: USD cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml Export the variable for the master service account: USD export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the worker service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the variable for the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) The templates do not create the policy bindings due to limitations of Deployment Manager, so you must create them manually: USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.instanceAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.networkAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/compute.securityAdmin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/iam.serviceAccountUser" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{MASTER_SERVICE_ACCOUNT}" --role "roles/storage.admin" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/compute.viewer" USD gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member "serviceAccount:USD{WORKER_SERVICE_ACCOUNT}" --role "roles/storage.admin" Create a service account key and store it locally for later use: USD gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT} 10.13.1. Deployment Manager template for IAM roles You can use the following Deployment Manager template to deploy the IAM roles that you need for your OpenShift Container Platform cluster: Example 10.31. 
03_iam.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources} 10.14. Creating the RHCOS cluster image for the GCP infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Google Cloud Platform (GCP) for your OpenShift Container Platform nodes. Procedure Obtain the RHCOS image from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The file name contains the OpenShift Container Platform version number in the format rhcos-<version>-<arch>-gcp.<arch>.tar.gz . Create the Google storage bucket: USD gsutil mb gs://<bucket_name> Upload the RHCOS image to the Google storage bucket: USD gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name> Export the uploaded RHCOS image location as a variable: USD export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz Create the cluster image: USD gcloud compute images create "USD{INFRA_ID}-rhcos-image" \ --source-uri="USD{IMAGE_SOURCE}" 10.15. Creating the bootstrap machine in GCP You must create the bootstrap machine in Google Cloud Platform (GCP) to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Ensure you installed pyOpenSSL. Procedure Copy the template from the Deployment Manager template for the bootstrap machine section of this topic and save it as 04_bootstrap.py on your computer. This template describes the bootstrap machine that your cluster requires. Export the location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that the installation program requires: USD export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`) Create a bucket and upload the bootstrap.ign file: USD gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition USD gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/ Create a signed URL for the bootstrap instance to use to access the Ignition config. 
Export the URL from the output as a variable: USD export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | grep "^gs:" | awk '{print USD5}'` Create a 04_bootstrap.yaml resource definition file: USD cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 region is the region to deploy the cluster into, for example us-central1 . 3 zone is the zone to deploy the bootstrap instance into, for example us-central1-b . 4 cluster_network is the selfLink URL to the cluster network. 5 control_subnet is the selfLink URL to the control subnet. 6 image is the selfLink URL to the RHCOS image. 7 machine_type is the machine type of the instance, for example n1-standard-4 . 8 root_volume_size is the boot disk size for the bootstrap machine. 9 bootstrap_ign is the URL output when creating a signed URL. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the bootstrap machine manually. Add the bootstrap instance to the internal load balancer instance group: USD gcloud compute instance-groups unmanaged add-instances \ USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap Add the bootstrap instance group to the internal load balancer backend service: USD gcloud compute backend-services add-backend \ USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} 10.15.1. Deployment Manager template for the bootstrap machine You can use the following Deployment Manager template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 10.32. 04_bootstrap.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{"ignition":{"config":{"replace":{"source":"' + context.properties['bootstrap_ign'] + '"}},"version":"3.2.0"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' 
+ context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources} 10.16. Creating the control plane machines in GCP You must create the control plane machines in Google Cloud Platform (GCP) for your cluster to use. One way to create these machines is to modify the provided Deployment Manager template. Note If you do not use the provided Deployment Manager template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , Creating IAM roles in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Procedure Copy the template from the Deployment Manager template for control plane machines section of this topic and save it as 05_control_plane.py on your computer. This template describes the control plane machines that your cluster requires. Export the following variable required by the resource definition: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign` Create a 05_control_plane.yaml resource definition file: USD cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF 1 infra_id is the INFRA_ID infrastructure name from the extraction step. 2 zones are the zones to deploy the control plane instances into, for example us-central1-a , us-central1-b , and us-central1-c . 3 control_subnet is the selfLink URL to the control subnet. 4 image is the selfLink URL to the RHCOS image. 5 machine_type is the machine type of the instance, for example n1-standard-4 . 6 service_account_email is the email address for the master service account that you created. 7 ignition is the contents of the master.ign file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml The templates do not manage load balancer membership due to limitations of Deployment Manager, so you must add the control plane machines manually. 
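If you prefer to script these membership additions rather than typing them individually, the following is a minimal sketch, not part of the official procedure, that issues the same gcloud calls as the commands in the next step. It assumes the INFRA_ID, ZONE_0, ZONE_1, and ZONE_2 variables from the earlier steps are still exported:

# add_masters_to_instance_groups.py -- hypothetical helper; wraps the same
# 'gcloud compute instance-groups unmanaged add-instances' commands shown below.
import os
import subprocess

infra_id = os.environ['INFRA_ID']
zones = [os.environ['ZONE_0'], os.environ['ZONE_1'], os.environ['ZONE_2']]

for index, zone in enumerate(zones):
    # Control plane machine <infra_id>-master-<index> joins the instance group
    # for its zone, named <infra_id>-master-<zone>-ig.
    subprocess.run([
        'gcloud', 'compute', 'instance-groups', 'unmanaged', 'add-instances',
        f'{infra_id}-master-{zone}-ig',
        f'--zone={zone}',
        f'--instances={infra_id}-master-{index}',
    ], check=True)

For an external cluster, the target pool additions shown after these commands can be scripted in the same way.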
Run the following commands to add the control plane machines to the appropriate instance groups: USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1 USD gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2 For an external cluster, you must also run the following commands to add the control plane machines to the target pools: USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_0}" --instances=USD{INFRA_ID}-master-0 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_1}" --instances=USD{INFRA_ID}-master-1 USD gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone="USD{ZONE_2}" --instances=USD{INFRA_ID}-master-2 10.16.1. Deployment Manager template for control plane machines You can use the following Deployment Manager template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 10.33. 05_control_plane.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 
'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources} 10.17. Wait for bootstrap completion and remove bootstrap resources in GCP After you create all of the required infrastructure in Google Cloud Platform (GCP), wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Ensure you defined the variables in the Exporting common variables and Creating load balancers in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0} USD gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign USD gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition USD gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap 10.18. Creating additional worker machines in GCP You can create worker machines in Google Cloud Platform (GCP) for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. In this example, you manually launch one instance by using the Deployment Manager template. Additional instances can be launched by including additional resources of type 06_worker.py in the file. Note If you do not use the provided Deployment Manager template to create your worker machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Ensure you defined the variables in the Exporting common variables , Creating load balancers in GCP , and Creating the bootstrap machine in GCP sections. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the Deployment Manager template for worker machines section of this topic and save it as 06_worker.py on your computer. This template describes the worker machines that your cluster requires. Export the variables that the resource definition uses. 
Export the subnet that hosts the compute machines: USD export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`) Export the email address for your service account: USD export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter "email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}." --format json | jq -r '.[0].email'`) Export the location of the compute machine Ignition config file: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign` Create a 06_worker.yaml resource definition file: USD cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF 1 name is the name of the worker machine, for example worker-0 . 2 9 infra_id is the INFRA_ID infrastructure name from the extraction step. 3 10 zone is the zone to deploy the worker machine into, for example us-central1-a . 4 11 compute_subnet is the selfLink URL to the compute subnet. 5 12 image is the selfLink URL to the RHCOS image. 1 6 13 machine_type is the machine type of the instance, for example n1-standard-4 . 7 14 service_account_email is the email address for the worker service account that you created. 8 15 ignition is the contents of the worker.ign file. Optional: If you want to launch additional instances, include additional resources of type 06_worker.py in your 06_worker.yaml resource definition file. Create the deployment by using the gcloud CLI: USD gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736 10.18.1. Deployment Manager template for worker machines You can use the following Deployment Manager template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 10.34. 
06_worker.py Deployment Manager template def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources} 10.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 10.20. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You installed the oc CLI. Ensure the bootstrap process completed successfully. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 10.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.22. Optional: Adding the ingress DNS records If you removed the DNS zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites Ensure you defined the variables in the Exporting common variables section. Remove the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs. Ensure the bootstrap process completed successfully. Procedure Wait for the Ingress router to create a load balancer and populate the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 Add the A record to your zones: To use A records: Export the variable for the router IP address: USD export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add the A record to the private zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 300 --type A --zone USD{INFRA_ID}-private-zone USD gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone For an external cluster, also add the A record to the public zones: USD if [ -f transaction.yaml ]; then rm transaction.yaml; fi USD gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction add USD{ROUTER_IP} --name \*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} USD gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME} To add explicit domains instead of using a wildcard, create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com 10.23. Completing a GCP installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Google Cloud Platform (GCP) user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Ensure the bootstrap process completed successfully. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Observe the running state of your cluster. 
Run the following command to view the current cluster version and status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete Run the following command to view the Operators managed on the control plane by the Cluster Version Operator (CVO): USD oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m Run the following command to view your cluster pods: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m ... openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m When the current cluster version is AVAILABLE , the installation is complete. 10.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 10.25. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Configure Global Access for an Ingress Controller on GCP .
[ "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "export BASE_DOMAIN='<base_domain>' export BASE_DOMAIN_ZONE_NAME='<base_domain_zone_name>' export NETWORK_CIDR='10.0.0.0/16' export MASTER_SUBNET_CIDR='10.0.0.0/17' export WORKER_SUBNET_CIDR='10.0.128.0/17' export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 export CLUSTER_NAME=`jq -r .clusterName <installation_directory>/metadata.json` export INFRA_ID=`jq -r .infraID <installation_directory>/metadata.json` export PROJECT_NAME=`jq -r .gcp.projectID <installation_directory>/metadata.json` export REGION=`jq -r .gcp.region <installation_directory>/metadata.json`", "cat <<EOF >01_vpc.yaml imports: - path: 01_vpc.py resources: - name: cluster-vpc type: 01_vpc.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 master_subnet_cidr: 'USD{MASTER_SUBNET_CIDR}' 3 worker_subnet_cidr: 'USD{WORKER_SUBNET_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-vpc --config 01_vpc.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-network', 'type': 'compute.v1.network', 'properties': { 'region': context.properties['region'], 'autoCreateSubnetworks': False } }, { 'name': context.properties['infra_id'] + '-master-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['master_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-worker-subnet', 'type': 'compute.v1.subnetwork', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'ipCidrRange': context.properties['worker_subnet_cidr'] } }, { 'name': context.properties['infra_id'] + '-router', 'type': 'compute.v1.router', 'properties': { 'region': context.properties['region'], 'network': 'USD(ref.' + context.properties['infra_id'] + '-network.selfLink)', 'nats': [{ 'name': context.properties['infra_id'] + '-nat-master', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 7168, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' + context.properties['infra_id'] + '-master-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }, { 'name': context.properties['infra_id'] + '-nat-worker', 'natIpAllocateOption': 'AUTO_ONLY', 'minPortsPerVm': 512, 'sourceSubnetworkIpRangesToNat': 'LIST_OF_SUBNETWORKS', 'subnetworks': [{ 'name': 'USD(ref.' 
+ context.properties['infra_id'] + '-worker-subnet.selfLink)', 'sourceIpRangesToNat': ['ALL_IP_RANGES'] }] }] } }] return {'resources': resources}", "export CLUSTER_NETWORK=(`gcloud compute networks describe USD{INFRA_ID}-network --format json | jq -r .selfLink`)", "export CONTROL_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-master-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export ZONE_0=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[0] | cut -d \"/\" -f9`)", "export ZONE_1=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[1] | cut -d \"/\" -f9`)", "export ZONE_2=(`gcloud compute regions describe USD{REGION} --format=json | jq -r .zones[2] | cut -d \"/\" -f9`)", "cat <<EOF >02_infra.yaml imports: - path: 02_lb_ext.py - path: 02_lb_int.py 1 resources: - name: cluster-lb-ext 2 type: 02_lb_ext.py properties: infra_id: 'USD{INFRA_ID}' 3 region: 'USD{REGION}' 4 - name: cluster-lb-int type: 02_lb_int.py properties: cluster_network: 'USD{CLUSTER_NETWORK}' control_subnet: 'USD{CONTROL_SUBNET}' 5 infra_id: 'USD{INFRA_ID}' region: 'USD{REGION}' zones: 6 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-infra --config 02_infra.yaml", "export CLUSTER_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-ip --region=USD{REGION} --format json | jq -r .address`)", "export CLUSTER_PUBLIC_IP=(`gcloud compute addresses describe USD{INFRA_ID}-cluster-public-ip --region=USD{REGION} --format json | jq -r .address`)", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-cluster-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-http-health-check', 'type': 'compute.v1.httpHealthCheck', 'properties': { 'port': 6080, 'requestPath': '/readyz' } }, { 'name': context.properties['infra_id'] + '-api-target-pool', 'type': 'compute.v1.targetPool', 'properties': { 'region': context.properties['region'], 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-http-health-check.selfLink)'], 'instances': [] } }, { 'name': context.properties['infra_id'] + '-api-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'region': context.properties['region'], 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-public-ip.selfLink)', 'target': 'USD(ref.' + context.properties['infra_id'] + '-api-target-pool.selfLink)', 'portRange': '6443' } }] return {'resources': resources}", "def GenerateConfig(context): backends = [] for zone in context.properties['zones']: backends.append({ 'group': 'USD(ref.' 
+ context.properties['infra_id'] + '-master-' + zone + '-ig' + '.selfLink)' }) resources = [{ 'name': context.properties['infra_id'] + '-cluster-ip', 'type': 'compute.v1.address', 'properties': { 'addressType': 'INTERNAL', 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }, { # Refer to docs/dev/kube-apiserver-health-check.md on how to correctly setup health check probe for kube-apiserver 'name': context.properties['infra_id'] + '-api-internal-health-check', 'type': 'compute.v1.healthCheck', 'properties': { 'httpsHealthCheck': { 'port': 6443, 'requestPath': '/readyz' }, 'type': \"HTTPS\" } }, { 'name': context.properties['infra_id'] + '-api-internal-backend-service', 'type': 'compute.v1.regionBackendService', 'properties': { 'backends': backends, 'healthChecks': ['USD(ref.' + context.properties['infra_id'] + '-api-internal-health-check.selfLink)'], 'loadBalancingScheme': 'INTERNAL', 'region': context.properties['region'], 'protocol': 'TCP', 'timeoutSec': 120 } }, { 'name': context.properties['infra_id'] + '-api-internal-forwarding-rule', 'type': 'compute.v1.forwardingRule', 'properties': { 'backendService': 'USD(ref.' + context.properties['infra_id'] + '-api-internal-backend-service.selfLink)', 'IPAddress': 'USD(ref.' + context.properties['infra_id'] + '-cluster-ip.selfLink)', 'loadBalancingScheme': 'INTERNAL', 'ports': ['6443','22623'], 'region': context.properties['region'], 'subnetwork': context.properties['control_subnet'] } }] for zone in context.properties['zones']: resources.append({ 'name': context.properties['infra_id'] + '-master-' + zone + '-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': zone } }) return {'resources': resources}", "cat <<EOF >02_dns.yaml imports: - path: 02_dns.py resources: - name: cluster-dns type: 02_dns.py properties: infra_id: 'USD{INFRA_ID}' 1 cluster_domain: 'USD{CLUSTER_NAME}.USD{BASE_DOMAIN}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-dns --config 02_dns.yaml", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{CLUSTER_IP} --name api-int.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 60 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{CLUSTER_PUBLIC_IP} --name api.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. 
--ttl 60 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-private-zone', 'type': 'dns.v1.managedZone', 'properties': { 'description': '', 'dnsName': context.properties['cluster_domain'] + '.', 'visibility': 'private', 'privateVisibilityConfig': { 'networks': [{ 'networkUrl': context.properties['cluster_network'] }] } } }] return {'resources': resources}", "cat <<EOF >03_firewall.yaml imports: - path: 03_firewall.py resources: - name: cluster-firewall type: 03_firewall.py properties: allowed_external_cidr: '0.0.0.0/0' 1 infra_id: 'USD{INFRA_ID}' 2 cluster_network: 'USD{CLUSTER_NETWORK}' 3 network_cidr: 'USD{NETWORK_CIDR}' 4 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-firewall --config 03_firewall.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-in-ssh', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-bootstrap'] } }, { 'name': context.properties['infra_id'] + '-api', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6443'] }], 'sourceRanges': [context.properties['allowed_external_cidr']], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-health-checks', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['6080', '6443', '22624'] }], 'sourceRanges': ['35.191.0.0/16', '130.211.0.0/22', '209.85.152.0/22', '209.85.204.0/22'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-etcd', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['2379-2380'] }], 'sourceTags': [context.properties['infra_id'] + '-master'], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-control-plane', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'tcp', 'ports': ['10257'] },{ 'IPProtocol': 'tcp', 'ports': ['10259'] },{ 'IPProtocol': 'tcp', 'ports': ['22623'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [context.properties['infra_id'] + '-master'] } }, { 'name': context.properties['infra_id'] + '-internal-network', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'icmp' },{ 'IPProtocol': 'tcp', 'ports': ['22'] }], 'sourceRanges': [context.properties['network_cidr']], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }, { 'name': context.properties['infra_id'] + '-internal-cluster', 'type': 'compute.v1.firewall', 'properties': { 'network': context.properties['cluster_network'], 'allowed': [{ 'IPProtocol': 'udp', 'ports': ['4789', '6081'] },{ 'IPProtocol': 'udp', 'ports': ['500', '4500'] },{ 'IPProtocol': 'esp', },{ 'IPProtocol': 'tcp', 'ports': 
['9000-9999'] },{ 'IPProtocol': 'udp', 'ports': ['9000-9999'] },{ 'IPProtocol': 'tcp', 'ports': ['10250'] },{ 'IPProtocol': 'tcp', 'ports': ['30000-32767'] },{ 'IPProtocol': 'udp', 'ports': ['30000-32767'] }], 'sourceTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ], 'targetTags': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-worker' ] } }] return {'resources': resources}", "cat <<EOF >03_iam.yaml imports: - path: 03_iam.py resources: - name: cluster-iam type: 03_iam.py properties: infra_id: 'USD{INFRA_ID}' 1 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-iam --config 03_iam.yaml", "export MASTER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-m@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export COMPUTE_SUBNET=(`gcloud compute networks subnets describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.instanceAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.networkAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/compute.securityAdmin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/iam.serviceAccountUser\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{MASTER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/compute.viewer\" gcloud projects add-iam-policy-binding USD{PROJECT_NAME} --member \"serviceAccount:USD{WORKER_SERVICE_ACCOUNT}\" --role \"roles/storage.admin\"", "gcloud iam service-accounts keys create service-account-key.json --iam-account=USD{MASTER_SERVICE_ACCOUNT}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-m', 'displayName': context.properties['infra_id'] + '-master-node' } }, { 'name': context.properties['infra_id'] + '-worker-node-sa', 'type': 'iam.v1.serviceAccount', 'properties': { 'accountId': context.properties['infra_id'] + '-w', 'displayName': context.properties['infra_id'] + '-worker-node' } }] return {'resources': resources}", "gsutil mb gs://<bucket_name>", "gsutil cp <downloaded_image_file_path>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz gs://<bucket_name>", "export IMAGE_SOURCE=gs://<bucket_name>/rhcos-<version>-x86_64-gcp.x86_64.tar.gz", "gcloud compute images create \"USD{INFRA_ID}-rhcos-image\" --source-uri=\"USD{IMAGE_SOURCE}\"", "export CLUSTER_IMAGE=(`gcloud compute images describe USD{INFRA_ID}-rhcos-image --format json | jq -r .selfLink`)", "gsutil mb gs://USD{INFRA_ID}-bootstrap-ignition", "gsutil cp <installation_directory>/bootstrap.ign gs://USD{INFRA_ID}-bootstrap-ignition/", "export BOOTSTRAP_IGN=`gsutil signurl -d 1h service-account-key.json gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign | 
grep \"^gs:\" | awk '{print USD5}'`", "cat <<EOF >04_bootstrap.yaml imports: - path: 04_bootstrap.py resources: - name: cluster-bootstrap type: 04_bootstrap.py properties: infra_id: 'USD{INFRA_ID}' 1 region: 'USD{REGION}' 2 zone: 'USD{ZONE_0}' 3 cluster_network: 'USD{CLUSTER_NETWORK}' 4 control_subnet: 'USD{CONTROL_SUBNET}' 5 image: 'USD{CLUSTER_IMAGE}' 6 machine_type: 'n1-standard-4' 7 root_volume_size: '128' 8 bootstrap_ign: 'USD{BOOTSTRAP_IGN}' 9 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-bootstrap --config 04_bootstrap.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-bootstrap-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-bootstrap", "gcloud compute backend-services add-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-bootstrap-public-ip', 'type': 'compute.v1.address', 'properties': { 'region': context.properties['region'] } }, { 'name': context.properties['infra_id'] + '-bootstrap', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"' + context.properties['bootstrap_ign'] + '\"}},\"version\":\"3.2.0\"}}', }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'], 'accessConfigs': [{ 'natIP': 'USD(ref.' + context.properties['infra_id'] + '-bootstrap-public-ip.address)' }] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', context.properties['infra_id'] + '-bootstrap' ] }, 'zone': context.properties['zone'] } }, { 'name': context.properties['infra_id'] + '-bootstrap-ig', 'type': 'compute.v1.instanceGroup', 'properties': { 'namedPorts': [ { 'name': 'ignition', 'port': 22623 }, { 'name': 'https', 'port': 6443 } ], 'network': context.properties['cluster_network'], 'zone': context.properties['zone'] } }] return {'resources': resources}", "export MASTER_IGNITION=`cat <installation_directory>/master.ign`", "cat <<EOF >05_control_plane.yaml imports: - path: 05_control_plane.py resources: - name: cluster-control-plane type: 05_control_plane.py properties: infra_id: 'USD{INFRA_ID}' 1 zones: 2 - 'USD{ZONE_0}' - 'USD{ZONE_1}' - 'USD{ZONE_2}' control_subnet: 'USD{CONTROL_SUBNET}' 3 image: 'USD{CLUSTER_IMAGE}' 4 machine_type: 'n1-standard-4' 5 root_volume_size: '128' service_account_email: 'USD{MASTER_SERVICE_ACCOUNT}' 6 ignition: 'USD{MASTER_IGNITION}' 7 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-control-plane --config 05_control_plane.yaml", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_0}-ig --zone=USD{ZONE_0} --instances=USD{INFRA_ID}-master-0", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_1}-ig --zone=USD{ZONE_1} --instances=USD{INFRA_ID}-master-1", "gcloud compute instance-groups unmanaged add-instances USD{INFRA_ID}-master-USD{ZONE_2}-ig --zone=USD{ZONE_2} --instances=USD{INFRA_ID}-master-2", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_0}\" --instances=USD{INFRA_ID}-master-0", "gcloud compute 
target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_1}\" --instances=USD{INFRA_ID}-master-1", "gcloud compute target-pools add-instances USD{INFRA_ID}-api-target-pool --instances-zone=\"USD{ZONE_2}\" --instances=USD{INFRA_ID}-master-2", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-master-0', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][0] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][0] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][0] } }, { 'name': context.properties['infra_id'] + '-master-1', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][1] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][1] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][1] } }, { 'name': context.properties['infra_id'] + '-master-2', 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'diskType': 'zones/' + context.properties['zones'][2] + '/diskTypes/pd-ssd', 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zones'][2] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['control_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-master', ] }, 'zone': context.properties['zones'][2] } }] return {'resources': resources}", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "gcloud compute backend-services remove-backend USD{INFRA_ID}-api-internal --region=USD{REGION} --instance-group=USD{INFRA_ID}-bootstrap-ig --instance-group-zone=USD{ZONE_0}", "gsutil rm gs://USD{INFRA_ID}-bootstrap-ignition/bootstrap.ign", "gsutil rb gs://USD{INFRA_ID}-bootstrap-ignition", "gcloud deployment-manager deployments delete USD{INFRA_ID}-bootstrap", "export COMPUTE_SUBNET=(`gcloud compute networks subnets 
describe USD{INFRA_ID}-worker-subnet --region=USD{REGION} --format json | jq -r .selfLink`)", "export WORKER_SERVICE_ACCOUNT=(`gcloud iam service-accounts list --filter \"email~^USD{INFRA_ID}-w@USD{PROJECT_NAME}.\" --format json | jq -r '.[0].email'`)", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign`", "cat <<EOF >06_worker.yaml imports: - path: 06_worker.py resources: - name: 'worker-0' 1 type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 2 zone: 'USD{ZONE_0}' 3 compute_subnet: 'USD{COMPUTE_SUBNET}' 4 image: 'USD{CLUSTER_IMAGE}' 5 machine_type: 'n1-standard-4' 6 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 7 ignition: 'USD{WORKER_IGNITION}' 8 - name: 'worker-1' type: 06_worker.py properties: infra_id: 'USD{INFRA_ID}' 9 zone: 'USD{ZONE_1}' 10 compute_subnet: 'USD{COMPUTE_SUBNET}' 11 image: 'USD{CLUSTER_IMAGE}' 12 machine_type: 'n1-standard-4' 13 root_volume_size: '128' service_account_email: 'USD{WORKER_SERVICE_ACCOUNT}' 14 ignition: 'USD{WORKER_IGNITION}' 15 EOF", "gcloud deployment-manager deployments create USD{INFRA_ID}-worker --config 06_worker.yaml", "def GenerateConfig(context): resources = [{ 'name': context.properties['infra_id'] + '-' + context.env['name'], 'type': 'compute.v1.instance', 'properties': { 'disks': [{ 'autoDelete': True, 'boot': True, 'initializeParams': { 'diskSizeGb': context.properties['root_volume_size'], 'sourceImage': context.properties['image'] } }], 'machineType': 'zones/' + context.properties['zone'] + '/machineTypes/' + context.properties['machine_type'], 'metadata': { 'items': [{ 'key': 'user-data', 'value': context.properties['ignition'] }] }, 'networkInterfaces': [{ 'subnetwork': context.properties['compute_subnet'] }], 'serviceAccounts': [{ 'email': context.properties['service_account_email'], 'scopes': ['https://www.googleapis.com/auth/cloud-platform'] }], 'tags': { 'items': [ context.properties['infra_id'] + '-worker', ] }, 'zone': context.properties['zone'] } }] return {'resources': resources}", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default 
LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98", "export ROUTER_IP=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{INFRA_ID}-private-zone gcloud dns record-sets transaction execute --zone USD{INFRA_ID}-private-zone", "if [ -f transaction.yaml ]; then rm transaction.yaml; fi gcloud dns record-sets transaction start --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction add USD{ROUTER_IP} --name \\*.apps.USD{CLUSTER_NAME}.USD{BASE_DOMAIN}. --ttl 300 --type A --zone USD{BASE_DOMAIN_ZONE_NAME} gcloud dns record-sets transaction execute --zone USD{BASE_DOMAIN_ZONE_NAME}", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.your.cluster.domain.example.com console-openshift-console.apps.your.cluster.domain.example.com downloads-openshift-console.apps.your.cluster.domain.example.com alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 24m Working towards 4.5.4: 99% complete", "oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.5.4 True False False 7m56s cloud-credential 4.5.4 True False False 31m cluster-autoscaler 4.5.4 True False False 16m console 4.5.4 True False False 10m csi-snapshot-controller 4.5.4 True False False 16m dns 4.5.4 True False False 22m etcd 4.5.4 False False False 25s image-registry 4.5.4 True False False 16m ingress 4.5.4 True False False 16m insights 4.5.4 True False False 17m kube-apiserver 4.5.4 True False False 19m kube-controller-manager 4.5.4 True False False 20m kube-scheduler 4.5.4 True False False 20m kube-storage-version-migrator 4.5.4 True False False 16m machine-api 4.5.4 True False False 22m machine-config 4.5.4 True False False 22m marketplace 4.5.4 True False False 16m monitoring 4.5.4 True False False 10m network 4.5.4 True False False 23m node-tuning 4.5.4 True False False 23m openshift-apiserver 4.5.4 True False False 17m openshift-controller-manager 4.5.4 True False False 15m openshift-samples 4.5.4 True False False 16m operator-lifecycle-manager 4.5.4 True False False 22m operator-lifecycle-manager-catalog 4.5.4 True False False 22m operator-lifecycle-manager-packageserver 4.5.4 True False False 18m service-ca 4.5.4 True False False 23m service-catalog-apiserver 4.5.4 True False False 23m service-catalog-controller-manager 4.5.4 True False False 23m storage 4.5.4 True False False 17m", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-member-ip-10-0-3-111.us-east-2.compute.internal 1/1 Running 0 35m kube-system etcd-member-ip-10-0-3-239.us-east-2.compute.internal 1/1 Running 0 37m kube-system etcd-member-ip-10-0-3-24.us-east-2.compute.internal 1/1 Running 0 35m openshift-apiserver-operator openshift-apiserver-operator-6d6674f4f4-h7t2t 1/1 Running 1 37m openshift-apiserver apiserver-fm48r 1/1 Running 0 30m 
openshift-apiserver apiserver-fxkvv 1/1 Running 0 29m openshift-apiserver apiserver-q85nm 1/1 Running 0 29m openshift-service-ca-operator openshift-service-ca-operator-66ff6dc6cd-9r257 1/1 Running 0 37m openshift-service-ca apiservice-cabundle-injector-695b6bcbc-cl5hm 1/1 Running 0 35m openshift-service-ca configmap-cabundle-injector-8498544d7-25qn6 1/1 Running 0 35m openshift-service-ca service-serving-cert-signer-6445fc9c6-wqdqn 1/1 Running 0 35m openshift-service-catalog-apiserver-operator openshift-service-catalog-apiserver-operator-549f44668b-b5q2w 1/1 Running 0 32m openshift-service-catalog-controller-manager-operator openshift-service-catalog-controller-manager-operator-b78cr2lnm 1/1 Running 0 31m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/installing-gcp-user-infra
Appendix B. Using your subscription
Appendix B. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. B.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. B.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. B.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. B.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions Red Hat Enterprise Linux 9 - Registering the system and managing subscriptions
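For reference, registration from the terminal typically uses subscription-manager. The exact command shown by the Registration Assistant depends on your OS version, so treat the following as a hedged sketch with placeholder values rather than the authoritative procedure; the repository ID for your AMQ product is an assumption you must look up.

subscription-manager register --username <your_red_hat_username>
# Optionally attach a subscription automatically (not needed with Simple Content Access)
subscription-manager attach --auto
subscription-manager repos --enable=<amq_repo_id>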
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/using_your_subscription
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/red_hat_cloud_access_reference_guide/making-open-source-more-inclusive
Chapter 12. Configuring endpoints
Chapter 12. Configuring endpoints Learn how to configure endpoints for Red Hat Advanced Cluster Security for Kubernetes (RHACS) by using a YAML configuration file. You can use a YAML configuration file to configure exposed endpoints. You can use this configuration file to define one or more endpoints for Red Hat Advanced Cluster Security for Kubernetes and customize the TLS settings for each endpoint, or disable the TLS for specific endpoints. You can also define if client authentication is required, and which client certificates to accept. 12.1. Custom YAML configuration Red Hat Advanced Cluster Security for Kubernetes uses the YAML configuration as a ConfigMap , making configurations easier to change and manage. When you use the custom YAML configuration file, you can configure the following for each endpoint: The protocols to use, such as HTTP , gRPC , or both. Enable or disable TLS. Specify server certificates. Client Certificate Authorities (CA) to trust for client authentication. Specify if client certificate authentication ( mTLS ) is required. You can use the configuration file to specify endpoints either during the installation or on an existing instance of Red Hat Advanced Cluster Security for Kubernetes. However, if you expose any additional ports other than the default port 8443 , you must create network policies that allow traffic on those additional ports. The following is a sample endpoints.yaml configuration file for Red Hat Advanced Cluster Security for Kubernetes: # Sample endpoints.yaml configuration for Central. # # # CAREFUL: If the following line is uncommented, do not expose the default endpoint on port 8443 by default. # # This will break normal operation. # disableDefault: true # if true, do not serve on :8443 1 endpoints: 2 # Serve plaintext HTTP only on port 8080 - listen: ":8080" 3 # Backend protocols, possible values are 'http' and 'grpc'. If unset or empty, assume both. protocols: 4 - http tls: 5 # Disable TLS. If this is not specified, assume TLS is enabled. disable: true 6 # Serve HTTP and gRPC for sensors only on port 8444 - listen: ":8444" 7 tls: 8 # Which TLS certificates to serve, possible values are 'service' (For service certificates that Red&#160;Hat Advanced Cluster Security for Kubernetes generates) # and 'default' (user-configured default TLS certificate). If unset or empty, assume both. serverCerts: 9 - default - service # Client authentication settings. clientAuth: 10 # Enforce TLS client authentication. If unset, do not enforce, only request certificates # opportunistically. required: true 11 # Which TLS client CAs to serve, possible values are 'service' (CA for service # certificates that Red&#160;Hat Advanced Cluster Security for Kubernetes generates) and 'user' (CAs for PKI auth providers). If unset or empty, assume both. certAuthorities: 12 # if not set, assume ["user", "service"] - service 1 Use true to disable exposure on the default port number 8443 . The default value is false ; changing it to true might break existing functionality. 2 A list of additional endpoints for exposing Central. 3 7 The address and port number on which to listen. You must specify this value if you are using endpoints . You can use the format port , :port , or address:port to specify values. For example, 8080 or :8080 - listen on port 8080 on all interfaces. 0.0.0.0:8080 - listen on port 8080 on all IPv4 (not IPv6) interfaces. 127.0.0.1:8080 - listen on port 8080 on the local loopback device only. 4 Protocols to use for the specified endpoint. 
Acceptable values are http and grpc . If you do not specify a value, Central listens to both HTTP and gRPC traffic on the specified port. If you want to expose an endpoint exclusively for the RHACS portal, use http . However, you will not be able to use the endpoint for service-to-service communication or for the roxctl CLI, because these clients require both gRPC and HTTP. Red Hat recommends that you do not specify a value of this key, to enable both HTTP and gRPC protocols for the endpoint. If you want to restrict an endpoint to Red Hat Advanced Cluster Security for Kubernetes services only, use the clientAuth option. 5 8 Use it to specify the TLS settings for the endpoint. If you do not specify a value, Red Hat Advanced Cluster Security for Kubernetes enables TLS with the default settings for all the following nested keys. 6 Use true to disable TLS on the specified endpoint. The default value is false . When you set it to true , you cannot specify values for serverCerts and clientAuth . 9 Specify a list of sources from which to configure server TLS certificates. The serverCerts list is order-dependent, which means that the first item in the list determines the certificate that Central uses by default, when there is no matching SNI (Server Name Indication). You can use this to specify multiple certificates and Central automatically selects the right certificate based on SNI. Acceptable values are: default : use the already configured custom TLS certificate if it exists. service : use the internal service certificate that Red Hat Advanced Cluster Security for Kubernetes generates. 10 Use it to configure the behavior of the TLS-enabled endpoint's client certificate authentication. 11 Use true to only allow clients with a valid client certificate. The default value is false . You can use true in conjunction with the certAuthorities setting of service to only allow Red Hat Advanced Cluster Security for Kubernetes services to connect to this endpoint. 12 A list of CAs to verify client certificates. The default value is ["service", "user"] . The certAuthorities list is order-independent, which means that the position of the items in this list does not matter. Also, setting it to an empty list [] disables client certificate authentication for the endpoint, which is different from leaving this value unset. Acceptable values are: service : CA for service certificates that Red Hat Advanced Cluster Security for Kubernetes generates. user : CAs configured by PKI authentication providers. 12.2. Configuring endpoints during a new installation When you install Red Hat Advanced Cluster Security for Kubernetes by using the roxctl CLI, it creates a folder named central-bundle , which contains the necessary YAML manifests and scripts to deploy Central. Procedure After you generate the central-bundle , open the ./central-bundle/central/02-endpoints-config.yaml file. In this file, add your custom YAML configuration under the data: section of the key endpoints.yaml . Make sure that you maintain a 4-space indentation for the YAML configuration. Continue the installation instructions as usual. Red Hat Advanced Cluster Security for Kubernetes uses the specified configuration. Note If you expose any additional ports other than the default port 8443 , you must create network policies that allow traffic on those additional ports. 12.3. Configuring endpoints for an existing instance You can configure endpoints for an existing instance of Red Hat Advanced Cluster Security for Kubernetes.
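As an illustration of the options described above, the following sketch shows the kind of endpoints.yaml content you might use for an endpoint reserved for RHACS services: it serves only the internal service certificate and requires client certificates signed by the service CA. The port number and the scratch file name are placeholders; in practice this YAML goes under the endpoints.yaml key of the configuration described in the following sections.

cat <<EOF > endpoints.yaml
endpoints:
  - listen: ":8444"
    tls:
      serverCerts:
        - service
      clientAuth:
        required: true
        certAuthorities:
          - service
EOF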
Procedure Download the existing config map: USD oc -n stackrox get cm/central-endpoints -o go-template='{{index .data "endpoints.yaml"}}' > <directory_path>/central_endpoints.yaml In the downloaded central_endpoints.yaml file, specify your custom YAML configuration. Upload and apply the modified central_endpoints.yaml configuration file: USD oc -n stackrox create cm central-endpoints --from-file=endpoints.yaml=<directory-path>/central-endpoints.yaml -o yaml --dry-run | \ oc label -f - --local -o yaml app.kubernetes.io/name=stackrox | \ oc apply -f - Restart Central. Note If you expose any additional ports other than the default port 8443 , you must create network policies that allow traffic on those additional ports. 12.3.1. Restarting the Central container You can restart the Central container by killing the Central container or by deleting the Central pod. Procedure Run the following command to kill the Central container: Note You must wait for at least 1 minute, until OpenShift Container Platform propagates your changes and restarts the Central container. USD oc -n stackrox exec deploy/central -c central -- kill 1 Or, run the following command to delete the Central pod: USD oc -n stackrox delete pod -lapp=central 12.4. Enabling traffic flow through custom ports If you are exposing a port to another service running in the same cluster or to an ingress controller, you must only allow traffic from the services in your cluster or from the proxy of the ingress controller. Otherwise, if you are exposing a port by using a load balancer service, you might want to allow traffic from all sources, including external sources. Use the procedure listed in this section to allow traffic from all sources. Procedure Clone the allow-ext-to-central Kubernetes network policy: USD oc -n stackrox get networkpolicy.networking.k8s.io/allow-ext-to-central -o yaml > <directory_path>/allow-ext-to-central-custom-port.yaml Use it as a reference to create your network policy, and in that policy, specify the port number you want to expose. Make sure to change the name of your network policy in the metadata section of the YAML file, so that it does not interfere with the built-in allow-ext-to-central policy.
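Because the built-in allow-ext-to-central policy only covers port 8443, a cloned policy for a custom port might look like the following sketch. The port number 8444 and the app=central selector mirror the examples used in this chapter, not values taken from your cluster; compare the YAML you cloned in the previous step and keep its selector and labels rather than trusting this illustration.

cat <<EOF > allow-ext-to-central-custom-port.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ext-to-central-custom-port
  namespace: stackrox
spec:
  podSelector:
    matchLabels:
      app: central
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 8444
          protocol: TCP
EOF
oc -n stackrox apply -f allow-ext-to-central-custom-port.yaml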
[ "Sample endpoints.yaml configuration for Central. # # CAREFUL: If the following line is uncommented, do not expose the default endpoint on port 8443 by default. # This will break normal operation. disableDefault: true # if true, do not serve on :8443 1 endpoints: 2 # Serve plaintext HTTP only on port 8080 - listen: \":8080\" 3 # Backend protocols, possible values are 'http' and 'grpc'. If unset or empty, assume both. protocols: 4 - http tls: 5 # Disable TLS. If this is not specified, assume TLS is enabled. disable: true 6 # Serve HTTP and gRPC for sensors only on port 8444 - listen: \":8444\" 7 tls: 8 # Which TLS certificates to serve, possible values are 'service' (For service certificates that Red&#160;Hat Advanced Cluster Security for Kubernetes generates) # and 'default' (user-configured default TLS certificate). If unset or empty, assume both. serverCerts: 9 - default - service # Client authentication settings. clientAuth: 10 # Enforce TLS client authentication. If unset, do not enforce, only request certificates # opportunistically. required: true 11 # Which TLS client CAs to serve, possible values are 'service' (CA for service # certificates that Red&#160;Hat Advanced Cluster Security for Kubernetes generates) and 'user' (CAs for PKI auth providers). If unset or empty, assume both. certAuthorities: 12 # if not set, assume [\"user\", \"service\"] - service", "oc -n stackrox get cm/central-endpoints -o go-template='{{index .data \"endpoints.yaml\"}}' > <directory_path>/central_endpoints.yaml", "oc -n stackrox create cm central-endpoints --from-file=endpoints.yaml=<directory-path>/central-endpoints.yaml -o yaml --dry-run | label -f - --local -o yaml app.kubernetes.io/name=stackrox | apply -f -", "oc -n stackrox exec deploy/central -c central -- kill 1", "oc -n stackrox delete pod -lapp=central", "oc -n stackrox get networkpolicy.networking.k8s.io/allow-ext-to-central -o yaml > <directory_path>/allow-ext-to-central-custom-port.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/configure-endpoints
probe::tty.write
probe::tty.write Name probe::tty.write - write to the tty line Synopsis tty.write Values nr the number of characters that will be written buffer the buffer that will be written file_name the file name related to the tty driver_name the driver name
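A quick way to see the probe in action is a one-line script that prints every tty write it observes, using the values listed above. This is a minimal sketch assuming systemtap and the matching kernel debuginfo packages are installed.

# Print the driver, byte count, and file name for each tty write
stap -e 'probe tty.write { printf("%s: %d chars written to %s\n", driver_name, nr, file_name) }'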
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tty-write
Chapter 6. Preparing for your Streams for Apache Kafka deployment
Chapter 6. Preparing for your Streams for Apache Kafka deployment Prepare for a deployment of Streams for Apache Kafka by completing any necessary pre-deployment tasks. Take the necessary preparatory steps according to your specific requirements, such as the following: Ensuring you have the necessary prerequisites before deploying Streams for Apache Kafka Considering operator deployment best practices Pushing the Streams for Apache Kafka container images into your own registry (if required) Creating a pull secret for authentication to the container image registry Setting up admin roles to enable configuration of custom resources used in the deployment Note To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs. 6.1. Deployment prerequisites To deploy Streams for Apache Kafka, you will need the following: An OpenShift 4.14 and later cluster. Streams for Apache Kafka is based on Strimzi 0.45.x. The oc command-line tool is installed and configured to connect to the running cluster. 6.2. Operator deployment best practices Potential issues can arise from installing more than one Streams for Apache Kafka operator in the same OpenShift cluster, especially when using different versions. Each Streams for Apache Kafka operator manages a set of resources in an OpenShift cluster. When you install multiple Streams for Apache Kafka operators, they may attempt to manage the same resources concurrently. This can lead to conflicts and unpredictable behavior within your cluster. Conflicts can still occur even if you deploy Streams for Apache Kafka operators in different namespaces within the same OpenShift cluster. Although namespaces provide some degree of resource isolation, certain resources managed by the Streams for Apache Kafka operator, such as Custom Resource Definitions (CRDs) and roles, have a cluster-wide scope. Additionally, installing multiple operators with different versions can result in compatibility issues between the operators and the Kafka clusters they manage. Different versions of Streams for Apache Kafka operators may introduce changes, bug fixes, or improvements that are not backward-compatible. To avoid the issues associated with installing multiple Streams for Apache Kafka operators in an OpenShift cluster, the following guidelines are recommended: Install the Streams for Apache Kafka operator in a separate namespace from the Kafka cluster and other Kafka components it manages, to ensure clear separation of resources and configurations. Use a single Streams for Apache Kafka operator to manage all your Kafka instances within an OpenShift cluster. Update the Streams for Apache Kafka operator and the supported Kafka version as often as possible to reflect the latest features and enhancements. By following these best practices and ensuring consistent updates for a single Streams for Apache Kafka operator, you can enhance the stability of managing Kafka instances in an OpenShift cluster. This approach also enables you to make the most of Streams for Apache Kafka's latest features and capabilities. Note As Streams for Apache Kafka is based on Strimzi, the same issues can also arise when combining Streams for Apache Kafka operators with Strimzi operators in an OpenShift cluster. 6.3. Pushing container images to your own registry Container images for Streams for Apache Kafka are available in the Red Hat Ecosystem Catalog . 
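Before installing, it can help to confirm that no other Streams for Apache Kafka or Strimzi operator is already managing resources in the cluster, since the best practices above recommend a single operator. The following sketch is one way to check, assuming the default deployment name strimzi-cluster-operator; an already existing kafkas.kafka.strimzi.io CRD is another hint that an operator was installed previously.

# Look for existing Cluster Operator deployments in any namespace
oc get deployments --all-namespaces | grep strimzi-cluster-operator
# If the Kafka CRD already exists, note when it was created
oc get crd kafkas.kafka.strimzi.io -o jsonpath='{.metadata.creationTimestamp}{"\n"}'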
The installation YAML files provided by Streams for Apache Kafka will pull the images directly from the Red Hat Ecosystem Catalog . If you do not have access to the Red Hat Ecosystem Catalog or want to use your own container repository, do the following: Pull all container images listed here Push them into your own registry Update the image names in the installation YAML files Note Each Kafka version supported for the release has a separate image. Table 6.1. Streams for Apache Kafka container images Container image Namespace/Repository Description Kafka registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 registry.redhat.io/amq-streams/kafka-38-rhel9:2.9.0 Images for running Kafka, including: Kafka Broker Kafka Connect Kafka MirrorMaker ZooKeeper Cruise Control Operator registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 Image for running the operators: Cluster Operator Topic Operator User Operator Kafka Initializer Kafka Bridge registry.redhat.io/amq-streams/bridge-rhel9:2.9.0 Image for running the Streams for Apache Kafka Bridge Streams for Apache Kafka Drain Cleaner registry.redhat.io/amq-streams/drain-cleaner-rhel9:2.9.0 Image for running the Streams for Apache Kafka Drain Cleaner Streams for Apache Kafka Proxy registry.redhat.io/amq-streams/proxy-rhel9:2.9.0 Image for running the Streams for Apache Kafka Proxy Streams for Apache Kafka Console registry.redhat.io/amq-streams/console-ui-rhel9:2.9.0 registry.redhat.io/amq-streams/console-api-rhel9:2.9.0 Images for running the Streams for Apache Kafka Console 6.4. Creating a pull secret for authentication to the container image registry The installation YAML files provided by Streams for Apache Kafka pull container images directly from the Red Hat Ecosystem Catalog . If a Streams for Apache Kafka deployment requires authentication, configure authentication credentials in a secret and add it to the installation YAML. Note Authentication is not usually required, but might be requested on certain platforms. Prerequisites You need your Red Hat username and password or the login details from your Red Hat registry service account. Note You can use your Red Hat subscription to create a registry service account from the Red Hat Customer Portal . Procedure Create a pull secret containing your login details and the container registry where the Streams for Apache Kafka image is pulled from: oc create secret docker-registry <pull_secret_name> \ --docker-server=registry.redhat.io \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> Add your user name and password. The email address is optional. Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml deployment file to specify the pull secret using the STRIMZI_IMAGE_PULL_SECRETS environment variable: apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: # ... env: - name: STRIMZI_IMAGE_PULL_SECRETS value: "<pull_secret_name>" # ... The secret applies to all pods created by the Cluster Operator. 6.5. Designating Streams for Apache Kafka administrators Streams for Apache Kafka provides custom resources for configuration of your deployment. By default, permission to view, create, edit, and delete these resources is limited to OpenShift cluster administrators. 
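If you mirror the images into your own registry, a tool such as skopeo can copy them directly between registries. The following sketch mirrors the operator image and rewrites the image reference in the Cluster Operator deployment file; my-registry.example.com is a placeholder, and you would repeat the copy for every image in the table above.

skopeo login registry.redhat.io
skopeo login my-registry.example.com
# Copy the operator image from the Red Hat registry to your own registry
skopeo copy \
  docker://registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.9.0 \
  docker://my-registry.example.com/amq-streams/strimzi-rhel9-operator:2.9.0
# Update the image reference in the installation YAML to point at the mirror
sed -i 's#registry.redhat.io/amq-streams#my-registry.example.com/amq-streams#g' \
  install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml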
Streams for Apache Kafka provides two cluster roles that you can use to assign these rights to other users: strimzi-view allows users to view and list Streams for Apache Kafka resources. strimzi-admin allows users to also create, edit or delete Streams for Apache Kafka resources. When you install these roles, they will automatically aggregate (add) these rights to the default OpenShift cluster roles. strimzi-view aggregates to the view role, and strimzi-admin aggregates to the edit and admin roles. Because of the aggregation, you might not need to assign these roles to users who already have similar rights. The following procedure shows how to assign a strimzi-admin role that allows non-cluster administrators to manage Streams for Apache Kafka resources. A system administrator can designate Streams for Apache Kafka administrators after the Cluster Operator is deployed. Prerequisites The Streams for Apache Kafka admin deployment files, which are included in the Streams for Apache Kafka deployment files . The Streams for Apache Kafka Custom Resource Definitions (CRDs) and role-based access control (RBAC) resources to manage the CRDs have been deployed with the Cluster Operator . Procedure Create the strimzi-view and strimzi-admin cluster roles in OpenShift. oc create -f install/strimzi-admin If needed, assign the roles that provide access rights to users that require them. oc create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2
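After assigning the roles, you can confirm that the bindings took effect with oc auth can-i, impersonating the designated users; user1 and user2 are the placeholder names from the procedure above.

# A strimzi-admin user should be able to create and delete Streams for Apache Kafka resources
oc auth can-i create kafkas.kafka.strimzi.io --as user1
oc auth can-i delete kafkatopics.kafka.strimzi.io --as user2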
[ "create secret docker-registry <pull_secret_name> --docker-server=registry.redhat.io --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: - name: STRIMZI_IMAGE_PULL_SECRETS value: \"<pull_secret_name>\"", "create -f install/strimzi-admin", "create clusterrolebinding strimzi-admin --clusterrole=strimzi-admin --user= user1 --user= user2" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/deploy-tasks-prereqs_str
2.8. Moving a Process to a Control Group
2.8. Moving a Process to a Control Group Move a process into a cgroup by running the cgclassify command, for example: The syntax for cgclassify is: where: subsystems is a comma‐separated list of subsystems, or * to launch the process in the hierarchies associated with all available subsystems. Note that if cgroups of the same name exist in multiple hierarchies, the -g option moves the processes in each of those groups. Ensure that the cgroup exists within each of the hierarchies whose subsystems you specify here. path_to_cgroup is the path to the cgroup within its hierarchies. pidlist is a space-separated list of process identifier (PIDs). If the -g option is not specified, cgclassify automatically searches the /etc/cgrules.conf file (see Section 2.8.1, "The cgred Service" ) and uses the first applicable configuration line. According to this line, cgclassify determines the hierarchies and cgroups to move the process under. Note that for the move to be successful, the destination hierarchies must exist. The subsystems specified in /etc/cgrules.conf also have to be properly configured for the corresponding hierarchy in /etc/cgconfig.conf . You can also add the --sticky option before the pid to keep any child processes in the same cgroup. If you do not set this option and the cgred service is running, child processes are allocated to cgroups based on the settings found in /etc/cgrules.conf . However, the parent process remains in the cgroup in which it was first started. Using cgclassify , you can move several processes simultaneously. For example, this command moves the processes with PIDs 1701 and 1138 into cgroup group1/ : Note that the PIDs to be moved are separated by spaces and that the specified groups should be in different hierarchies. Alternative method To move a process into a cgroup directly, write its PID to the tasks file of the cgroup. For example, to move a process with the PID 1701 into a cgroup at /cgroup/cpu_and_mem/group1/ : 2.8.1. The cgred Service Cgred is a service (which starts the cgrulesengd service) that moves tasks into cgroups according to parameters set in the /etc/cgrules.conf file. Entries in the /etc/cgrules.conf file can take one of these two forms: user subsystems control_group user : command subsystems control_group Replace user with a user name or a group name prefixed with the "@" character. Replace subsystems with a comma‐separated list of subsystem names, control_group represents a path to the cgroup, and command stands for a process name or a full command path of a process. For example: This entry specifies that any processes that belong to the user named maria access the devices subsystem according to the parameters specified in the /usergroup/staff cgroup. To associate particular commands with particular cgroups, add the command parameter, as follows: The entry now specifies that when the user named maria uses the ftp command, the process is automatically moved to the /usergroup/staff/ftp cgroup in the hierarchy that contains the devices subsystem. Note, however, that the daemon moves the process to the cgroup only after the appropriate condition is fulfilled. Therefore, the ftp process might run for a short time in the wrong group. Furthermore, if the process quickly spawns children while in the wrong group, these children might not be moved. Entries in the /etc/cgrules.conf file can include the following extra notation: @ - indicates a group instead of an individual user. For example, @admins are all users in the admins group. * - represents "all". 
For example, * in the subsystem field represents all mounted subsystems. % - represents an item that is the same as the item on the line above. For example, the entries specified in the /etc/cgrules.conf file can have the following form: The above configuration ensures that processes owned by members of the adminstaff and labstaff groups access the devices subsystem according to the limits set in the admingroup cgroup. Rules specified in /etc/cgrules.conf can be linked to templates configured either in the /etc/cgconfig.conf file or in configuration files stored in the /etc/cgconfig.d/ directory, allowing for flexible cgroup assignment and creation. For example, specify the following template in /etc/cgconfig.conf: Then use the users/%g/%u template in the third field (the control group path) of an /etc/cgrules.conf entry, which can look as follows: The %g and %u variables used above are automatically replaced with the group and user names of the owner of the ftp process. If the process belongs to peter from the adminstaff group, the above path is translated to users/adminstaff/peter. The cgred service then searches for this directory, and if it does not exist, cgred creates it and assigns the process to users/adminstaff/peter/tasks. Note that template rules apply only to definitions of templates in configuration files, so even if "group users/adminstaff/peter" were defined in /etc/cgconfig.conf, it would be ignored in favor of "template users/%g/%u". There are several other variables that can be used for specifying cgroup paths in templates: %u - is replaced with the name of the user who owns the current process. If name resolution fails, the UID is used instead. %U - is replaced with the UID of the user who owns the current process. %g - is replaced with the name of the user group that owns the current process, or with the GID if name resolution fails. %G - is replaced with the GID of the user group that owns the current process. %p - is replaced with the name of the current process. The PID is used in case of name resolution failure. %P - is replaced with the PID of the current process.
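To confirm that a cgclassify move or a direct write to the tasks file actually took effect, a quick check such as the following can be used. This is a minimal sketch that reuses the illustrative PID 1701 and the group1 cgroup from the examples in this section, and assumes the cpu and memory subsystems are co-mounted at /cgroup/cpu_and_mem; adjust the PID and paths to your own setup.
cgclassify -g cpu,memory:group1 --sticky 1701   # move PID 1701 into group1 and keep its future children there
grep -w 1701 /cgroup/cpu_and_mem/group1/tasks   # the PID is now listed in the group's task list
cat /proc/1701/cgroup                           # shows the cgroup of PID 1701 in every mounted hierarchy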
[ "~]# cgclassify -g cpu,memory:group1 1701", "cgclassify -g subsystems : path_to_cgroup pidlist", "~]# cgclassify -g cpu,memory:group1 1701 1138", "~]# echo 1701 > /cgroup/cpu_and_mem/group1/tasks", "maria devices /usergroup/staff", "maria:ftp devices /usergroup/staff/ftp", "@adminstaff devices /admingroup @labstaff % %", "template users/%g/%u { cpuacct{ } cpu { cpu.shares = \"1000\"; } }", "peter:ftp cpu users/%g/%u" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-moving_a_process_to_a_control_group
Security and compliance
Security and compliance OpenShift Container Platform 4.13 Learning about and managing security for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "variant: openshift version: 4.13.0 metadata: name: 51-worker-rh-registry-trust labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/containers/policy.json mode: 0644 overwrite: true contents: inline: | { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "butane 51-worker-rh-registry-trust.bu -o 51-worker-rh-registry-trust.yaml", "oc apply -f 51-worker-rh-registry-trust.yaml", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 00-worker a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-master-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-container-runtime a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 01-worker-kubelet a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 51-master-rh-registry-trust 3.2.0 13s 51-worker-rh-registry-trust 3.2.0 53s 1 99-master-generated-crio-seccomp-use-default 3.2.0 25m 99-master-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-master-ssh 3.2.0 28m 99-worker-generated-crio-seccomp-use-default 3.2.0 25m 99-worker-generated-registries a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 25m 99-worker-ssh 3.2.0 28m rendered-master-af1e7ff78da0a9c851bab4be2777773b a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 8s rendered-master-cd51fd0c47e91812bfef2765c52ec7e6 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-2b52f75684fbc711bd1652dd86fd0b82 a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 24m rendered-worker-be3b3bce4f4aa52a62902304bac9da3c a2178ad522c49ee330b0033bb5cb5ea132060b0a 3.2.0 48s 2", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-af1e7ff78da0a9c851bab4be2777773b True False False 3 3 3 0 30m worker rendered-worker-be3b3bce4f4aa52a62902304bac9da3c False True False 3 0 0 0 30m 1", "oc debug node/<node_name>", "sh-4.2# chroot /host", "docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc describe machineconfigpool/worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool Metadata: Creation Timestamp: 2019-12-19T02:02:12Z Generation: 3 Resource Version: 16229 Self Link: /apis/machineconfiguration.openshift.io/v1/machineconfigpools/worker UID: 92697796-2203-11ea-b48c-fa163e3940e5 Spec: Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 
51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Machine Config Selector: Match Labels: machineconfiguration.openshift.io/role: worker Node Selector: Match Labels: node-role.kubernetes.io/worker: Paused: false Status: Conditions: Last Transition Time: 2019-12-19T02:03:27Z Message: Reason: Status: False Type: RenderDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: NodeDegraded Last Transition Time: 2019-12-19T02:03:43Z Message: Reason: Status: False Type: Degraded Last Transition Time: 2019-12-19T02:28:23Z Message: Reason: Status: False Type: Updated Last Transition Time: 2019-12-19T02:28:23Z Message: All nodes are updating to rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updating Configuration: Name: rendered-worker-d9b3f4ffcfd65c30dcf591a0e8cf9b2e Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 1 Observed Generation: 3 Ready Machine Count: 0 Unavailable Machine Count: 1 Updated Machine Count: 0 Events: <none>", "oc describe machineconfigpool/worker", "Last Transition Time: 2019-12-19T04:53:09Z Message: All nodes are updated with rendered-worker-f6819366eb455a401c42f8d96ab25c02 Reason: Status: True Type: Updated Last Transition Time: 2019-12-19T04:53:09Z Message: Reason: Status: False Type: Updating Configuration: Name: rendered-worker-f6819366eb455a401c42f8d96ab25c02 Source: API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 00-worker API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-container-runtime API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 01-worker-kubelet API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 51-worker-rh-registry-trust API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-92697796-2203-11ea-b48c-fa163e3940e5-registries API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfig Name: 99-worker-ssh Degraded Machine Count: 0 Machine Count: 3 Observed Generation: 4 Ready Machine Count: 3 Unavailable Machine Count: 0 Updated Machine Count: 3", "oc debug node/<node> -- chroot /host cat /etc/containers/policy.json", "Starting pod/<node>-debug To use host binaries, run `chroot /host` { \"default\": [ { \"type\": \"insecureAcceptAnything\" } ], \"transports\": { \"docker\": { \"registry.access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"docker-daemon\": { \"\": [ { \"type\": \"insecureAcceptAnything\" } ] } } }", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.redhat.io.yaml", "Starting 
pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.redhat.io: sigstore: https://registry.redhat.io/containers/sigstore", "oc debug node/<node> -- chroot /host cat /etc/containers/registries.d/registry.access.redhat.com.yaml", "Starting pod/<node>-debug To use host binaries, run `chroot /host` docker: registry.access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc adm release info quay.io/openshift-release-dev/ocp-release@sha256:2309578b68c5666dad62aed696f1f9d778ae1a089ee461060ba7b9514b7ca417 -o pullspec 1 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9aafb914d5d7d0dec4edd800d02f811d7383a7d49e500af548eab5d00c1bffdb 2", "oc adm release info <release_version> \\ 1", "--- Pull From: quay.io/openshift-release-dev/ocp-release@sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55 ---", "curl -o pub.key https://access.redhat.com/security/data/fd431d51.txt", "curl -o signature-1 https://mirror.openshift.com/pub/openshift-v4/signatures/openshift-release-dev/ocp-release/sha256%<sha_from_version>/signature-1 \\ 1", "skopeo inspect --raw docker://<quay_link_to_release> > manifest.json \\ 1", "skopeo standalone-verify manifest.json quay.io/openshift-release-dev/ocp-release:<release_number>-<arch> any signature-1 --public-key-file pub.key", "Signature verified using fingerprint 567E347AD0044ADE55BA8A5F199E2F91FD431D51, digest sha256:e73ab4b33a9c3ff00c9f800a38d69853ca0c4dfa5a88e3df331f66df8f18ec55", "quality.images.openshift.io/<qualityType>.<providerId>: {}", "quality.images.openshift.io/vulnerability.blackduck: {} quality.images.openshift.io/vulnerability.jfrog: {} quality.images.openshift.io/license.blackduck: {} quality.images.openshift.io/vulnerability.openscap: {}", "{ \"name\": \"OpenSCAP\", \"description\": \"OpenSCAP vulnerability score\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://www.open-scap.org/930492\", \"compliant\": true, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"critical\", \"data\": \"4\", \"severityIndex\": 3, \"reference\": null }, { \"label\": \"important\", \"data\": \"12\", \"severityIndex\": 2, \"reference\": null }, { \"label\": \"moderate\", \"data\": \"8\", \"severityIndex\": 1, \"reference\": null }, { \"label\": \"low\", \"data\": \"26\", \"severityIndex\": 0, \"reference\": null } ] }", "{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2016-09-08T05:04:46Z\", \"reference\": \"https://access.redhat.com/errata/RHBA-2016:1566\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"summary\": [ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ] }", "oc annotate image <image> quality.images.openshift.io/vulnerability.redhatcatalog='{ \"name\": \"Red Hat Ecosystem Catalog\", \"description\": \"Container health index\", \"timestamp\": \"2020-06-01T05:04:46Z\", \"compliant\": null, \"scannerVersion\": \"1.2\", \"reference\": \"https://access.redhat.com/errata/RHBA-2020:2347\", \"summary\": \"[ { \"label\": \"Health index\", \"data\": \"B\", \"severityIndex\": 1, \"reference\": null } ]\" }'", "annotations: images.openshift.io/deny-execution: true", "curl -X PATCH -H \"Authorization: Bearer <token>\" -H \"Content-Type: application/merge-patch+json\" https://<openshift_server>:6443/apis/image.openshift.io/v1/images/<image_id> --data '{ <image_annotation> }'", "{ \"metadata\": { \"annotations\": { \"quality.images.openshift.io/vulnerability.redhatcatalog\": \"{ 
'name': 'Red Hat Ecosystem Catalog', 'description': 'Container health index', 'timestamp': '2020-06-01T05:04:46Z', 'compliant': null, 'reference': 'https://access.redhat.com/errata/RHBA-2020:2347', 'summary': [{'label': 'Health index', 'data': '4', 'severityIndex': 1, 'reference': null}] }\" } } }", "oc create secret generic secret-npmrc --from-file=.npmrc=~/.npmrc", "source: git: uri: https://github.com/sclorg/nodejs-ex.git secrets: - destinationDir: . secret: name: secret-npmrc", "oc new-build openshift/nodejs-010-centos7~https://github.com/sclorg/nodejs-ex.git --build-secret secret-npmrc", "oc set triggers deploy/deployment-example --from-image=example:latest --containers=web", "{ \"default\": [{\"type\": \"reject\"}], \"transports\": { \"docker\": { \"access.redhat.com\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ] }, \"atomic\": { \"172.30.1.1:5000/openshift\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\" } ], \"172.30.1.1:5000/production\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPath\": \"/etc/pki/example.com/pubkey\" } ], \"172.30.1.1:5000\": [{\"type\": \"reject\"}] } } }", "docker: access.redhat.com: sigstore: https://access.redhat.com/webassets/docker/content/sigstore", "oc get event -n default | grep Node", "1h 20h 3 origin-node-1.example.local Node Normal NodeHasDiskPressure", "oc get events -n default -o json | jq '.items[] | select(.involvedObject.kind == \"Node\" and .reason == \"NodeHasDiskPressure\")'", "{ \"apiVersion\": \"v1\", \"count\": 3, \"involvedObject\": { \"kind\": \"Node\", \"name\": \"origin-node-1.example.local\", \"uid\": \"origin-node-1.example.local\" }, \"kind\": \"Event\", \"reason\": \"NodeHasDiskPressure\", }", "oc get events --all-namespaces -o json | jq '[.items[] | select(.involvedObject.kind == \"Pod\" and .reason == \"Pulling\")] | length'", "4", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-ingress", "oc patch ingresscontroller.operator default --type=merge -p '{\"spec\":{\"defaultCertificate\": {\"name\": \"<secret>\"}}}' \\ 1 -n openshift-ingress-operator", "oc login -u kubeadmin -p <password> https://FQDN:6443", "oc config view --flatten > kubeconfig-newapi", "oc create secret tls <secret> \\ 1 --cert=</path/to/cert.crt> \\ 2 --key=</path/to/cert.key> \\ 3 -n openshift-config", "oc patch apiserver cluster --type=merge -p '{\"spec\":{\"servingCerts\": {\"namedCertificates\": [{\"names\": [\"<FQDN>\"], 1 \"servingCertificate\": {\"name\": \"<secret>\"}}]}}}' 2", "oc get apiserver cluster -o yaml", "spec: servingCerts: namedCertificates: - names: - <FQDN> servingCertificate: name: <secret>", "oc get clusteroperators kube-apiserver", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE kube-apiserver 4.13.0 True False False 145m", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "oc annotate service <service_name> \\ 1 service.beta.openshift.io/serving-cert-secret-name=<secret_name> 2", "oc annotate service test1 service.beta.openshift.io/serving-cert-secret-name=test1", "oc describe service <service_name>", "Annotations: 
service.beta.openshift.io/serving-cert-secret-name: <service_name> service.beta.openshift.io/serving-cert-signed-by: openshift-service-serving-signer@1556850837", "oc annotate configmap <config_map_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate configmap test1 service.beta.openshift.io/inject-cabundle=true", "oc get configmap <config_map_name> -o yaml", "apiVersion: v1 data: service-ca.crt: | -----BEGIN CERTIFICATE-----", "oc annotate apiservice <api_service_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate apiservice test1 service.beta.openshift.io/inject-cabundle=true", "oc get apiservice <api_service_name> -o yaml", "apiVersion: apiregistration.k8s.io/v1 kind: APIService metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: caBundle: <CA_BUNDLE>", "oc annotate crd <crd_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate crd test1 service.beta.openshift.io/inject-cabundle=true", "oc get crd <crd_name> -o yaml", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" spec: conversion: strategy: Webhook webhook: clientConfig: caBundle: <CA_BUNDLE>", "oc annotate mutatingwebhookconfigurations <mutating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate mutatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get mutatingwebhookconfigurations <mutating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc annotate validatingwebhookconfigurations <validating_webhook_name> \\ 1 service.beta.openshift.io/inject-cabundle=true", "oc annotate validatingwebhookconfigurations test1 service.beta.openshift.io/inject-cabundle=true", "oc get validatingwebhookconfigurations <validating_webhook_name> -o yaml", "apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata: annotations: service.beta.openshift.io/inject-cabundle: \"true\" webhooks: - myWebhook: - v1beta1 clientConfig: caBundle: <CA_BUNDLE>", "oc describe service <service_name>", "service.beta.openshift.io/serving-cert-secret-name: <secret>", "oc delete secret <secret> 1", "oc get secret <service_name>", "NAME TYPE DATA AGE <service.name> kubernetes.io/tls 2 1s", "oc get secrets/signing-key -n openshift-service-ca -o template='{{index .data \"tls.crt\"}}' | base64 --decode | openssl x509 -noout -enddate", "oc delete secret/signing-key -n openshift-service-ca", "for I in USD(oc get ns -o jsonpath='{range .items[*]} {.metadata.name}{\"\\n\"} {end}'); do oc delete pods --all -n USDI; sleep 1; done", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE-----", "oc create configmap custom-ca --from-file=ca-bundle.crt=</path/to/example-ca.crt> \\ 1 -n openshift-config", "oc patch proxy/cluster --type=merge --patch='{\"spec\":{\"trustedCA\":{\"name\":\"custom-ca\"}}}'", "apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. 
-----END CERTIFICATE-----", "cat install-config.yaml", "proxy: httpProxy: http://<username:[email protected]:123/> httpsProxy: http://<username:[email protected]:123/> noProxy: <123.example.com,10.88.0.0/16> additionalTrustBundle: | -----BEGIN CERTIFICATE----- <MY_HTTPS_PROXY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-examplecorp-ca-cert spec: config: ignition: version: 3.1.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVORENDQXh5Z0F3SUJBZ0lKQU51bkkwRDY2MmNuTUEwR0NTcUdTSWIzRFFFQkN3VUFNSUdsTVFzd0NRWUQKV1FRR0V3SlZVekVYTUJVR0ExVUVDQXdPVG05eWRHZ2dRMkZ5YjJ4cGJtRXhFREFPQmdOVkJBY01CMUpoYkdWcApBMmd4RmpBVUJnTlZCQW9NRFZKbFpDQklZWFFzSUVsdVl5NHhFekFSQmdOVkJBc01DbEpsWkNCSVlYUWdTVlF4Ckh6QVpCZ05WQkFNTUVsSmxaQ0JJWVhRZ1NWUWdVbTl2ZENCRFFURWhNQjhHQ1NxR1NJYjNEUUVKQVJZU2FXNW0KWGpDQnBURUxNQWtHQTFVRUJoTUNWVk14RnpBVkJnTlZCQWdNRGs1dmNuUm9JRU5oY205c2FXNWhNUkF3RGdZRApXUVFIREFkU1lXeGxhV2RvTVJZd0ZBWURWUVFLREExU1pXUWdTR0YwTENCSmJtTXVNUk13RVFZRFZRUUxEQXBTCkFXUWdTR0YwSUVsVU1Sc3dHUVlEVlFRRERCSlNaV1FnU0dGMElFbFVJRkp2YjNRZ1EwRXhJVEFmQmdrcWhraUcKMHcwQkNRRVdFbWx1Wm05elpXTkFjbVZrYUdGMExtTnZiVENDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUApCRENDQVFvQ2dnRUJBTFF0OU9KUWg2R0M1TFQxZzgwcU5oMHU1MEJRNHNaL3laOGFFVHh0KzVsblBWWDZNSEt6CmQvaTdsRHFUZlRjZkxMMm55VUJkMmZRRGsxQjBmeHJza2hHSUlaM2lmUDFQczRsdFRrdjhoUlNvYjNWdE5xU28KSHhrS2Z2RDJQS2pUUHhEUFdZeXJ1eTlpckxaaW9NZmZpM2kvZ0N1dDBaV3RBeU8zTVZINXFXRi9lbkt3Z1BFUwpZOXBvK1RkQ3ZSQi9SVU9iQmFNNzYxRWNyTFNNMUdxSE51ZVNmcW5obzNBakxRNmRCblBXbG82MzhabTFWZWJLCkNFTHloa0xXTVNGa0t3RG1uZTBqUTAyWTRnMDc1dkNLdkNzQ0F3RUFBYU5qTUdFd0hRWURWUjBPQkJZRUZIN1IKNXlDK1VlaElJUGV1TDhacXczUHpiZ2NaTUI4R0ExVWRJd1FZTUJhQUZIN1I0eUMrVWVoSUlQZXVMOFpxdzNQegpjZ2NaTUE4R0ExVWRFd0VCL3dRRk1BTUJBZjh3RGdZRFZSMFBBUUgvQkFRREFnR0dNQTBHQ1NxR1NJYjNEUUVCCkR3VUFBNElCQVFCRE52RDJWbTlzQTVBOUFsT0pSOCtlbjVYejloWGN4SkI1cGh4Y1pROGpGb0cwNFZzaHZkMGUKTUVuVXJNY2ZGZ0laNG5qTUtUUUNNNFpGVVBBaWV5THg0ZjUySHVEb3BwM2U1SnlJTWZXK0tGY05JcEt3Q3NhawpwU29LdElVT3NVSks3cUJWWnhjckl5ZVFWMnFjWU9lWmh0UzV3QnFJd09BaEZ3bENFVDdaZTU4UUhtUzQ4c2xqCjVlVGtSaml2QWxFeHJGektjbGpDNGF4S1Fsbk92VkF6eitHbTMyVTB4UEJGNEJ5ZVBWeENKVUh3MVRzeVRtZWwKU3hORXA3eUhvWGN3bitmWG5hK3Q1SldoMWd4VVp0eTMKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= mode: 0644 overwrite: true path: /etc/pki/ca-trust/source/anchors/examplecorp-ca.crt", "oc annotate -n openshift-kube-apiserver-operator secret kube-apiserver-to-kubelet-signer auth.openshift.io/certificate-not-after-", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc delete pods -l compliance.openshift.io/scan-name=ocp4-cis", "oc adm must-gather --image=USD(oc get csv compliance-operator.v1.6.0 -o=jsonpath='{.spec.relatedImages[?(@.name==\"must-gather\")].image}')", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 
3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get -n openshift-compliance -oyaml profiles.compliance rhcos4-e8", "apiVersion: compliance.openshift.io/v1alpha1 description: 'This profile contains configuration checks for Red Hat Enterprise Linux CoreOS that align to the Australian Cyber Security Centre (ACSC) Essential Eight. A copy of the Essential Eight in Linux Environments guide can be found at the ACSC website: https://www.cyber.gov.au/acsc/view-all-content/publications/hardening-linux-workstations-and-servers' id: xccdf_org.ssgproject.content_profile_e8 kind: Profile metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/product: redhat_enterprise_linux_coreos_4 compliance.openshift.io/product-type: Node creationTimestamp: \"2022-10-19T12:06:49Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-e8 namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"43699\" uid: 86353f70-28f7-40b4-bf0e-6289ec33675b rules: - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown - rhcos4-audit-rules-execution-chcon - rhcos4-audit-rules-execution-restorecon - rhcos4-audit-rules-execution-semanage - rhcos4-audit-rules-execution-setfiles - rhcos4-audit-rules-execution-setsebool - rhcos4-audit-rules-execution-seunshare - rhcos4-audit-rules-kernel-module-loading-delete - rhcos4-audit-rules-kernel-module-loading-finit - rhcos4-audit-rules-kernel-module-loading-init - rhcos4-audit-rules-login-events - rhcos4-audit-rules-login-events-faillock - rhcos4-audit-rules-login-events-lastlog - rhcos4-audit-rules-login-events-tallylog - rhcos4-audit-rules-networkconfig-modification - rhcos4-audit-rules-sysadmin-actions - rhcos4-audit-rules-time-adjtimex - rhcos4-audit-rules-time-clock-settime - rhcos4-audit-rules-time-settimeofday - rhcos4-audit-rules-time-stime - rhcos4-audit-rules-time-watch-localtime - rhcos4-audit-rules-usergroup-modification - rhcos4-auditd-data-retention-flush - rhcos4-auditd-freq - rhcos4-auditd-local-events - rhcos4-auditd-log-format - rhcos4-auditd-name-format - rhcos4-auditd-write-logs - rhcos4-configure-crypto-policy - rhcos4-configure-ssh-crypto-policy - rhcos4-no-empty-passwords - rhcos4-selinux-policytype - rhcos4-selinux-state - rhcos4-service-auditd-enabled - rhcos4-sshd-disable-empty-passwords - rhcos4-sshd-disable-gssapi-auth - rhcos4-sshd-disable-rhosts - rhcos4-sshd-disable-root-login - rhcos4-sshd-disable-user-known-hosts - rhcos4-sshd-do-not-permit-user-env - rhcos4-sshd-enable-strictmodes - rhcos4-sshd-print-last-log - rhcos4-sshd-set-loglevel-info - rhcos4-sysctl-kernel-dmesg-restrict - rhcos4-sysctl-kernel-kptr-restrict - rhcos4-sysctl-kernel-randomize-va-space - rhcos4-sysctl-kernel-unprivileged-bpf-disabled - rhcos4-sysctl-kernel-yama-ptrace-scope - rhcos4-sysctl-net-core-bpf-jit-harden title: Australian Cyber Security Centre (ACSC) Essential 
Eight", "oc get -n openshift-compliance -oyaml rules rhcos4-audit-rules-login-events", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Node description: |- The audit system already collects login information for all users and root. If the auditd daemon is configured to use the augenrules program to read audit rules during daemon startup (the default), add the following lines to a file with suffix.rules in the directory /etc/audit/rules.d in order to watch for attempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins If the auditd daemon is configured to use the auditctl utility to read audit rules during daemon startup, add the following lines to /etc/audit/audit.rules file in order to watch for unattempted manual edits of files involved in storing logon events: -w /var/log/tallylog -p wa -k logins -w /var/run/faillock -p wa -k logins -w /var/log/lastlog -p wa -k logins id: xccdf_org.ssgproject.content_rule_audit_rules_login_events kind: Rule metadata: annotations: compliance.openshift.io/image-digest: pb-rhcos4hrdkm compliance.openshift.io/rule: audit-rules-login-events control.compliance.openshift.io/NIST-800-53: AU-2(d);AU-12(c);AC-6(9);CM-6(a) control.compliance.openshift.io/PCI-DSS: Req-10.2.3 policies.open-cluster-management.io/controls: AU-2(d),AU-12(c),AC-6(9),CM-6(a),Req-10.2.3 policies.open-cluster-management.io/standards: NIST-800-53,PCI-DSS creationTimestamp: \"2022-10-19T12:07:08Z\" generation: 1 labels: compliance.openshift.io/profile-bundle: rhcos4 name: rhcos4-audit-rules-login-events namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: rhcos4 uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d resourceVersion: \"44819\" uid: 75872f1f-3c93-40ca-a69d-44e5438824a4 rationale: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion. 
severity: medium title: Record Attempts to Alter Logon and Logout Events warning: Manual editing of these files may indicate nefarious activity, such as an attacker attempting to remove evidence of an intrusion.", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle name: <profile bundle name> namespace: openshift-compliance status: dataStreamStatus: VALID 1", "apiVersion: compliance.openshift.io/v1alpha1 description: <description of the profile> id: xccdf_org.ssgproject.content_profile_moderate 1 kind: Profile metadata: annotations: compliance.openshift.io/product: <product name> compliance.openshift.io/product-type: Node 2 creationTimestamp: \"YYYY-MM-DDTMM:HH:SSZ\" generation: 1 labels: compliance.openshift.io/profile-bundle: <profile bundle name> name: rhcos4-moderate namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ProfileBundle name: <profile bundle name> uid: <uid string> resourceVersion: \"<version number>\" selfLink: /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/profiles/rhcos4-moderate uid: <uid string> rules: 3 - rhcos4-account-disable-post-pw-expiration - rhcos4-accounts-no-uid-except-zero - rhcos4-audit-rules-dac-modification-chmod - rhcos4-audit-rules-dac-modification-chown title: <title of the profile>", "apiVersion: compliance.openshift.io/v1alpha1 checkType: Platform 1 description: <description of the rule> id: xccdf_org.ssgproject.content_rule_configure_network_policies_namespaces 2 instructions: <manual instructions for the scan> kind: Rule metadata: annotations: compliance.openshift.io/rule: configure-network-policies-namespaces control.compliance.openshift.io/CIS-OCP: 5.3.2 control.compliance.openshift.io/NERC-CIP: CIP-003-3 R4;CIP-003-3 R4.2;CIP-003-3 R5;CIP-003-3 R6;CIP-004-3 R2.2.4;CIP-004-3 R3;CIP-007-3 R2;CIP-007-3 R2.1;CIP-007-3 R2.2;CIP-007-3 R2.3;CIP-007-3 R5.1;CIP-007-3 R6.1 control.compliance.openshift.io/NIST-800-53: AC-4;AC-4(21);CA-3(5);CM-6;CM-6(1);CM-7;CM-7(1);SC-7;SC-7(3);SC-7(5);SC-7(8);SC-7(12);SC-7(13);SC-7(18) labels: compliance.openshift.io/profile-bundle: ocp4 name: ocp4-configure-network-policies-namespaces namespace: openshift-compliance rationale: <description of why this rule is checked> severity: high 3 title: <summary of the rule>", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: rhcos4-with-usb spec: extends: rhcos4-moderate 1 title: <title of the tailored profile> disableRules: - name: <name of a rule object to be disabled> rationale: <description of why this rule is checked> status: id: xccdf_compliance.openshift.io_profile_rhcos4-with-usb 2 outputRef: name: rhcos4-with-usb-tp 3 namespace: openshift-compliance state: READY 4", "compliance.openshift.io/product-type: Platform/Node", "apiVersion: compliance.openshift.io/v1alpha1 autoApplyRemediations: true 1 autoUpdateRemediations: true 2 kind: ScanSetting maxRetryOnTimeout: 3 metadata: creationTimestamp: \"2022-10-18T20:21:00Z\" generation: 1 name: default-auto-apply namespace: openshift-compliance resourceVersion: \"38840\" uid: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 rawResultStorage: nodeSelector: node-role.kubernetes.io/master: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 3 size: 1Gi 4 tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable 
operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists roles: 5 - master - worker scanTolerations: - operator: Exists schedule: 0 1 * * * 6 showNotApplicable: false strictNodeScan: true timeout: 30m", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: <name of the scan> profiles: 1 # Node checks - name: rhcos4-with-usb kind: TailoredProfile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-moderate kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: 2 name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc get compliancesuites", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: <name of the scan> spec: autoApplyRemediations: false 1 schedule: \"0 1 * * *\" 2 scans: 3 - name: workers-scan scanType: Node profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" nodeSelector: node-role.kubernetes.io/worker: \"\" status: Phase: DONE 4 Result: NON-COMPLIANT 5 scanStatuses: - name: workers-scan phase: DONE result: NON-COMPLIANT", "oc get events --field-selector involvedObject.kind=ComplianceSuite,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceScan metadata: name: <name of the scan> spec: scanType: Node 1 profile: xccdf_org.ssgproject.content_profile_moderate 2 content: ssg-ocp4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc... 3 rule: \"xccdf_org.ssgproject.content_rule_no_netrc_files\" 4 nodeSelector: 5 node-role.kubernetes.io/worker: \"\" status: phase: DONE 6 result: NON-COMPLIANT 7", "get events --field-selector involvedObject.kind=ComplianceScan,involvedObject.name=<name of the suite>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceCheckResult metadata: labels: compliance.openshift.io/check-severity: medium compliance.openshift.io/check-status: FAIL compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan name: workers-scan-no-direct-root-logins namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceScan name: workers-scan description: <description of scan check> instructions: <manual instructions for the scan> id: xccdf_org.ssgproject.content_rule_no_direct_root_logins severity: medium 1 status: FAIL 2", "get compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: labels: compliance.openshift.io/suite: example-compliancesuite compliance.openshift.io/scan-name: workers-scan machineconfiguration.openshift.io/role: worker name: workers-scan-disable-users-coredumps namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: workers-scan-disable-users-coredumps uid: <UID> spec: apply: false 1 object: current: 2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:,%2A%20%20%20%20%20hard%20%20%20core%20%20%20%200 filesystem: root mode: 
420 path: /etc/security/limits.d/75-disable_users_coredumps.conf outdated: {} 3", "get complianceremediations -l compliance.openshift.io/suite=workers-compliancesuite", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),compliance.openshift.io/automated-remediation'", "get compliancecheckresults -l 'compliance.openshift.io/check-status in (FAIL),!compliance.openshift.io/automated-remediation'", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" 1", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-compliance", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: compliance-operator namespace: openshift-compliance spec: targetNamespaces: - openshift-compliance", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: compliance-operator-sub namespace: openshift-compliance spec: channel: \"stable\" installPlanApproval: Automatic name: compliance-operator source: redhat-operators sourceNamespace: openshift-marketplace config: nodeSelector: node-role.kubernetes.io/worker: \"\" env: - name: PLATFORM value: \"HyperShift\"", "oc create -f subscription-object.yaml", "oc get csv -n openshift-compliance", "oc get deploy -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml 1 contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 
2 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc -n openshift-compliance get profilebundles rhcos4 -oyaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ProfileBundle metadata: creationTimestamp: \"2022-10-19T12:06:30Z\" finalizers: - profilebundle.finalizers.compliance.openshift.io generation: 1 name: rhcos4 namespace: openshift-compliance resourceVersion: \"46741\" uid: 22350850-af4a-4f5c-9a42-5e7b68b82d7d spec: contentFile: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:900e... 1 status: conditions: - lastTransitionTime: \"2022-10-19T12:07:51Z\" message: Profile bundle successfully parsed reason: Valid status: \"True\" type: Ready dataStreamStatus: VALID", "oc delete ssb --all -n openshift-compliance", "oc delete ss --all -n openshift-compliance", "oc delete suite --all -n openshift-compliance", "oc delete scan --all -n openshift-compliance", "oc delete profilebundle.compliance --all -n openshift-compliance", "oc delete sub --all -n openshift-compliance", "oc delete csv --all -n openshift-compliance", "oc delete project openshift-compliance", "project.project.openshift.io \"openshift-compliance\" deleted", "oc get project/openshift-compliance", "Error from server (NotFound): namespaces \"openshift-compliance\" not found", "oc explain scansettings", "oc explain scansettingbindings", "oc describe scansettings default -n openshift-compliance", "Name: default Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Kind: ScanSetting Max Retry On Timeout: 3 Metadata: Creation Timestamp: 2024-07-16T14:56:42Z Generation: 2 Resource Version: 91655682 UID: 50358cf1-57a8-4f69-ac50-5c7a5938e402 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce 1 Rotation: 3 2 Size: 1Gi 3 Storage Class Name: standard 4 Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master 5 worker 6 Scan Tolerations: 7 Operator: Exists Schedule: 0 1 * * * 8 Show Not Applicable: false Strict Node Scan: true Suspend: false Timeout: 30m Events: <none>", "Name: default-auto-apply Namespace: openshift-compliance Labels: <none> Annotations: <none> API Version: compliance.openshift.io/v1alpha1 Auto Apply Remediations: true 1 Auto Update Remediations: true 2 Kind: ScanSetting Metadata: Creation Timestamp: 2022-10-18T20:21:00Z Generation: 1 Managed Fields: API Version: compliance.openshift.io/v1alpha1 Fields Type: FieldsV1 fieldsV1: f:autoApplyRemediations: f:autoUpdateRemediations: f:rawResultStorage: .: f:nodeSelector: .: f:node-role.kubernetes.io/master: f:pvAccessModes: f:rotation: f:size: f:tolerations: f:roles: f:scanTolerations: f:schedule: f:showNotApplicable: f:strictNodeScan: Manager: compliance-operator Operation: Update Time: 2022-10-18T20:21:00Z Resource Version: 38840 UID: 8cb0967d-05e0-4d7a-ac1c-08a7f7e89e84 Raw Result Storage: Node Selector: node-role.kubernetes.io/master: Pv Access Modes: ReadWriteOnce Rotation: 3 Size: 1Gi Tolerations: Effect: NoSchedule Key: node-role.kubernetes.io/master Operator: Exists Effect: 
NoExecute Key: node.kubernetes.io/not-ready Operator: Exists Toleration Seconds: 300 Effect: NoExecute Key: node.kubernetes.io/unreachable Operator: Exists Toleration Seconds: 300 Effect: NoSchedule Key: node.kubernetes.io/memory-pressure Operator: Exists Roles: master worker Scan Tolerations: Operator: Exists Schedule: 0 1 * * * Show Not Applicable: false Strict Node Scan: true Events: <none>", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis-compliance namespace: openshift-compliance profiles: - name: ocp4-cis-node kind: Profile apiGroup: compliance.openshift.io/v1alpha1 - name: ocp4-cis kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: default kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "oc create -f <file-name>.yaml -n openshift-compliance", "oc get compliancescan -w -n openshift-compliance", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: rs-on-workers namespace: openshift-compliance rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" 1 pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists 2 roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * *", "oc create -f rs-workers.yaml", "oc get scansettings rs-on-workers -n openshift-compliance -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: creationTimestamp: \"2021-11-19T19:36:36Z\" generation: 1 name: rs-on-workers namespace: openshift-compliance resourceVersion: \"48305\" uid: 43fdfc5f-15a7-445a-8bbc-0e4a160cd46e rawResultStorage: nodeSelector: node-role.kubernetes.io/worker: \"\" pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - operator: Exists roles: - worker - master scanTolerations: - operator: Exists schedule: 0 1 * * * strictNodeScan: true", "oc get hostedcluster -A", "NAMESPACE NAME VERSION KUBECONFIG PROGRESS AVAILABLE PROGRESSING MESSAGE local-cluster 79136a1bdb84b3c13217 4.13.5 79136a1bdb84b3c13217-admin-kubeconfig Completed True False The hosted control plane is available", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: hypershift-cisk57aw88gry namespace: openshift-compliance spec: description: This profile test required rules extends: ocp4-cis 1 title: Management namespace profile setValues: - name: ocp4-hypershift-cluster rationale: This value is used for HyperShift version detection value: 79136a1bdb84b3c13217 2 - name: ocp4-hypershift-namespace-prefix rationale: This value is used for HyperShift control plane namespace detection value: local-cluster 3", "oc create -n openshift-compliance -f mgmt-tp.yaml", "spec.containers[].resources.limits.cpu spec.containers[].resources.limits.memory spec.containers[].resources.limits.hugepages-<size> spec.containers[].resources.requests.cpu spec.containers[].resources.requests.memory spec.containers[].resources.requests.hugepages-<size>", "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: 1 memory: \"64Mi\" cpu: \"250m\" limits: 2 memory: \"128Mi\" cpu: \"500m\" - name: log-aggregator image: 
images.my-company.example/log-aggregator:v6 resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: new-profile annotations: compliance.openshift.io/product-type: Node 1 spec: extends: ocp4-cis-node 2 description: My custom profile 3 title: Custom profile 4 enableRules: - name: ocp4-etcd-unique-ca rationale: We really need to enable this disableRules: - name: ocp4-file-groupowner-cni-conf rationale: This does not apply to the cluster", "oc get rules.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "oc get variables.compliance -n openshift-compliance -l compliance.openshift.io/profile-bundle=rhcos4", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: nist-moderate-modified spec: extends: rhcos4-moderate description: NIST moderate profile title: My modified NIST moderate profile disableRules: - name: rhcos4-file-permissions-var-log-messages rationale: The file contains logs of error messages in the system - name: rhcos4-account-disable-post-pw-expiration rationale: No need to check this as it comes from the IdP setValues: - name: rhcos4-var-selinux-state rationale: Organizational requirements value: permissive", "apiVersion: compliance.openshift.io/v1alpha1 kind: TailoredProfile metadata: name: ocp4-manual-scc-check spec: extends: ocp4-cis description: This profile extends ocp4-cis by forcing the SCC check to always return MANUAL title: OCP4 CIS profile with manual SCC check manualRules: - name: ocp4-scc-limit-container-allowed-capabilities rationale: We use third party software that installs its own SCC with extra privileges", "oc create -n openshift-compliance -f new-profile-node.yaml 1", "tailoredprofile.compliance.openshift.io/nist-moderate-modified created", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: nist-moderate-modified profiles: - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-moderate - apiGroup: compliance.openshift.io/v1alpha1 kind: TailoredProfile name: nist-moderate-modified settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc create -n openshift-compliance -f new-scansettingbinding.yaml", "scansettingbinding.compliance.openshift.io/nist-moderate-modified created", "oc get compliancesuites nist-moderate-modified -o json -n openshift-compliance | jq '.status.scanStatuses[].resultsStorage'", "{ \"name\": \"ocp4-moderate\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-master\", \"namespace\": \"openshift-compliance\" } { \"name\": \"nist-moderate-modified-worker\", \"namespace\": \"openshift-compliance\" }", "oc get pvc -n openshift-compliance rhcos4-moderate-worker", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rhcos4-moderate-worker Bound pvc-548f6cfe-164b-42fe-ba13-a07cfbc77f3a 1Gi RWO gp2 92m", "oc create -n openshift-compliance -f pod.yaml", "apiVersion: \"v1\" kind: Pod metadata: name: pv-extract spec: containers: - name: pv-extract-pod image: registry.access.redhat.com/ubi9/ubi command: [\"sleep\", \"3000\"] volumeMounts: - mountPath: \"/workers-scan-results\" name: workers-scan-vol volumes: - name: workers-scan-vol persistentVolumeClaim: claimName: rhcos4-moderate-worker", "oc cp pv-extract:/workers-scan-results -n openshift-compliance .", "oc delete pod pv-extract -n openshift-compliance", "oc get -n openshift-compliance 
compliancecheckresults -l compliance.openshift.io/suite=workers-compliancesuite", "oc get -n openshift-compliance compliancecheckresults -l compliance.openshift.io/scan=workers-scan", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/automated-remediation'", "oc get compliancecheckresults -n openshift-compliance -l 'compliance.openshift.io/check-status=FAIL,compliance.openshift.io/check-severity=high'", "NAME STATUS SEVERITY nist-moderate-modified-master-configure-crypto-policy FAIL high nist-moderate-modified-master-coreos-pti-kernel-argument FAIL high nist-moderate-modified-master-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-master-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-master-no-empty-passwords FAIL high nist-moderate-modified-master-selinux-state FAIL high nist-moderate-modified-worker-configure-crypto-policy FAIL high nist-moderate-modified-worker-coreos-pti-kernel-argument FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-burstaction FAIL high nist-moderate-modified-worker-disable-ctrlaltdel-reboot FAIL high nist-moderate-modified-worker-no-empty-passwords FAIL high nist-moderate-modified-worker-selinux-state FAIL high ocp4-moderate-configure-network-policies-namespaces FAIL high", "oc get -n openshift-compliance compliancecheckresults -l 'compliance.openshift.io/check-status=FAIL,!compliance.openshift.io/automated-remediation'", "spec: apply: false current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/sysctl.d/75-sysctl_net_ipv4_conf_all_accept_redirects.conf mode: 0644 contents: source: data:,net.ipv4.conf.all.accept_redirects%3D0 outdated: {} status: applicationState: NotApplied", "echo \"net.ipv4.conf.all.accept_redirects%3D0\" | python3 -c \"import sys, urllib.parse; print(urllib.parse.unquote(''.join(sys.stdin.readlines())))\"", "net.ipv4.conf.all.accept_redirects=0", "oc get nodes -n openshift-compliance", "NAME STATUS ROLES AGE VERSION ip-10-0-128-92.us-east-2.compute.internal Ready master 5h21m v1.26.0 ip-10-0-158-32.us-east-2.compute.internal Ready worker 5h17m v1.26.0 ip-10-0-166-81.us-east-2.compute.internal Ready worker 5h17m v1.26.0 ip-10-0-171-170.us-east-2.compute.internal Ready master 5h21m v1.26.0 ip-10-0-197-35.us-east-2.compute.internal Ready master 5h22m v1.26.0", "oc -n openshift-compliance label node ip-10-0-166-81.us-east-2.compute.internal node-role.kubernetes.io/<machine_config_pool_name>=", "node/ip-10-0-166-81.us-east-2.compute.internal labeled", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: <machine_config_pool_name> labels: pools.operator.machineconfiguration.openshift.io/<machine_config_pool_name>: '' 1 spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,<machine_config_pool_name>]} nodeSelector: matchLabels: node-role.kubernetes.io/<machine_config_pool_name>: \"\"", "oc get mcp -w", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master - example scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: cis namespace: openshift-compliance profiles: - 
apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis - apiGroup: compliance.openshift.io/v1alpha1 kind: Profile name: ocp4-cis-node settingsRef: apiGroup: compliance.openshift.io/v1alpha1 kind: ScanSetting name: default", "oc get rules -o json | jq '.items[] | select(.checkType == \"Platform\") | select(.metadata.name | contains(\"ocp4-kubelet-\")) | .metadata.name'", "oc label mcp <sub-pool-name> pools.operator.machineconfiguration.openshift.io/<sub-pool-name>=", "oc -n openshift-compliance patch complianceremediations/<scan-name>-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2020-09-10T10:12:54Z\" generation: 2 name: cluster resourceVersion: \"363096\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: 2dcb614e-2f8a-4a23-ba9a-8e33cd0ff77e spec: allowedRegistriesForImport: - domainName: registry.redhat.io status: externalRegistryHostnames: - default-route-openshift-image-registry.apps.user-cluster-09-10-12-07.devcluster.openshift.com internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=", "NAME STATE workers-scan-no-empty-passwords Outdated", "oc -n openshift-compliance patch complianceremediations workers-scan-no-empty-passwords --type json -p '[{\"op\":\"remove\", \"path\":/spec/outdated}]'", "oc get -n openshift-compliance complianceremediations workers-scan-no-empty-passwords", "NAME STATE workers-scan-no-empty-passwords Applied", "oc -n openshift-compliance patch complianceremediations/rhcos4-moderate-worker-sysctl-net-ipv4-conf-all-accept-redirects --patch '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get remediation \\ one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -o yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceRemediation metadata: annotations: compliance.openshift.io/xccdf-value-used: var-kubelet-evictionhard-imagefs-available creationTimestamp: \"2022-01-05T19:52:27Z\" generation: 1 labels: compliance.openshift.io/scan-name: one-rule-tp-node-master 1 compliance.openshift.io/suite: one-rule-ssb-node name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available namespace: openshift-compliance ownerReferences: - apiVersion: compliance.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: ComplianceCheckResult name: one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available uid: fe8e1577-9060-4c59-95b2-3e2c51709adc resourceVersion: \"84820\" uid: 5339d21a-24d7-40cb-84d2-7a2ebb015355 spec: apply: true current: object: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: kubeletConfig: evictionHard: imagefs.available: 10% 2 outdated: {} type: Configuration status: applicationState: Applied", "oc -n openshift-compliance patch complianceremediations/one-rule-tp-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available -p '{\"spec\":{\"apply\":false}}' --type=merge", "oc -n openshift-compliance get kubeletconfig --selector compliance.openshift.io/scan-name=one-rule-tp-node-master", "NAME AGE compliance-operator-kubelet-master 
2m34s", "oc edit -n openshift-compliance KubeletConfig compliance-operator-kubelet-master", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins nodeSelector: node-role.kubernetes.io/worker: \"\"", "apiVersion: compliance.openshift.io/v1alpha1 strictNodeScan: true metadata: name: default namespace: openshift-compliance priorityClass: compliance-high-priority 1 kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker scanTolerations: - operator: Exists", "oc -n openshift-compliance create configmap nist-moderate-modified --from-file=tailoring.xml=/path/to/the/tailoringFile.xml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ComplianceSuite metadata: name: workers-compliancesuite spec: debug: true scans: - name: workers-scan profile: xccdf_org.ssgproject.content_profile_moderate content: ssg-rhcos4-ds.xml contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc debug: true tailoringConfigMap: name: nist-moderate-modified nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc get mc", "75-worker-scan-chronyd-or-ntpd-specify-remote-server 75-worker-scan-configure-usbguard-auditbackend 75-worker-scan-service-usbguard-enabled 75-worker-scan-usbguard-allow-hid-and-hub", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: storageClassName: standard rotation: 10 size: 10Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *'", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=", "oc -n openshift-compliance annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=", "allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs kind: SecurityContextConstraints metadata: name: restricted-adjusted-compliance priority: 30 1 readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - SETUID - SETGID - MKNOD runAsUser: type: MustRunAsRange seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: - system:serviceaccount:openshift-compliance:api-resource-collector 2 volumes: - configMap - downwardAPI - emptyDir - 
persistentVolumeClaim - projected - secret", "oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml", "securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created", "oc get -n openshift-compliance scc restricted-adjusted-compliance", "NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES restricted-adjusted-compliance false <no value> MustRunAs MustRunAsRange MustRunAs RunAsAny 30 false [\"configMap\",\"downwardAPI\",\"emptyDir\",\"persistentVolumeClaim\",\"projected\",\"secret\"]", "oc get events -n openshift-compliance", "oc describe -n openshift-compliance compliancescan/cis-compliance", "oc -n openshift-compliance logs compliance-operator-775d7bddbd-gj58f | jq -c 'select(.logger == \"profilebundlectrl\")'", "date -d @1596184628.955853 --utc", "oc get -n openshift-compliance profilebundle.compliance", "oc get -n openshift-compliance profile.compliance", "oc logs -n openshift-compliance -lprofile-bundle=ocp4 -c profileparser", "oc get -n openshift-compliance deployments,pods -lprofile-bundle=ocp4", "oc logs -n openshift-compliance pods/<pod-name>", "oc describe -n openshift-compliance pod/<pod-name> -c profileparser", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: my-companys-constraints debug: true For each role, a separate scan will be created pointing to a node-role specified in roles roles: - worker --- apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSettingBinding metadata: name: my-companys-compliance-requirements profiles: # Node checks - name: rhcos4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 # Cluster checks - name: ocp4-e8 kind: Profile apiGroup: compliance.openshift.io/v1alpha1 settingsRef: name: my-companys-constraints kind: ScanSetting apiGroup: compliance.openshift.io/v1alpha1", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuiteCreated 9m52s scansettingbindingctrl ComplianceSuite openshift-compliance/my-companys-compliance-requirements created", "oc get cronjobs", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE <cron_name> 0 1 * * * False 0 <none> 151m", "oc -n openshift-compliance get cm -l compliance.openshift.io/scan-name=rhcos4-e8-worker,complianceoperator.openshift.io/scan-script=", "oc get pvc -n openshift-compliance -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "oc get pods -lcompliance.openshift.io/scan-name=rhcos4-e8-worker,workload=scanner --show-labels", "NAME READY STATUS RESTARTS AGE LABELS rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod 0/2 Completed 0 39m compliance.openshift.io/scan-name=rhcos4-e8-worker,targetNode=ip-10-0-169-90.eu-north-1.compute.internal,workload=scanner", "oc describe cm/rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod", "Name: rhcos4-e8-worker-ip-10-0-169-90.eu-north-1.compute.internal-pod Namespace: openshift-compliance Labels: compliance.openshift.io/scan-name-scan=rhcos4-e8-worker complianceoperator.openshift.io/scan-result= Annotations: compliance-remediations/processed: compliance.openshift.io/scan-error-msg: compliance.openshift.io/scan-result: NON-COMPLIANT OpenSCAP-scan-result/node: ip-10-0-169-90.eu-north-1.compute.internal Data ==== exit-code: ---- 2 results: ---- <?xml version=\"1.0\" encoding=\"UTF-8\"?>", "oc get compliancecheckresults -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATUS SEVERITY rhcos4-e8-worker-accounts-no-uid-except-zero PASS high rhcos4-e8-worker-audit-rules-dac-modification-chmod FAIL 
medium", "oc get complianceremediations -lcompliance.openshift.io/scan-name=rhcos4-e8-worker", "NAME STATE rhcos4-e8-worker-audit-rules-dac-modification-chmod NotApplied rhcos4-e8-worker-audit-rules-dac-modification-chown NotApplied rhcos4-e8-worker-audit-rules-execution-chcon NotApplied rhcos4-e8-worker-audit-rules-execution-restorecon NotApplied rhcos4-e8-worker-audit-rules-execution-semanage NotApplied rhcos4-e8-worker-audit-rules-execution-setfiles NotApplied", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc patch complianceremediations/rhcos4-e8-worker-audit-rules-dac-modification-chmod --patch '{\"spec\":{\"apply\":true}}' --type=merge", "oc get mc | grep 75-", "75-rhcos4-e8-worker-my-companys-compliance-requirements 3.2.0 2m46s", "oc describe mc/75-rhcos4-e8-worker-my-companys-compliance-requirements", "Name: 75-rhcos4-e8-worker-my-companys-compliance-requirements Labels: machineconfiguration.openshift.io/role=worker Annotations: remediation/rhcos4-e8-worker-audit-rules-dac-modification-chmod:", "oc -n openshift-compliance annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=", "oc -n openshift-compliance get compliancecheckresults/rhcos4-e8-worker-audit-rules-dac-modification-chmod", "NAME STATUS SEVERITY rhcos4-e8-worker-audit-rules-dac-modification-chmod PASS medium", "oc logs -l workload=<workload_name> -c <container_name>", "spec: config: resources: limits: memory: 500Mi", "oc patch sub compliance-operator -nopenshift-compliance --patch-file co-memlimit-patch.yaml --type=merge", "kind: Subscription metadata: name: compliance-operator namespace: openshift-compliance spec: package: package-name channel: stable config: resources: requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\"", "oc get pod ocp4-pci-dss-api-checks-pod -w", "NAME READY STATUS RESTARTS AGE ocp4-pci-dss-api-checks-pod 0/2 Init:1/2 8 (5m56s ago) 25m ocp4-pci-dss-api-checks-pod 0/2 Init:OOMKilled 8 (6m19s ago) 26m", "timeout: 30m strictNodeScan: true metadata: name: default namespace: openshift-compliance kind: ScanSetting showNotApplicable: false rawResultStorage: nodeSelector: node-role.kubernetes.io/master: '' pvAccessModes: - ReadWriteOnce rotation: 3 size: 1Gi tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 - effect: NoSchedule key: node.kubernetes.io/memory-pressure operator: Exists schedule: 0 1 * * * roles: - master - worker apiVersion: compliance.openshift.io/v1alpha1 maxRetryOnTimeout: 3 scanTolerations: - operator: Exists scanLimits: memory: 1024Mi 1", "oc apply -f scansetting.yaml", "apiVersion: compliance.openshift.io/v1alpha1 kind: ScanSetting metadata: name: default namespace: openshift-compliance rawResultStorage: rotation: 3 size: 1Gi roles: - worker - master scanTolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists schedule: '0 1 * * *' timeout: '10m0s' 1 maxRetryOnTimeout: 3 2", "podman run --rm -v ~/.local/bin:/mnt/out:Z registry.redhat.io/compliance/oc-compliance-rhel8:stable /bin/cp /usr/bin/oc-compliance /mnt/out/", "W0611 20:35:46.486903 11354 manifest.go:440] Chose linux/amd64 manifest from the manifest list.", "oc compliance fetch-raw <object-type> <object-name> -o <output-path>", "oc compliance fetch-raw scansettingbindings 
my-binding -o /tmp/", "Fetching results for my-binding scans: ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master Fetching raw compliance results for scan 'ocp4-cis'.... The raw compliance results are available in the following directory: /tmp/ocp4-cis Fetching raw compliance results for scan 'ocp4-cis-node-worker'........ The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-worker Fetching raw compliance results for scan 'ocp4-cis-node-master'... The raw compliance results are available in the following directory: /tmp/ocp4-cis-node-master", "ls /tmp/ocp4-cis-node-master/", "ocp4-cis-node-master-ip-10-0-128-89.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-150-5.ec2.internal-pod.xml.bzip2 ocp4-cis-node-master-ip-10-0-163-32.ec2.internal-pod.xml.bzip2", "bunzip2 -c resultsdir/worker-scan/worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 > resultsdir/worker-scan/worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml", "ls resultsdir/worker-scan/", "worker-scan-ip-10-0-170-231.us-east-2.compute.internal-pod.xml worker-scan-stage-459-tqkg7-compute-0-pod.xml.bzip2 worker-scan-stage-459-tqkg7-compute-1-pod.xml.bzip2", "oc compliance rerun-now scansettingbindings my-binding", "Rerunning scans from 'my-binding': ocp4-cis Re-running scan 'openshift-compliance/ocp4-cis'", "oc compliance bind [--dry-run] -N <binding name> [-S <scansetting name>] <objtype/objname> [..<objtype/objname>]", "oc get profile.compliance -n openshift-compliance", "NAME AGE VERSION ocp4-cis 3h49m 1.5.0 ocp4-cis-1-4 3h49m 1.4.0 ocp4-cis-1-5 3h49m 1.5.0 ocp4-cis-node 3h49m 1.5.0 ocp4-cis-node-1-4 3h49m 1.4.0 ocp4-cis-node-1-5 3h49m 1.5.0 ocp4-e8 3h49m ocp4-high 3h49m Revision 4 ocp4-high-node 3h49m Revision 4 ocp4-high-node-rev-4 3h49m Revision 4 ocp4-high-rev-4 3h49m Revision 4 ocp4-moderate 3h49m Revision 4 ocp4-moderate-node 3h49m Revision 4 ocp4-moderate-node-rev-4 3h49m Revision 4 ocp4-moderate-rev-4 3h49m Revision 4 ocp4-nerc-cip 3h49m ocp4-nerc-cip-node 3h49m ocp4-pci-dss 3h49m 3.2.1 ocp4-pci-dss-3-2 3h49m 3.2.1 ocp4-pci-dss-4-0 3h49m 4.0.0 ocp4-pci-dss-node 3h49m 3.2.1 ocp4-pci-dss-node-3-2 3h49m 3.2.1 ocp4-pci-dss-node-4-0 3h49m 4.0.0 ocp4-stig 3h49m V2R1 ocp4-stig-node 3h49m V2R1 ocp4-stig-node-v1r1 3h49m V1R1 ocp4-stig-node-v2r1 3h49m V2R1 ocp4-stig-v1r1 3h49m V1R1 ocp4-stig-v2r1 3h49m V2R1 rhcos4-e8 3h49m rhcos4-high 3h49m Revision 4 rhcos4-high-rev-4 3h49m Revision 4 rhcos4-moderate 3h49m Revision 4 rhcos4-moderate-rev-4 3h49m Revision 4 rhcos4-nerc-cip 3h49m rhcos4-stig 3h49m V2R1 rhcos4-stig-v1r1 3h49m V1R1 rhcos4-stig-v2r1 3h49m V2R1", "oc get scansettings -n openshift-compliance", "NAME AGE default 10m default-auto-apply 10m", "oc compliance bind -N my-binding profile/ocp4-cis profile/ocp4-cis-node", "Creating ScanSettingBinding my-binding", "oc compliance controls profile ocp4-cis-node", "+-----------+----------+ | FRAMEWORK | CONTROLS | +-----------+----------+ | CIS-OCP | 1.1.1 | + +----------+ | | 1.1.10 | + +----------+ | | 1.1.11 | + +----------+", "oc compliance fetch-fixes profile ocp4-cis -o /tmp", "No fixes to persist for rule 'ocp4-api-server-api-priority-flowschema-catch-all' 1 No fixes to persist for rule 'ocp4-api-server-api-priority-gate-enabled' No fixes to persist for rule 'ocp4-api-server-audit-log-maxbackup' Persisted rule fix to /tmp/ocp4-api-server-audit-log-maxsize.yaml No fixes to persist for rule 'ocp4-api-server-audit-log-path' No fixes to persist for rule 'ocp4-api-server-auth-mode-no-aa' No fixes to persist for rule 
'ocp4-api-server-auth-mode-node' No fixes to persist for rule 'ocp4-api-server-auth-mode-rbac' No fixes to persist for rule 'ocp4-api-server-basic-auth' No fixes to persist for rule 'ocp4-api-server-bind-address' No fixes to persist for rule 'ocp4-api-server-client-ca' Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-cipher.yaml Persisted rule fix to /tmp/ocp4-api-server-encryption-provider-config.yaml", "head /tmp/ocp4-api-server-audit-log-maxsize.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: maximumFileSizeMegabytes: 100", "oc get complianceremediations -n openshift-compliance", "NAME STATE ocp4-cis-api-server-encryption-provider-cipher NotApplied ocp4-cis-api-server-encryption-provider-config NotApplied", "oc compliance fetch-fixes complianceremediations ocp4-cis-api-server-encryption-provider-cipher -o /tmp", "Persisted compliance remediation fix to /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "head /tmp/ocp4-cis-api-server-encryption-provider-cipher.yaml", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: encryption: type: aescbc", "oc compliance view-result ocp4-cis-scheduler-no-bind-address", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged 1 name: openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-file-integrity", "oc get deploy -n openshift-file-integrity", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7", "oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity", "oc get fileintegrities -n openshift-file-integrity", "NAME AGE worker-fileintegrity 14s", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"", "Active", "oc get fileintegritynodestatuses", "NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "oc get fileintegritynodestatuses -w", "NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal 
ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]", "oc debug node/ip-10-0-130-192.ec2.internal", "Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj", "oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]", "oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! 
Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>", "oc get cm <failure-cm-name> -o json | jq -r '.data.integritylog' | base64 -d | gunzip", "oc get events --field-selector reason=FileIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! 
a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc explain fileintegrity.spec", "oc explain fileintegrity.spec.config", "oc describe cm/worker-fileintegrity", "@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX", "oc extract cm/worker-fileintegrity --keys=aide.conf", "vim aide.conf", "/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db", "!/opt/mydaemon/", "/hostroot/etc/ CONTENT_EX", "oc create cm master-aide-conf --from-file=aide.conf", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity", "oc describe cm/master-fileintegrity | grep /opt/mydaemon", "!/hostroot/opt/mydaemon", "oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=", "ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 
1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55", "oc -n openshift-file-integrity get ds/aide-worker-fileintegrity", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6", "Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "apiVersion: v1 kind: Namespace metadata: name: openshift-security-profiles labels: openshift.io/cluster-monitoring: \"true\"", "oc create -f namespace-object.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: security-profiles-operator namespace: openshift-security-profiles", "oc create -f operator-group-object.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: security-profiles-operator-sub namespace: openshift-security-profiles spec: channel: release-alpha-rhel-8 installPlanApproval: Automatic name: security-profiles-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f subscription-object.yaml", "oc get csv -n openshift-security-profiles", "oc get deploy -n openshift-security-profiles", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"verbosity\":1}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc new-project my-namespace", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: profile1 spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc -n my-namespace get seccompprofile profile1 --output wide", "NAME STATUS AGE SECCOMPPROFILE.LOCALHOSTPROFILE profile1 Installed 14s operator/my-namespace/profile1.json", "oc get sp profile1 --output=jsonpath='{.status.localhostProfile}'", "operator/my-namespace/profile1.json", "spec: template: spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/my-namespace/profile1.json", "oc -n my-namespace patch deployment myapp --patch-file patch.yaml --type=merge", "deployment.apps/myapp patched", "oc -n my-namespace get deployment myapp --output=jsonpath='{.spec.template.spec.securityContext}' | jq .", "{ \"seccompProfile\": { \"localhostProfile\": \"operator/my-namespace/profile1.json\", \"type\": \"localhost\" } }", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SeccompProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seccompProfile}'", "{\"localhostProfile\":\"operator/my-namespace/profile.json\",\"type\":\"Localhost\"}", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: 
security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SeccompProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0523 14:19:08.747313 430694 enricher.go:445] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"executable\"=\"/usr/local/bin/redis-server\" \"namespace\"=\"my-namespace\" \"node\"=\"xiyuan-23-5g2q9-worker-eastus2-6rpgf\" \"pid\"=656802 \"pod\"=\"my-pod\" \"syscallID\"=0 \"syscallName\"=\"read\" \"timestamp\"=\"1684851548.745:207179\" \"type\"=\"seccomp\"", "oc -n my-namepace delete pod my-pod", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx Installed 2m48s test-recording-redis Installed 2m48s", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SeccompProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SeccompProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get seccompprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME STATUS AGE test-recording-nginx-record Installed 55s", "oc get seccompprofiles test-recording-nginx-record -o yaml", "oc new-project nginx-deploy", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: allow: '@self': tcp_socket: - listen http_cache_port_t: tcp_socket: - name_bind node_t: tcp_socket: - node_bind inherit: - kind: System name: container", "oc wait --for=condition=ready -n nginx-deploy selinuxprofile nginx-secure", "selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure condition met", "oc -n openshift-security-profiles rsh -c selinuxd ds/spod", "cat /etc/selinux.d/nginx-secure_nginx-deploy.cil", "(block nginx-secure_nginx-deploy (blockinherit container) (allow process nginx-secure_nginx-deploy.process ( tcp_socket ( listen ))) (allow process http_cache_port_t ( tcp_socket ( name_bind ))) (allow process node_t ( tcp_socket ( node_bind ))) )", "semodule -l | grep nginx-secure", "nginx-secure_nginx-deploy", "oc label ns nginx-deploy security.openshift.io/scc.podSecurityLabelSync=false", "oc label ns nginx-deploy 
--overwrite=true pod-security.kubernetes.io/enforce=privileged", "oc get selinuxprofile.security-profiles-operator.x-k8s.io/nginx-secure -n nginx-deploy -ojsonpath='{.status.usage}'", "nginx-secure_nginx-deploy.process", "apiVersion: v1 kind: Pod metadata: name: nginx-secure namespace: nginx-deploy spec: containers: - image: nginxinc/nginx-unprivileged:1.21 name: nginx securityContext: seLinuxOptions: # NOTE: This uses an appropriate SELinux type type: nginx-secure_nginx-deploy.process", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha2 kind: SelinuxProfile metadata: name: nginx-secure namespace: nginx-deploy spec: permissive: true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileBinding metadata: namespace: my-namespace name: nginx-binding spec: profileRef: kind: SelinuxProfile 1 name: profile 2 image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 3", "oc label ns my-namespace spo.x-k8s.io/enable-binding=true", "apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: test-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc create -f test-pod.yaml", "oc get pod test-pod -o jsonpath='{.spec.containers[*].securityContext.seLinuxOptions.type}'", "profile_nginx-binding.process", "oc new-project nginx-secure", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: spo-nginx namespace: nginx-secure subjects: - kind: ServiceAccount name: spo-deploy-test roleRef: kind: Role name: spo-nginx apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: creationTimestamp: null name: spo-nginx namespace: nginx-secure rules: - apiGroups: - security.openshift.io resources: - securitycontextconstraints resourceNames: - privileged verbs: - use", "apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: spo-deploy-test namespace: nginx-secure", "apiVersion: apps/v1 kind: Deployment metadata: name: selinux-test namespace: nginx-secure metadata: labels: app: selinux-test spec: replicas: 3 selector: matchLabels: app: selinux-test template: metadata: labels: app: selinux-test spec: serviceAccountName: spo-deploy-test securityContext: seLinuxOptions: type: nginx-secure_nginx-secure.process 1 containers: - name: nginx-unpriv image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc new-project my-namespace", "oc label ns my-namespace spo.x-k8s.io/enable-recording=true", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: namespace: my-namespace name: test-recording spec: kind: SelinuxProfile recorder: logs podSelector: matchLabels: app: my-app", "apiVersion: v1 kind: Pod metadata: namespace: my-namespace name: my-pod labels: app: my-app spec: containers: - name: nginx image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080 - name: redis image: quay.io/security-profiles-operator/redis:6.2.1", "oc -n my-namespace get pods", "NAME READY STATUS RESTARTS AGE my-pod 2/2 Running 0 18s", "oc -n openshift-security-profiles logs --since=1m --selector name=spod -c log-enricher", "I0517 13:55:36.383187 348295 enricher.go:376] log-enricher \"msg\"=\"audit\" \"container\"=\"redis\" \"namespace\"=\"my-namespace\" \"node\"=\"ip-10-0-189-53.us-east-2.compute.internal\" \"perm\"=\"name_bind\" \"pod\"=\"my-pod\" \"profile\"=\"test-recording_redis_6kmrb_1684331729\" 
\"scontext\"=\"system_u:system_r:selinuxrecording.process:s0:c4,c27\" \"tclass\"=\"tcp_socket\" \"tcontext\"=\"system_u:object_r:redis_port_t:s0\" \"timestamp\"=\"1684331735.105:273965\" \"type\"=\"selinux\"", "oc -n my-namepace delete pod my-pod", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx test-recording-nginx_my-namespace.process Installed test-recording-redis test-recording-redis_my-namespace.process Installed", "apiVersion: security-profiles-operator.x-k8s.io/v1alpha1 kind: ProfileRecording metadata: # The name of the Recording is the same as the resulting SelinuxProfile CRD # after reconciliation. name: test-recording namespace: my-namespace spec: kind: SelinuxProfile recorder: logs mergeStrategy: containers podSelector: matchLabels: app: sp-record", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite=true", "apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deploy namespace: my-namespace spec: replicas: 3 selector: matchLabels: app: sp-record template: metadata: labels: app: sp-record spec: serviceAccountName: spo-record-sa containers: - name: nginx-record image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21 ports: - containerPort: 8080", "oc delete deployment nginx-deploy -n my-namespace", "oc delete profilerecording test-recording -n my-namespace", "oc get selinuxprofiles -lspo.x-k8s.io/recording-id=test-recording -n my-namespace", "NAME USAGE STATE test-recording-nginx-record test-recording-nginx-record_my-namespace.process Installed", "oc get selinuxprofiles test-recording-nginx-record -o yaml", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"allowedSyscalls\": [\"exit\", \"exit_group\", \"futex\", \"nanosleep\"]}}'", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: namespace: my-namespace name: example-name spec: defaultAction: SCMP_ACT_ERRNO baseProfileName: runc-v1.0.0 syscalls: - action: SCMP_ACT_ALLOW names: - exit_group", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableMemoryOptimization\":true}}'", "apiVersion: v1 kind: Pod metadata: name: my-recording-pod labels: spo.x-k8s.io/enable-recording: \"true\"", "oc -n openshift-security-profiles patch spod spod --type merge -p '{\"spec\":{\"daemonResourceRequirements\": { \"requests\": {\"memory\": \"256Mi\", \"cpu\": \"250m\"}, \"limits\": {\"memory\": \"512Mi\", \"cpu\": \"500m\"}}}}'", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"priorityClassName\":\"my-priority-class\"}}'", "securityprofilesoperatordaemon.openshift-security-profiles.x-k8s.io/spod patched", "oc get svc/metrics -n openshift-security-profiles", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE metrics ClusterIP 10.0.0.228 <none> 443/TCP 43s", "oc run --rm -i --restart=Never --image=registry.fedoraproject.org/fedora-minimal:latest -n openshift-security-profiles metrics-test -- bash -c 'curl -ks -H \"Authorization: Bearer USD(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://metrics.openshift-security-profiles/metrics-spod'", "HELP security_profiles_operator_seccomp_profile_total Counter about seccomp profile operations. 
TYPE security_profiles_operator_seccomp_profile_total counter security_profiles_operator_seccomp_profile_total{operation=\"delete\"} 1 security_profiles_operator_seccomp_profile_total{operation=\"update\"} 2", "oc get clusterrolebinding spo-metrics-client -o wide", "NAME ROLE AGE USERS GROUPS SERVICEACCOUNTS spo-metrics-client ClusterRole/spo-metrics-client 35m openshift-security-profiles/default", "oc -n openshift-security-profiles patch spod spod --type=merge -p '{\"spec\":{\"enableLogEnricher\":true}}'", "securityprofilesoperatordaemon.security-profiles-operator.x-k8s.io/spod patched", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "I0623 12:51:04.257814 1854764 deleg.go:130] setup \"msg\"=\"starting component: log-enricher\" \"buildDate\"=\"1980-01-01T00:00:00Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"unknown\" \"gitTreeState\"=\"clean\" \"goVersion\"=\"go1.16.2\" \"platform\"=\"linux/amd64\" \"version\"=\"0.4.0-dev\" I0623 12:51:04.257890 1854764 enricher.go:44] log-enricher \"msg\"=\"Starting log-enricher on node: 127.0.0.1\" I0623 12:51:04.257898 1854764 enricher.go:46] log-enricher \"msg\"=\"Connecting to local GRPC server\" I0623 12:51:04.258061 1854764 enricher.go:69] log-enricher \"msg\"=\"Reading from file /var/log/audit/audit.log\" 2021/06/23 12:51:04 Seeked /var/log/audit/audit.log - &{Offset:0 Whence:2}", "apiVersion: security-profiles-operator.x-k8s.io/v1beta1 kind: SeccompProfile metadata: name: log namespace: default spec: defaultAction: SCMP_ACT_LOG", "apiVersion: v1 kind: Pod metadata: name: log-pod spec: securityContext: seccompProfile: type: Localhost localhostProfile: operator/default/log.json containers: - name: log-container image: quay.io/security-profiles-operator/test-nginx-unprivileged:1.21", "oc -n openshift-security-profiles logs -f ds/spod log-enricher", "... I0623 12:59:11.479869 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.205:1061\" \"type\"=\"seccomp\" I0623 12:59:11.487323 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1062\" \"type\"=\"seccomp\" I0623 12:59:11.492157 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=157 \"syscallName\"=\"prctl\" \"timestamp\"=\"1624453150.205:1063\" \"type\"=\"seccomp\" ... 
I0623 12:59:20.258523 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=12 \"syscallName\"=\"brk\" \"timestamp\"=\"1624453150.235:2873\" \"type\"=\"seccomp\" I0623 12:59:20.263349 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=21 \"syscallName\"=\"access\" \"timestamp\"=\"1624453150.235:2874\" \"type\"=\"seccomp\" I0623 12:59:20.354091 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2875\" \"type\"=\"seccomp\" I0623 12:59:20.358844 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=5 \"syscallName\"=\"fstat\" \"timestamp\"=\"1624453150.235:2876\" \"type\"=\"seccomp\" I0623 12:59:20.363510 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=9 \"syscallName\"=\"mmap\" \"timestamp\"=\"1624453150.235:2877\" \"type\"=\"seccomp\" I0623 12:59:20.454127 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=3 \"syscallName\"=\"close\" \"timestamp\"=\"1624453150.235:2878\" \"type\"=\"seccomp\" I0623 12:59:20.458654 1854764 enricher.go:111] log-enricher \"msg\"=\"audit\" \"container\"=\"log-container\" \"executable\"=\"/usr/sbin/nginx\" \"namespace\"=\"default\" \"node\"=\"127.0.0.1\" \"pid\"=1905792 \"pod\"=\"log-pod\" \"syscallID\"=257 \"syscallName\"=\"openat\" \"timestamp\"=\"1624453150.235:2879\" \"type\"=\"seccomp\" ...", "spec: webhookOptions: - name: recording.spo.io objectSelector: matchExpressions: - key: spo-record operator: In values: - \"true\"", "oc -n openshift-security-profiles patch spod spod -p USD(cat /tmp/spod-wh.patch) --type=merge", "oc get MutatingWebhookConfiguration spo-mutating-webhook-configuration -oyaml", "oc -n openshift-security-profiles logs openshift-security-profiles-<id>", "I1019 19:34:14.942464 1 main.go:90] setup \"msg\"=\"starting openshift-security-profiles\" \"buildDate\"=\"2020-10-19T19:31:24Z\" \"compiler\"=\"gc\" \"gitCommit\"=\"a3ef0e1ea6405092268c18f240b62015c247dd9d\" \"gitTreeState\"=\"dirty\" \"goVersion\"=\"go1.15.1\" \"platform\"=\"linux/amd64\" \"version\"=\"0.2.0-dev\" I1019 19:34:15.348389 1 listener.go:44] controller-runtime/metrics \"msg\"=\"metrics server is starting to listen\" \"addr\"=\":8080\" I1019 19:34:15.349076 1 main.go:126] setup \"msg\"=\"starting manager\" I1019 19:34:15.349449 1 internal.go:391] controller-runtime/manager \"msg\"=\"starting metrics server\" \"path\"=\"/metrics\" I1019 19:34:15.350201 1 controller.go:142] controller \"msg\"=\"Starting EventSource\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" 
\"reconcilerKind\"=\"SeccompProfile\" \"source\"={\"Type\":{\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"defaultAction\":\"\"}}} I1019 19:34:15.450674 1 controller.go:149] controller \"msg\"=\"Starting Controller\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" I1019 19:34:15.450757 1 controller.go:176] controller \"msg\"=\"Starting workers\" \"controller\"=\"profile\" \"reconcilerGroup\"=\"security-profiles-operator.x-k8s.io\" \"reconcilerKind\"=\"SeccompProfile\" \"worker count\"=1 I1019 19:34:15.453102 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"nginx-1.19.1\" \"name\"=\"nginx-1.19.1\" \"resource version\"=\"728\" I1019 19:34:15.453618 1 profile.go:148] profile \"msg\"=\"Reconciled profile from SeccompProfile\" \"namespace\"=\"openshift-security-profiles\" \"profile\"=\"openshift-security-profiles\" \"name\"=\"openshift-security-profiles\" \"resource version\"=\"729\"", "oc exec -t -n openshift-security-profiles openshift-security-profiles-<id> -- ls /var/lib/kubelet/seccomp/operator/my-namespace/my-workload", "profile-block.json profile-complain.json", "oc delete MutatingWebhookConfiguration spo-mutating-webhook-configuration", "oc get packagemanifests -n openshift-marketplace | grep tang", "tang-operator Red Hat", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tang-operator namespace: openshift-operators spec: channel: stable 1 installPlanApproval: Automatic name: tang-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4", "oc apply -f tang-operator.yaml", "oc -n openshift-operators get pods", "NAME READY STATUS RESTARTS AGE tang-operator-controller-manager-694b754bd6-4zk7x 2/2 Running 0 12s", "oc -n nbde describe tangserver", "... Status: Active Keys: File Name: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: - sha1: \"PvYQKtrTuYsMV2AomUeHrUWkCGg\" 1", "oc apply -f minimal-keyretrieve-rotate-tangserver.yaml", "oc -n nbde describe tangserver", "... Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Hidden Keys: File Name: .QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg.jwk Generated: 2023-10-25 15:37:29.126928965 +0000 Hidden: 2023-10-25 15:38:13.515467436 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "oc -n nbde describe tangserver", "... Status: Active Keys: File Name: PvYQKtrTuYsMV2AomUeHrUWkCGg.jwk Generated: 2022-02-08 15:44:17.030090484 +0000 sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg sha256: QS82aXnPKA4XpfHr3umbA0r2iTbRcpWQ0VI2Qdhi6xg ...", "apiVersion: daemons.redhat.com/v1alpha1 kind: TangServer metadata: name: tangserver namespace: nbde finalizers: - finalizer.daemons.tangserver.redhat.com spec: replicas: 1 hiddenKeys: [] 1", "oc apply -f hidden-keys-deletion-tangserver.yaml", "oc -n nbde describe tangserver", "... 
Spec: Hidden Keys: sha1: PvYQKtrTuYsMV2AomUeHrUWkCGg Replicas: 1 Status: Active Keys: File Name: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY.jwk Generated: 2023-10-25 15:38:18.134939752 +0000 sha1: vVxkNCNq7gygeeA9zrHrbc3_NZ4 sha256: T-0wx1HusMeWx4WMOk4eK97Q5u4dY5tamdDs7_ughnY Status: Ready: 1 Running: 1 Service External URL: http://35.222.247.84:7500/adv Tang Server Error: No Events: ...", "curl 2> /dev/null http://34.28.173.205:7500/adv | jq", "{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }", "oc -n nbde describe tangserver", "... Spec: ... Status: Ready: 1 Running: 1 Service External URL: http://34.28.173.205:7500/adv Tang Server Error: No Events: ...", "curl 2> /dev/null http://34.28.173.205:7500/adv | jq", "{ \"payload\": \"eyJrZXlzIj...eSJdfV19\", \"protected\": \"eyJhbGciOiJFUzUxMiIsImN0eSI6Imp3ay1zZXQranNvbiJ9\", \"signature\": \"AUB0qSFx0FJLeTU...aV_GYWlDx50vCXKNyMMCRx\" }", "oc get pods -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 3m39s cert-manager-cainjector-56cc5f9868-7g9z7 1/1 Running 0 4m5s cert-manager-webhook-d4f79d7f7-9dg9w 1/1 Running 0 4m9s", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: acme-cluster-issuer spec: acme:", "apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging 1 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_for_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - http01: ingress: ingressClassName: openshift-default 4", "oc patch ingress/<ingress-name> --type=merge --patch '{\"spec\":{\"ingressClassName\":\"openshift-default\"}}' -n <namespace>", "oc create -f acme-cluster-issuer.yaml", "apiVersion: v1 kind: Namespace metadata: name: my-ingress-namespace 1", "oc create -f namespace.yaml", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: sample-ingress 1 namespace: my-ingress-namespace 2 annotations: cert-manager.io/cluster-issuer: letsencrypt-staging 3 acme.cert-manager.io/http01-ingress-class: openshift-default 4 spec: ingressClassName: openshift-default 5 tls: - hosts: - <hostname> 6 secretName: sample-tls 7 rules: - host: <hostname> 8 http: paths: - path: / pathType: Prefix backend: service: name: sample-workload 9 port: number: 80", "oc create -f ingress.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc create secret generic aws-secret --from-literal=awsSecretAccessKey=<aws_secret_access_key> \\ 1 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: accessKeyID: <aws_key_id> 6 hostedZoneID: <hosted_zone_id> 7 region: <region_name> 8 secretAccessKeySecretRef: name: \"aws-secret\" 9 key: \"awsSecretAccessKey\" 10", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - 
'--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <letsencrypt_staging> 1 namespace: <issuer_namespace> 2 spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory 3 email: \"<email_address>\" 4 privateKeySecretRef: name: <secret_private_key> 5 solvers: - dns01: route53: hostedZoneID: <hosted_zone_id> 6 region: us-east-1", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic clouddns-dns01-solver-svc-acct --from-file=service_account.json=<path/to/gcp_service_account.json> -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: cloudDNS: project: <project_id> 5 serviceAccountSecretRef: name: clouddns-dns01-solver-svc-acct 6 key: service_account.json 7", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project <issuer_namespace>", "oc patch certmanager/cluster --type=merge -p='{\"spec\":{\"controllerConfig\":{\"overrideArgs\":[\"--issuer-ambient-credentials\"]}}}'", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme_dns01_clouddns_issuer> 1 namespace: <issuer_namespace> spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 2 server: https://acme-staging-v02.api.letsencrypt.org/directory 3 solvers: - dns01: cloudDNS: project: <gcp_project_id> 4", "oc create -f issuer.yaml", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: 1 overrideArgs: - '--dns01-recursive-nameservers-only' 2 - '--dns01-recursive-nameservers=1.1.1.1:53' 3", "oc new-project my-issuer-namespace", "oc create secret generic <secret_name> --from-literal=<azure_secret_access_key_name>=<azure_secret_access_key_value> \\ 1 2 3 -n my-issuer-namespace", "apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: <acme-dns01-azuredns-issuer> 1 namespace: <issuer_namespace> 2 spec: acme: preferredChain: \"\" privateKeySecretRef: name: <secret_private_key> 3 server: https://acme-staging-v02.api.letsencrypt.org/directory 4 solvers: - dns01: azureDNS: clientID: <azure_client_id> 5 clientSecretSecretRef: name: <secret_name> 6 key: <azure_secret_access_key_name> 7 subscriptionID: <azure_subscription_id> 8 tenantID: <azure_tenant_id> 9 resourceGroupName: <azure_dns_zone_resource_group> 10 hostedZoneName: <azure_dns_zone> 11 environment: AzurePublicCloud", "oc create -f issuer.yaml", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: <issuer_namespace> 2 spec: isCA: false commonName: '<common_name>' 3 secretName: <secret_name> 4 dnsNames: - \"<domain_name>\" 5 
issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n <issuer_namespace>", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-config spec: isCA: false commonName: \"api.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"api.<cluster_base_domain>\" 4 issuerRef: name: <issuer_name> 5 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n openshift-config", "apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: <tls_cert> 1 namespace: openshift-ingress spec: isCA: false commonName: \"apps.<cluster_base_domain>\" 2 secretName: <secret_name> 3 dnsNames: - \"apps.<cluster_base_domain>\" 4 - \"*.apps.<cluster_base_domain>\" 5 issuerRef: name: <issuer_name> 6 kind: Issuer", "oc create -f certificate.yaml", "oc get certificate -w -n openshift-ingress", "oc label namespace cert-manager openshift.io/cluster-monitoring=true", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: prometheus-k8s namespace: cert-manager rules: - apiGroups: - \"\" resources: - services - endpoints - pods verbs: - get - list - watch --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: prometheus-k8s namespace: cert-manager roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring --- apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: cert-manager app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager name: cert-manager namespace: cert-manager spec: endpoints: - interval: 30s port: tcp-prometheus-servicemonitor scheme: http selector: matchLabels: app.kubernetes.io/component: controller app.kubernetes.io/instance: cert-manager app.kubernetes.io/name: cert-manager", "oc create -f monitoring.yaml", "{instance=\"<endpoint>\"} 1", "{endpoint=\"tcp-prometheus-servicemonitor\"}", "oc create configmap trusted-ca -n cert-manager", "oc label cm trusted-ca config.openshift.io/inject-trusted-cabundle=true -n cert-manager", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"TRUSTED_CA_CONFIGMAP_NAME\",\"value\":\"trusted-ca\"}]}}}'", "oc rollout status deployment/cert-manager-operator-controller-manager -n cert-manager-operator && rollout status deployment/cert-manager -n cert-manager && rollout status deployment/cert-manager-webhook -n cert-manager && rollout status deployment/cert-manager-cainjector -n cert-manager", "deployment \"cert-manager-operator-controller-manager\" successfully rolled out deployment \"cert-manager\" successfully rolled out deployment \"cert-manager-webhook\" successfully rolled out deployment \"cert-manager-cainjector\" successfully rolled out", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.'containers[0].volumeMounts'}", "[{\"mountPath\":\"/etc/pki/tls/certs/cert-manager-tls-ca-bundle.crt\",\"name\":\"trusted-ca\",\"subPath\":\"ca-bundle.crt\"}]", "oc get deployment cert-manager -n cert-manager -o=jsonpath={.spec.template.spec.volumes}", "[{\"configMap\":{\"defaultMode\":420,\"name\":\"trusted-ca\"},\"name\":\"trusted-ca\"}]", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideEnv: - name: HTTP_PROXY value: 
http://<proxy_url> 1 - name: HTTPS_PROXY value: https://<proxy_url> 2 - name: NO_PROXY value: <ignore_proxy_domains> 3", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc get pod <redeployed_cert-manager_controller_pod> -n cert-manager -o yaml", "env: - name: HTTP_PROXY value: http://<PROXY_URL> - name: HTTPS_PROXY value: https://<PROXY_URL> - name: NO_PROXY value: <IGNORE_PROXY_DOMAINS>", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--dns01-recursive-nameservers=<host>:<port>' 1 - '--dns01-recursive-nameservers-only' 2 - '--acme-http01-solver-nameservers=<host>:<port>' 3 - '--v=<verbosity_level>' 4 - '--metrics-listen-address=<host>:<port>' 5 - '--issuer-ambient-credentials' 6 webhookConfig: overrideArgs: - '--v=4' 7 cainjectorConfig: overrideArgs: - '--v=2' 8", "oc get pods -n cert-manager -o yaml", "metadata: name: cert-manager-6d4b5d4c97-kldwl namespace: cert-manager spec: containers: - args: - --acme-http01-solver-nameservers=1.1.1.1:53 - --cluster-resource-namespace=USD(POD_NAMESPACE) - --dns01-recursive-nameservers=1.1.1.1:53 - --dns01-recursive-nameservers-only - --leader-election-namespace=kube-system - --max-concurrent-challenges=60 - --metrics-listen-address=0.0.0.0:9042 - --v=6 metadata: name: cert-manager-cainjector-866c4fd758-ltxxj namespace: cert-manager spec: containers: - args: - --leader-election-namespace=kube-system - --v=2 metadata: name: cert-manager-webhook-6d48f88495-c88gd namespace: cert-manager spec: containers: - args: - --v=4", "oc get certificate", "NAME READY SECRET AGE certificate-from-clusterissuer-route53-ambient True certificate-from-clusterissuer-route53-ambient 8h", "oc edit certmanager cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager metadata: name: cluster spec: controllerConfig: overrideArgs: - '--enable-certificate-owner-ref'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager -o yaml", "metadata: name: cert-manager-6e4b4d7d97-zmdnb namespace: cert-manager spec: containers: - args: - --enable-certificate-owner-ref", "oc get deployment -n cert-manager", "NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 1/1 1 1 53m cert-manager-cainjector 1/1 1 1 53m cert-manager-webhook 1/1 1 1 53m", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: {} 1 metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: {} 2 metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: {} 3", "oc patch certmanager.operator cluster --type=merge -p=\" spec: controllerConfig: overrideResources: limits: 1 cpu: 200m 2 memory: 64Mi 3 requests: 4 cpu: 10m 5 memory: 16Mi 6 webhookConfig: overrideResources: limits: 7 cpu: 200m 8 memory: 64Mi 9 requests: 10 cpu: 10m 11 memory: 16Mi 12 cainjectorConfig: overrideResources: limits: 13 cpu: 200m 14 memory: 64Mi 15 requests: 16 cpu: 10m 17 memory: 16Mi 18 \"", "certmanager.operator.openshift.io/cluster patched", "oc get deployment -n cert-manager -o yaml", "metadata: name: cert-manager namespace: cert-manager spec: template: spec: containers: - name: cert-manager-controller resources: limits: cpu: 200m memory: 
64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-cainjector namespace: cert-manager spec: template: spec: containers: - name: cert-manager-cainjector resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi metadata: name: cert-manager-webhook namespace: cert-manager spec: template: spec: containers: - name: cert-manager-webhook resources: limits: cpu: 200m memory: 64Mi requests: cpu: 10m memory: 16Mi", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl aws create-iam-roles --name <user_defined_name> --region=<aws_region> --credentials-requests-dir=<path_to_credrequests_dir> --identity-provider-arn <oidc_provider_arn> --output-dir=<path_to_output_dir>", "2023/05/15 18:10:34 Role arn:aws:iam::XXXXXXXXXXXX:role/<user_defined_name>-cert-manager-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: <path_to_output_dir>/manifests/cert-manager-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role <user_defined_name>-cert-manager-aws-creds", "oc -n cert-manager annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "oc delete pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 39s", "oc set env -n cert-manager po/<cert_manager_controller_pod_name> --list", "pods/cert-manager-57f9555c54-vbcpg, container cert-manager-controller POD_NAMESPACE from field path metadata.namespace AWS_ROLE_ARN=XXXXXXXXXXXX AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token", "oc edit certmanager.operator cluster", "apiVersion: operator.openshift.io/v1alpha1 kind: CertManager spec: logLevel: Normal 1", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type='merge' -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"OPERATOR_LOG_LEVEL\",\"value\":\"v\"}]}}}' 1", "oc set env deploy/cert-manager-operator-controller-manager -n cert-manager-operator --list | grep -e OPERATOR_LOG_LEVEL -e container", "deployments/cert-manager-operator-controller-manager, container kube-rbac-proxy OPERATOR_LOG_LEVEL=9 deployments/cert-manager-operator-controller-manager, container cert-manager-operator OPERATOR_LOG_LEVEL=9", "oc logs deploy/cert-manager-operator-controller-manager -n cert-manager-operator", "mkdir credentials-request", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "ccoctl gcp create-service-accounts --name <user_defined_name> --output-dir=<path_to_output_dir> 
--credentials-requests-dir=<path_to_credrequests_dir> --workload-identity-pool <workload_identity_pool> --workload-identity-provider <workload_identity_provider> --project <gcp_project_id>", "ccoctl gcp create-service-accounts --name abcde-20230525-4bac2781 --output-dir=/home/outputdir --credentials-requests-dir=/home/credentials-requests --workload-identity-pool abcde-20230525-4bac2781 --workload-identity-provider abcde-20230525-4bac2781 --project openshift-gcp-devel", "ls <path_to_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: volumeMounts: - mountPath: /var/run/secrets/openshift/serviceaccount name: bound-sa-token - mountPath: /.config/gcloud name: cloud-credentials volumes: - name: bound-sa-token projected: sources: - serviceAccountToken: audience: openshift path: token - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"route53:GetChange\" effect: Allow resource: \"arn:aws:route53:::change/*\" - action: - \"route53:ChangeResourceRecordSets\" - \"route53:ListResourceRecordSets\" effect: Allow resource: \"arn:aws:route53:::hostedzone/*\" - action: - \"route53:ListHostedZonesByName\" effect: Allow resource: \"*\" secretRef: name: aws-creds namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"aws-creds\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: - mountPath: /.aws name: cloud-credentials volumes: - name: cloud-credentials secret: secretName: aws-creds", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cert-manager namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/dns.admin secretRef: name: gcp-credentials namespace: cert-manager serviceAccountNames: - cert-manager", "oc create -f sample-credential-request.yaml", "oc -n cert-manager-operator patch subscription openshift-cert-manager-operator --type=merge -p '{\"spec\":{\"config\":{\"env\":[{\"name\":\"CLOUD_CREDENTIALS_SECRET_NAME\",\"value\":\"gcp-credentials\"}]}}}'", "oc get pods -l app.kubernetes.io/name=cert-manager -n cert-manager", "NAME READY STATUS RESTARTS AGE cert-manager-bd7fbb9fc-wvbbt 1/1 Running 0 15m39s", "oc get -n cert-manager pod/<cert-manager_controller_pod_name> -o yaml", "spec: containers: - args: 
volumeMounts: - mountPath: /.config/gcloud name: cloud-credentials . volumes: - name: cloud-credentials secret: items: - key: service_account.json path: application_default_credentials.json secretName: gcp-credentials", "oc delete deployment -n cert-manager -l app.kubernetes.io/instance=cert-manager", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}", "oc adm node-logs --role=master --path=openshift-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}", "oc adm node-logs --role=master --path=kube-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log 
ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=kube-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}", "oc adm node-logs --role=master --path=oauth-apiserver/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm node-logs --role=master --path=oauth-server/", "ci-ln-m0wpfjb-f76d1-vnb5x-master-0 
audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log", "oc adm node-logs <node_name> --path=oauth-server/<log_name>", "oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'", "oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\")'", "oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'", "oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'", "oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'", "oc adm must-gather -- /usr/bin/gather_audit_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: WriteRequestBodies 1", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: customRules: 1 - group: system:authenticated:oauth profile: WriteRequestBodies - group: system:authenticated profile: AllRequestBodies profile: Default 2", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc edit apiserver cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: spec: audit: profile: None", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "AllNodesAtLatestRevision 3 nodes are at revision 12 1", "oc explain <component>.spec.tlsSecurityProfile.<profile> 1", "oc explain apiserver.spec.tlsSecurityProfile.intermediate", "KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: 
https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2", "oc explain <component>.spec.tlsSecurityProfile 1", "oc explain ingresscontroller.spec.tlsSecurityProfile", "KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string>", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old", "oc edit APIServer cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe apiserver cluster", "Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc describe etcd cluster", "Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12", "apiVersion: config.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get pods -n <namespace>", "oc get pods -n workshop", "NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s", "oc get pod parksmap-1-4xkwf -n workshop -o yaml", "apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2", "oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json", "seccompProfiles: - localhost/<custom-name>.json 1", "spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1", "oc edit apiserver.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1", "oc edit apiserver", "spec: encryption: type: aesgcm 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources 
encrypted: routes.route.openshift.io", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: secrets, configmaps", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "EncryptionCompleted All resources encrypted: oauthaccesstokens.oauth.openshift.io, oauthauthorizetokens.oauth.openshift.io", "oc edit apiserver", "spec: encryption: type: identity 1", "oc get openshiftapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc get authentication.operator.openshift.io -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"Encrypted\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'", "DecryptionCompleted Encryption mode set to identity and everything is decrypted", "oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators", "oc get packagemanifests container-security-operator -o jsonpath='{range .status.channels[*]}{@.currentCSV} {@.name}{\"\\n\"}{end}' | awk '{print \"STARTING_CSV=\" USD1 \" CHANNEL=\" USD2 }' | sort -Vr | head -1", "STARTING_CSV=container-security-operator.v3.8.9 CHANNEL=stable-3.8", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: container-security-operator namespace: openshift-operators spec: channel: USD{CHANNEL} 1 installPlanApproval: Automatic name: container-security-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: USD{STARTING_CSV} 2", "oc apply -f container-security-operator.yaml", "subscription.operators.coreos.com/container-security-operator created", "oc get vuln --all-namespaces", "NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 
9m37s", "oc describe vuln --namespace mynamespace sha256.ac50e3752", "Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries", "oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com", "customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted", "echo plaintext | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' -y >/tmp/encrypted.oldkey", "clevis decrypt </tmp/encrypted.oldkey", "tang-show-keys 7500", "36AHjNH3NZDSnlONLz1-V4ie6t8", "cd /var/db/tang/", "ls -A1", "36AHjNH3NZDSnlONLz1-V4ie6t8.jwk gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk", "for key in *.jwk; do mv -- \"USDkey\" \".USDkey\"; done", "/usr/libexec/tangd-keygen /var/db/tang", "ls -A1", ".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "tang-show-keys 7500", "WOjQYkyK7DxY_T5pMncMO5w0f6E", "clevis decrypt </tmp/encrypted.oldkey", "apiVersion: apps/v1 kind: DaemonSet metadata: name: tang-rekey namespace: openshift-machine-config-operator spec: selector: matchLabels: name: tang-rekey template: metadata: labels: name: tang-rekey spec: containers: - name: tang-rekey image: registry.access.redhat.com/ubi9/ubi-minimal:latest imagePullPolicy: IfNotPresent command: - \"/sbin/chroot\" - \"/host\" - \"/bin/bash\" - \"-ec\" args: - | rm -f /tmp/rekey-complete || true echo \"Current tang pin:\" clevis-luks-list -d USDROOT_DEV -s 1 echo \"Applying new tang pin: USDNEW_TANG_PIN\" clevis-luks-edit -f -d USDROOT_DEV -s 1 -c \"USDNEW_TANG_PIN\" echo \"Pin applied successfully\" touch /tmp/rekey-complete sleep infinity readinessProbe: exec: command: - cat - /host/tmp/rekey-complete initialDelaySeconds: 30 periodSeconds: 10 env: - name: ROOT_DEV value: /dev/disk/by-partlabel/root - name: NEW_TANG_PIN value: >- {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} volumeMounts: - name: hostroot mountPath: /host securityContext: privileged: true volumes: - name: hostroot hostPath: path: / nodeSelector: kubernetes.io/os: linux priorityClassName: system-node-critical restartPolicy: Always serviceAccount: machine-config-daemon serviceAccountName: machine-config-daemon", "oc apply -f tang-rekey.yaml", "oc get -n openshift-machine-config-operator ds tang-rekey", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 0 1 0 kubernetes.io/os=linux 11s", "oc get -n openshift-machine-config-operator ds tang-rekey", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE tang-rekey 1 1 1 1 1 kubernetes.io/os=linux 13h", "echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver02:7500\",\"thp\":\"badthumbprint\"}' | clevis decrypt", "Unable to fetch advertisement: 'http://tangserver02:7500/adv/badthumbprint'!", "echo \"okay\" | clevis encrypt tang '{\"url\":\"http://tangserver03:7500\",\"thp\":\"goodthumbprint\"}' | clevis decrypt", "okay", "oc get pods -A | grep tang-rekey", "openshift-machine-config-operator tang-rekey-7ks6h 1/1 Running 20 (8m39s ago) 89m", "oc logs tang-rekey-7ks6h", "Current tang pin: 1: sss 
'{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://10.46.55.192:7500\"},{\"url\":\"http://10.46.55.192:7501\"},{\"url\":\"http://10.46.55.192:7502\"}]}}' Applying new tang pin: {\"t\":1,\"pins\":{\"tang\":[ {\"url\":\"http://tangserver01:7500\",\"thp\":\"WOjQYkyK7DxY_T5pMncMO5w0f6E\"}, {\"url\":\"http://tangserver02:7500\",\"thp\":\"I5Ynh2JefoAO3tNH9TgI4obIaXI\"}, {\"url\":\"http://tangserver03:7500\",\"thp\":\"38qWZVeDKzCPG9pHLqKzs6k1ons\"} ]}} Updating binding Binding edited successfully Pin applied successfully", "cd /var/db/tang/", "ls -A1", ".36AHjNH3NZDSnlONLz1-V4ie6t8.jwk .gJZiNPMLRBnyo_ZKfK4_5SrnHYo.jwk Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "rm .*.jwk", "ls -A1", "Bp8XjITceWSN_7XFfW7WfJDTomE.jwk WOjQYkyK7DxY_T5pMncMO5w0f6E.jwk", "tang-show-keys 7500", "WOjQYkyK7DxY_T5pMncMO5w0f6E", "clevis decrypt </tmp/encryptValidation", "Error communicating with the server!", "sudo clevis luks pass -d /dev/vda2 -s 1", "sudo clevis luks regen -d /dev/vda2 -s 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/security_and_compliance/index
Chapter 5. Setting up to Measure Performance of Applications
Chapter 5. Setting up to Measure Performance of Applications Red Hat Enterprise Linux includes several applications that can help a developer identify the causes of application performance loss. Select the Debugging Tools , Development Tools , and Performance Tools Add-ons during the system installation to install the tools OProfile , perf , and pcp . Install the tools SystemTap , which allows some types of performance analysis, and Valgrind , which includes modules for performance measurement. NOTE: Red Hat Developer Toolset is shipped as a Software Collection. The scl utility allows you to use it, running commands with the Red Hat Developer Toolset binaries in preference to the Red Hat Enterprise Linux system equivalents. Run the stap-prep helper script to set up the SystemTap environment. Note Running this script installs very large kernel debuginfo packages. For more frequently updated versions of SystemTap , OProfile , and Valgrind , install the Red Hat Developer Toolset package perftools . Additional Resources Red Hat Developer Toolset User Guide - Part IV., Performance Monitoring Tools
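As a brief illustration of the workflow above, the following is a minimal sketch of building and profiling a program with the Red Hat Developer Toolset binaries through scl; the source file and binary names (my-app.c, my-app) are placeholders, and devtoolset-9 is assumed to be the installed collection:
scl enable devtoolset-9 'gcc -g -O2 -o my-app my-app.c'
perf record -g ./my-app
perf report
scl enable devtoolset-9 'valgrind --tool=callgrind ./my-app'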
[ "yum install valgrind systemtap systemtap-runtime", "stap-prep", "yum install devtoolset-9-perftools" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/setting-up_setup-measuring-performance
8.191. python-virtinst
8.191. python-virtinst 8.191.1. RHBA-2014:1444 - python-virtinst bug fix and enhancement update An updated python-virtinst package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The python-virtinst package contains several command-line utilities, including virt-install for building and installing new virtual machines, and virt-clone for cloning existing virtual machines. Bug Fixes BZ# 853386 Due to a bug in the python-virtinst package, the "virt-install --graphics spice" command did not create a spicevmc channel. This bug has been fixed and the aforementioned command now works as expected. BZ# 873545 Due to a bug in the python-virtinst package, the "sparse=false" parameter of the virt-install command was ignored and the newly created storage was not fully allocated. This bug has been fixed and the "virt-install sparse=false" command now works correctly. BZ# 1000980 Prior to this update, when the allocation of a new lvm volume was set to 0 with the virt-install utility, no error message was returned. With this update, virt-install has been modified to display an error message in the aforementioned case. BZ# 1055225 The virt-manager utility failed to clone virtual machines (VMs) with fully allocated logical volumes. After clicking the Clone button in virt-manager GUI, the following message was displayed: Sparse logical volumes are not supported This bug has been fixed and VMs with fully allocated logical volumes can now be cloned successfully with virt-manager GUI. BZ# 1077232 When the virt-install command was executed with the "device=lun" option, it terminated with the following message: Unknown device type 'lun' This bug has been fixed, and the lun device type is now recognized correctly by virt-install. BZ# 1085499 When selecting a PCI device to be assigned to a virtual machine, the virt-manager GUI did not display the domain of PCI devices. Consequently, it was impossible to assign a PCI device in any domain other than zero. This bug has been fixed and virt-manager now displays domains correctly in the described case. In addition, this update adds the following Enhancements BZ# 855740 This update adds support for the MacVTap device driver to the python-virtinst package. MacVTap, which facilitates virtualized bridged networking, can now be used when installing new virtual guest machines. BZ# 1001999 This update adds options for USB redirection to the python-virtinst package. BZ# 1011290 The list of operating system variants displayed by the "virt-install --os-variant list" command has been updated. BZ# 1017423 This update enables the startup_policy parameter for the --disk option of the virt-install command. This parameter allows you to specify what to do with the disk if the source file is not accessible. It accepts the same parameters as the startupPolicy domain XML attribute. Users of python-virtinst are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements.
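To illustrate the virt-install options referenced in this erratum, a hedged example invocation follows; the guest name, image path, ISO path, and OS variant are placeholders and are not taken from the erratum itself:
virt-install --name demo-guest --ram 2048 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/demo-guest.img,size=10,sparse=false,startup_policy=optional \
  --graphics spice --os-variant rhel6 \
  --cdrom /var/tmp/rhel6-dvd.iso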
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/python-virtinst
Chapter 5. Cluster overview page
Chapter 5. Cluster overview page The Cluster overview page shows the status of a Kafka cluster. Here, you can assess the readiness of Kafka brokers, identify any cluster errors or warnings, and gain insights into the cluster's health. At a glance, the page provides information on the number of topics and partitions within the cluster, along with their replication status. Explore cluster metrics through charts displaying used disk space, CPU utilization, and memory usage. Additionally, topic metrics offer a comprehensive view of total incoming and outgoing byte rates for all topics in the Kafka cluster. 5.1. Pausing reconciliation of clusters Pause cluster reconciliations from the Cluster overview page by following these steps. While paused, any changes to the cluster configuration using the Kafka custom resource are ignored until reconciliation is resumed. Procedure From the Streams for Apache Kafka Console, log in to the Kafka cluster that you want to connect to, then click Cluster overview and Pause reconciliation . Confirm the pause, after which the Cluster overview page shows a change of status warning that reconciliation is paused. Click Resume reconciliation to restart reconciliation. Note If the status change is not displayed after pausing reconciliation, try refreshing the page. 5.2. Accessing cluster connection details for client access When connecting a client to a Kafka cluster, retrieve the necessary connection details from the Cluster overview page by following these steps. Procedure From the Streams for Apache Kafka Console, log in to the Kafka cluster that you want to connect to, then click Cluster overview and Cluster connection details . Copy and add bootstrap address and connection properties to your Kafka client configuration to establish a connection with the Kafka cluster. Note Ensure that the authentication type used by the client matches the authentication type configured for the Kafka cluster.
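As a hedged illustration of what the copied connection details might look like in a Kafka client configuration, the following properties assume SASL over TLS with SCRAM-SHA-512; the bootstrap address, user name, and password are placeholders that you replace with the values shown in the Cluster connection details panel:
bootstrap.servers=my-cluster-kafka-bootstrap.example.com:443
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="console-user" password="<password>";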
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_console/con-cluster-overview-page-str
Chapter 4. Finding more information
Chapter 4. Finding more information The following table includes deployment references for Red Hat OpenStack Platform (RHOSP) components. For additional RHOSP documentation, see Product Documentation for Red Hat OpenStack Platform 17.0 . Component Reference Red Hat Enterprise Linux Red Hat OpenStack Platform 17.0 is supported on Red Hat Enterprise Linux 8.4. For information about installing Red Hat Enterprise Linux, see Product Documentation for Red Hat Enterprise Linux 8 . Red Hat OpenStack Platform To install OpenStack components and their dependencies, use RHOSP director. Director uses a basic OpenStack undercloud to provision and manage the OpenStack nodes in the final overcloud. Be aware that you need one extra host machine for the installation of the undercloud, in addition to the environment necessary for the deployed overcloud. For more information, see the Director Installation and Usage guide. High Availability For the configuration of additional high availability components, for example, HAProxy, see the High Availability Deployment and Usage guide. For information about configuring live migration, see Migrating virtual machine instances between Compute nodes in the Configuring the Compute Service for Instance Creation guide. Load-balancing The OpenStack Load-balancing service (octavia) provides a Load Balancing-as-a-Service (LBaaS) version 2 implementation for RHOSP director installations. For more information, see the Using Octavia for Load Balancing-as-a-Service guide. Pacemaker Pacemaker is integrated into Red Hat Enterprise Linux as an add-on. To configure Red Hat Enterprise Linux for high availability, see the Configuring and managing high availability clusters guide.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/product_guide/ch-more_information
Chapter 10. Add hyperconverged hosts to Red Hat Virtualization Manager
Chapter 10. Add hyperconverged hosts to Red Hat Virtualization Manager Follow this process to allow Red Hat Virtualization Manager to manage an existing hyperconverged host. Log in to the Administration Portal. Click Compute Hosts . Click New . The New Host window opens. On the General tab, specify the following details about your hyperconverged host. Host Cluster Name Hostname Password On the General tab, click the Advanced Parameters dropdown, and uncheck the Automatically configure host firewall checkbox. Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/task-add-hosts-to-rhvm
Chapter 99. KafkaUser schema reference
Chapter 99. KafkaUser schema reference Property Description spec The specification of the user. KafkaUserSpec status The status of the Kafka User. KafkaUserStatus
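For context, a minimal sketch of a KafkaUser resource that carries the spec described above; the user and cluster names are placeholders, and the authentication type must match what the Kafka cluster listener is configured for:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls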
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkauser-reference
Chapter 49. Mask Fields Action
Chapter 49. Mask Fields Action Mask fields with a constant value in the message in transit 49.1. Configuration Options The following table summarizes the configuration options available for the mask-field-action Kamelet: Property Name Description Type Default Example fields * Fields Comma separated list of fields to mask string replacement * Replacement Replacement for the fields to be masked string Note Fields marked with an asterisk (*) are mandatory. 49.2. Dependencies At runtime, the mask-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:jackson camel:kamelet camel:core 49.3. Usage This section describes how you can use the mask-field-action . 49.3.1. Knative Action You can use the mask-field-action Kamelet as an intermediate step in a Knative binding. mask-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: "The Fields" replacement: "The Replacement" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 49.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 49.3.1.2. Procedure for using the cluster CLI Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f mask-field-action-binding.yaml 49.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 49.3.2. Kafka Action You can use the mask-field-action Kamelet as an intermediate step in a Kafka binding. mask-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: "The Fields" replacement: "The Replacement" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 49.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 49.3.2.2. Procedure for using the cluster CLI Save the mask-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f mask-field-action-binding.yaml 49.3.2.3. 
Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step mask-field-action -p "step-0.fields=The Fields" -p "step-0.replacement=The Replacement" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 49.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mask-field-action.kamelet.yaml
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: \"The Fields\" replacement: \"The Replacement\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f mask-field-action-binding.yaml", "kamel bind timer-source?message=Hello --step mask-field-action -p \"step-0.fields=The Fields\" -p \"step-0.replacement=The Replacement\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mask-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mask-field-action properties: fields: \"The Fields\" replacement: \"The Replacement\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f mask-field-action-binding.yaml", "kamel bind timer-source?message=Hello --step mask-field-action -p \"step-0.fields=The Fields\" -p \"step-0.replacement=The Replacement\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/mask-field-action
7.22. certmonger
7.22. certmonger 7.22.1. RHBA-2013:0320 - certmonger bug fix and enhancement update Updated certmonger packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The certmonger daemon monitors certificates which have been registered with it, and as a certificate's not-valid-after date approaches, the daemon can optionally attempt to obtain a fresh certificate from a supported CA. Note The certmonger packages have been upgraded to upstream version 0.61, which provides a number of bug fixes and enhancements over the version. (BZ#827611) Bug Fixes BZ# 810016 When certmonger was set up to not attempt to obtain a new certificate and the certificate's valid remaining time crossed a configured time to live (TTL) threshold, certmonger warned of a certificate's impending not-valid-after date. Certmonger then immediately logged the warning again, and continued to do so indefinitely, causing the /var/log/messages file to fill up with warnings. This bug has been fixed and certmonger returns a warning again only when another configured TTL threshold is crossed or the service is restarted. BZ#893611 When certmonger attempts to save a certificate to an NSS database, it necessarily opens that database for writing. Previously, if any other process, including any other certmonger tasks that could require access to that database, had the database open for writing, that database could become corrupted. This update backports changes from later versions of certmonger which change its behavior. Now, actions that could result in database modifications are only performed one at a time. All users of certmonger are advised to upgrade to these updated packages which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/certmonger
Chapter 2. Installing Satellite Server
Chapter 2. Installing Satellite Server When the intended host for Satellite Server is in a disconnected environment, you can install Satellite Server by using an external computer to download an ISO image of the packages, and copying the packages to the system you want to install Satellite Server on. This method is not recommended for any other situation as ISO images might not contain the latest updates, bug fixes, and functionality. Use the following procedures to install Satellite Server, perform the initial configuration, and import subscription manifests. Before you continue, consider which manifests are relevant for your environment. For more information on manifests, see Managing Red Hat Subscriptions in Managing content . Note You cannot register Satellite Server to itself. 2.1. Downloading the binary DVD images Use this procedure to download the ISO images for Red Hat Enterprise Linux and Red Hat Satellite. Procedure Go to Red Hat Customer Portal and log in. Click DOWNLOADS . Select Red Hat Enterprise Linux . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Enterprise Linux for x86_64 . Version is set to the latest minor version of the product you plan to use as the base operating system. Architecture is set to the 64 bit version. On the Product Software tab, download the Binary DVD image for the latest Red Hat Enterprise Linux for x86_64 version. Click DOWNLOADS and select Red Hat Satellite . Ensure that you have the correct product and version for your environment. Product Variant is set to Red Hat Satellite . Version is set to the latest minor version of the product you plan to use. On the Product Software tab, download the Binary DVD image for the latest Red Hat Satellite version. Copy the ISO files to /var/tmp on the Satellite base operating system or other accessible storage device. 2.2. Configuring the base operating system with offline repositories in RHEL 8 Use this procedure to configure offline repositories for the Red Hat Enterprise Linux 8 and Red Hat Satellite ISO images. Procedure Create a directory to serve as the mount point for the ISO file corresponding to the version of the base operating system. Mount the ISO image for Red Hat Enterprise Linux to the mount point. To copy the ISO file's repository data file and change permissions, enter: Edit the repository data file and add the baseurl directive. Verify that the repository has been configured. Create a directory to serve as the mount point for the ISO file of Satellite Server. Mount the ISO image for Satellite Server to the mount point. 2.3. Optional: Using fapolicyd on Satellite Server By enabling fapolicyd on your Satellite Server, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn on or off the fapolicyd on your Satellite Server or Capsule Server at any point. 2.3.1. Installing fapolicyd on Satellite Server You can install fapolicyd along with Satellite Server or can be installed on an existing Satellite Server. If you are installing fapolicyd along with the new Satellite Server, the installation process will detect the fapolicyd in your Red Hat Enterprise Linux host and deploy the Satellite Server rules automatically. Prerequisites Ensure your host has access to the BaseOS repositories of Red Hat Enterprise Linux. 
Procedure For a new installation, install fapolicyd: For an existing installation, install fapolicyd using satellite-maintain packages install: Start the fapolicyd service: Verification Verify that the fapolicyd service is running correctly: New Satellite Server or Capsule Server installations In case of new Satellite Server or Capsule Server installation, follow the standard installation procedures after installing and enabling fapolicyd on your Red Hat Enterprise Linux host. Additional resources For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 Security hardening . 2.4. Installing the Satellite packages from the offline repositories Use this procedure to install the Satellite packages from the offline repositories. Procedure Ensure the ISO images for Red Hat Enterprise Linux Server and Red Hat Satellite are mounted: Import the Red Hat GPG keys: Ensure the base operating system is up to date with the Binary DVD image: Change to the directory where the Satellite ISO is mounted: Run the installation script in the mounted directory: Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. The module satellite:el8 has a dependency for the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause installation process failure, hence can be ignored safely. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.5. Resolving package dependency errors If there are package dependency errors during installation of Satellite Server packages, you can resolve the errors by downloading and installing packages from Red Hat Customer Portal. For more information about resolving dependency errors, see the KCS solution How can I use the yum output to solve yum dependency errors? . If you have successfully installed the Satellite packages, skip this procedure. Procedure Go to the Red Hat Customer Portal and log in. Click DOWNLOADS . Click the Product that contains the package that you want to download. Ensure that you have the correct Product Variant , Version , and Architecture for your environment. Click the Packages tab. In the Search field, enter the name of the package. Click the package. From the Version list, select the version of the package. At the bottom of the page, click Download Now . Copy the package to the Satellite base operating system. On Satellite Server, change to the directory where the package is located: Install the package locally: Change to the directory where the Satellite ISO is mounted: Verify that you have resolved the package dependency errors by installing Satellite Server packages. If there are further package dependency errors, repeat this procedure. Note The script contains a command that enables the satellite:el8 module. Enablement of the module satellite:el8 warns about a conflict with postgresql:10 and ruby:2.5 as these modules are set to the default module versions on Red Hat Enterprise Linux 8. 
The module satellite:el8 has a dependency on the modules postgresql:12 and ruby:2.7 that will be enabled with the satellite:el8 module. These warnings do not cause the installation process to fail, so you can safely ignore them. For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux Application Streams Lifecycle . If you have successfully installed the Satellite packages, the following message is displayed: Install is complete. Please run satellite-installer --scenario satellite . 2.6. Configuring Satellite Server Install Satellite Server using the satellite-installer installation script. Choose from one of the following methods: Section 2.6.1, "Configuring Satellite installation" . This method is performed by running the installation script with one or more command options. The command options override the corresponding default initial configuration options and are recorded in the Satellite answer file. You can run the script as often as needed to configure any necessary options. 2.6.1. Configuring Satellite installation This initial configuration procedure creates an organization, location, user name, and password. After the initial configuration, you can create additional organizations and locations if required. The initial configuration also installs PostgreSQL databases on the same server. The installation process can take tens of minutes to complete. If you are connecting remotely to the system, use a utility such as tmux that allows suspending and reattaching a communication session so that you can check the installation progress in case you become disconnected from the remote system. If you lose connection to the shell where the installation command is running, see the log at /var/log/foreman-installer/satellite.log to determine if the process completed successfully. Considerations Use the satellite-installer --scenario satellite --help command to display the most commonly used options and any default values. Use the satellite-installer --scenario satellite --full-help command to display advanced options. Specify a meaningful value for the option: --foreman-initial-organization . This can be your company name. An internal label that matches the value is also created and cannot be changed afterwards. If you do not specify a value, an organization called Default Organization with the label Default_Organization is created. You can rename the organization but not the label. By default, all configuration files configured by the installer are managed. When satellite-installer runs, it overwrites any manual changes to the managed files with the intended values. This means that running the installer on a broken system should restore it to working order, regardless of changes made. For more information on how to apply custom configuration on other services, see Applying Custom Configuration to Satellite . Procedure Enter the following command with any additional options that you want to use: The script displays its progress and writes logs to /var/log/foreman-installer/satellite.log . Unmount the ISO images: 2.7. Disabling subscription connection Disable the subscription connection on a disconnected Satellite Server to avoid connecting to the Red Hat Portal. This also prevents you from refreshing the manifest and updating upstream entitlements. Procedure In the Satellite web UI, navigate to Administer > Settings . Click the Content tab. Set the Subscription Connection Enabled value to No . 
CLI procedure Enter the following command on Satellite Server: 2.8. Importing a Red Hat subscription manifest into Satellite Server Use the following procedure to import a Red Hat subscription manifest into Satellite Server. Note Simple Content Access (SCA) is set on the organization, not the manifest. Importing a manifest does not change your organization's Simple Content Access status. Prerequisites Ensure you have a Red Hat subscription manifest exported from the Red Hat Customer Portal. For more information, see Using manifests for a disconnected Satellite Server in Subscription Central . Ensure that you disable subscription connection on your Satellite Server. For more information, see Section 2.7, "Disabling subscription connection" . Procedure In the Satellite web UI, ensure the context is set to the organization you want to use. In the Satellite web UI, navigate to Content > Subscriptions and click Manage Manifest . In the Manage Manifest window, click Choose File . Navigate to the location that contains the Red Hat subscription manifest file, then click Open . CLI procedure Copy the Red Hat subscription manifest file from your local machine to Satellite Server: Log in to Satellite Server as the root user and import the Red Hat subscription manifest file: You can now enable repositories and import Red Hat content. For more information, see Importing Content in Managing content .
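For example, a complete disconnected configuration run might look like the following sketch. The organization, location, and admin credential values are placeholders, and the tmux step is optional; adjust everything to your environment.

# Confirm the offline repositories are still available before configuring
findmnt -t iso9660
dnf repolist

# Run the installer inside tmux so a dropped SSH session does not interrupt it
tmux new-session -s satellite-install
satellite-installer --scenario satellite \
  --foreman-initial-organization "ExampleOrg" \
  --foreman-initial-location "ExampleLocation" \
  --foreman-initial-admin-username admin \
  --foreman-initial-admin-password ExamplePassword1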
[ "scp localfile username@hostname:remotefile", "mkdir /media/rhel8", "mount -o loop rhel8-DVD .iso /media/rhel8", "cp /media/rhel8/media.repo /etc/yum.repos.d/rhel8.repo chmod u+w /etc/yum.repos.d/rhel8.repo", "[RHEL8-BaseOS] name=Red Hat Enterprise Linux BaseOS mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/BaseOS/ [RHEL8-AppStream] name=Red Hat Enterprise Linux Appstream mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/AppStream/", "yum repolist", "mkdir /media/sat6", "mount -o loop sat6-DVD .iso /media/sat6", "dnf install fapolicyd", "satellite-maintain packages install fapolicyd", "systemctl enable --now fapolicyd", "systemctl status fapolicyd", "findmnt -t iso9660", "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release", "dnf upgrade", "cd /media/sat6/", "./install_packages", "cd /path-to-package/", "dnf install package_name", "cd /media/sat6/", "./install_packages", "satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password", "umount /media/sat6 umount /media/rhel8", "hammer settings set --name subscription_connection_enabled --value false", "scp ~/ manifest_file .zip root@ satellite.example.com :~/.", "hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/installing_server_disconnected_satellite
5.3.16. Recreating a Volume Group Directory
5.3.16. Recreating a Volume Group Directory To recreate a volume group directory and logical volume special files, use the vgmknodes command. This command checks the LVM2 special files in the /dev directory that are needed for active logical volumes. It creates any special files that are missing and removes unused ones. You can incorporate the vgmknodes command into the vgscan command by specifying the mknodes argument to the vgscan command.
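For example, either of the following invocations restores missing device nodes; the volume group name is a placeholder.

# Recreate missing special files for active logical volumes in all volume groups
vgmknodes

# Or limit the check to a single volume group (name is an example)
vgmknodes myvg

# Alternatively, rescan volume groups and rebuild the special files in one step
vgscan --mknodes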
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_recreate
Chapter 7. Authentication
Chapter 7. Authentication Abstract This chapter describes how to use policies to configure authentication in an Apache CXF application. Currently, the only credentials type supported in the SOAP layer is the WS-Security UsernameToken. 7.1. Introduction to Authentication Overview In Apache CXF, an application can be set up to use authentication through a combination of policy assertions in the WSDL contract and configuration settings in Blueprint XML. Note Remember, you can also use the HTTPS protocol as the basis for authentication and, in some cases, this might be easier to configure. See Section 3.1, "Authentication Alternatives" . Steps to set up authentication In outline, you need to perform the following steps to set up an application to use authentication: Add a supporting tokens policy to an endpoint in the WSDL contract. This has the effect of requiring the endpoint to include a particular type of token (client credentials) in its request messages. On the client side, provide credentials to send by configuring the relevant endpoint in Blueprint XML. (Optional) On the client side, if you decide to provide passwords using a callback handler, implement the callback handler in Java. On the server side, associate a callback handler class with the endpoint in Blueprint XML. The callback handler is then responsible for authenticating the credentials received from remote clients. 7.2. Specifying an Authentication Policy Overview If you want an endpoint to support authentication, associate a supporting tokens policy assertion with the relevant endpoint binding. There are several different kinds of supporting tokens policy assertions, whose elements all have names of the form *SupportingTokens (for example, SupportingTokens , SignedSupportingTokens , and so on). For a complete list, see the section called "SupportingTokens assertions" . Associating a supporting tokens assertion with an endpoint has the following effects: Messages to or from the endpoint are required to include the specified token type (where the token's direction is specified by the sp:IncludeToken attribute). Depending on the particular type of supporting tokens element you use, the endpoint might be required to sign and/or encrypt the token. The supporting tokens assertion implies that the runtime will check that these requirements are satisfied. But the WS-SecurityPolicy policies do not define the mechanism for providing credentials to the runtime. You must use Blueprint XML configuration to specify the credentials (see Section 7.3, "Providing Client Credentials" ). Syntax The *SupportingTokens elements (that is, all elements with the SupportingTokens suffix; see the section called "SupportingTokens assertions" ) have the following syntax: Where SupportingTokensElement stands for one of the supporting token elements, *SupportingTokens . Typically, if you simply want to include a token (or tokens) in the security header, you would include one or more token assertions, [Token Assertion] , in the policy. In particular, this is all that is required for authentication. If the token is of an appropriate type (for example, an X.509 certificate or a symmetric key), you could theoretically also use it to sign or encrypt specific parts of the current message using the sp:AlgorithmSuite , sp:SignedParts , sp:SignedElements , sp:EncryptedParts , and sp:EncryptedElements elements. This functionality is currently not supported by Apache CXF, however. 
Sample policy Example 7.1, "Example of a Supporting Tokens Policy" shows an example of a policy that requires a WS-Security UsernameToken token (which contains username/password credentials) to be included in the security header. In addition, because the token is specified inside an sp:SignedSupportingTokens element, the policy requires that the token is signed. This example uses a transport binding, so it is the underlying transport that is responsible for signing the message. For example, if the underlying transport is HTTPS, the SSL/TLS protocol (configured with an appropriate algorithm suite) is responsible for signing the entire message, including the security header that contains the specified token. This is sufficient to satisfy the requirement that the supporting token is signed. Example 7.1. Example of a Supporting Tokens Policy Where the presence of the sp:WssUsernameToken10 sub-element indicates that the UsernameToken header should conform to version 1.0 of the WS-Security UsernameToken specification. Token types In principle, you can specify any of the WS-SecurityPolicy token types in a supporting tokens assertion. For SOAP-level authentication, however, only the sp:UsernameToken token type is relevant. sp:UsernameToken In the context of a supporting tokens assertion, this element specifies that a WS-Security UsernameToken is to be included in the security SOAP header. Essentially, a WS-Security UsernameToken is used to send username/password credentials in the WS-Security SOAP header. The sp:UsernameToken element has the following syntax: The sub-elements of sp:UsernameToken are all optional and are not needed for ordinary authentication. Normally, the only part of this syntax that is relevant is the sp:IncludeToken attribute. Note Currently, in the sp:UsernameToken syntax, only the sp:WssUsernameToken10 sub-element is supported in Apache CXF. sp:IncludeToken attribute The value of the sp:IncludeToken must match the WS-SecurityPolicy version from the enclosing policy. The current version is 1.2, but legacy WSDL might use version 1.1. Valid values of the sp:IncludeToken attribute are as follows: Never The token MUST NOT be included in any messages sent between the initiator and the recipient; rather, an external reference to the token should be used. Valid URI values are: 1.2 http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/Never 1.1 http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Never Once The token MUST be included in only one message sent from the initiator to the recipient. References to the token MAY use an internal reference mechanism. Subsequent related messages sent between the recipient and the initiator may refer to the token using an external reference mechanism. Valid URI values are: 1.2 http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/Once 1.1 http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Once AlwaysToRecipient The token MUST be included in all messages sent from initiator to the recipient. The token MUST NOT be included in messages sent from the recipient to the initiator. Valid URI values are: 1.2 http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient 1.1 http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient AlwaysToInitiator The token MUST be included in all messages sent from the recipient to the initiator. The token MUST NOT be included in messages sent from the initiator to the recipient. 
Valid URI values are: 1.2 http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToInitiator 1.1 http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToInitiator Always The token MUST be included in all messages sent between the initiator and the recipient. This is the default behavior. Valid URI values are: 1.2 http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/Always 1.1 http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Always SupportingTokens assertions The following kinds of supporting tokens assertions are supported: the section called "sp:SupportingTokens" . the section called "sp:SignedSupportingTokens" . the section called "sp:EncryptedSupportingTokens" . the section called "sp:SignedEncryptedSupportingTokens" . the section called "sp:EndorsingSupportingTokens" . the section called "sp:SignedEndorsingSupportingTokens" . the section called "sp:EndorsingEncryptedSupportingTokens" . the section called "sp:SignedEndorsingEncryptedSupportingTokens" . sp:SupportingTokens This element requires a token (or tokens) of the specified type to be included in the wsse:Security header. No additional requirements are imposed. Warning This policy does not explicitly require the tokens to be signed or encrypted. It is normally essential, however, to protect tokens by signing and encryption. sp:SignedSupportingTokens This element requires a token (or tokens) of the specified type to be included in the wsse:Security header. In addition, this policy requires that the token is signed, in order to guarantee token integrity. Warning This policy does not explicitly require the tokens to be encrypted. It is normally essential, however, to protect tokens both by signing and encryption. sp:EncryptedSupportingTokens This element requires a token (or tokens) of the specified type to be included in the wsse:Security header. In addition, this policy requires that the token is encrypted, in order to guarantee token confidentiality. Warning This policy does not explicitly require the tokens to be signed. It is normally essential, however, to protect tokens both by signing and encryption. sp:SignedEncryptedSupportingTokens This element requires a token (or tokens) of the specified type to be included in the wsse:Security header. In addition, this policy requires that the token is both signed and encrypted, in order to guarantee token integrity and confidentiality. sp:EndorsingSupportingTokens An endorsing supporting token is used to sign the message signature (primary signature). This signature is known as an endorsing signature or secondary signature . Hence, by applying an endorsing supporting tokens policy, you can have a chain of signatures: the primary signature, which signs the message itself, and the secondary signature, which signs the primary signature. Note If you are using a transport binding (for example, HTTPS), the message signature is not actually part of the SOAP message, so it is not possible to sign the message signature in this case. If you specify this policy with a transport binding, the endorsing token signs the timestamp instead. Warning This policy does not explicitly require the tokens to be signed or encrypted. It is normally essential, however, to protect tokens by signing and encryption. sp:SignedEndorsingSupportingTokens This policy is the same as the endorsing supporting tokens policy, except that the tokens are required to be signed, in order to guarantee token integrity. 
Warning This policy does not explicitly require the tokens to be encrypted. It is normally essential, however, to protect tokens both by signing and encryption. sp:EndorsingEncryptedSupportingTokens This policy is the same as the endorsing supporting tokens policy, except that the tokens are required to be encrypted, in order to guarantee token confidentiality. Warning This policy does not explicitly require the tokens to be signed. It is normally essential, however, to protect tokens both by signing and encryption. sp:SignedEndorsingEncryptedSupportingTokens This policy is the same as the endorsing supporting tokens policy, except that the tokens are required to be signed and encrypted, in order to guarantee token integrity and confidentiality. 7.3. Providing Client Credentials Overview There are essentially two approaches to providing UsernameToken client credentials: you can either set both the username and the password directly in the client's Blueprint XML configuration, or you can set the username in the client's configuration and implement a callback handler to provide passwords programmatically. The latter approach (by programming) has the advantage that passwords are easier to hide from view. Client credentials properties Table 7.1, "Client Credentials Properties" shows the properties you can use to specify WS-Security username/password credentials on a client's request context in Blueprint XML. Table 7.1. Client Credentials Properties Properties Description security.username Specifies the username for UsernameToken policy assertions. security.password Specifies the password for UsernameToken policy assertions. If not specified, the password is obtained by calling the callback handler. security.callback-handler Specifies the class name of the WSS4J callback handler that retrieves passwords for UsernameToken policy assertions. Note that the callback handler can also handle other kinds of security events. Configuring client credentials in Blueprint XML To configure username/password credentials in a client's request context in Blueprint XML, set the security.username and security.password properties as follows: If you prefer not to store the password directly in Blueprint XML (which might potentially be a security hazard), you can provide passwords using a callback handler instead. Programming a callback handler for passwords If you want to use a callback handler to provide passwords for the UsernameToken header, you must first modify the client configuration in Blueprint XML, replacing the security.password setting with a security.callback-handler setting, as follows: In the preceding example, the callback handler is implemented by the UTPasswordCallback class. You can write a callback handler by implementing the javax.security.auth.callback.CallbackHandler interface, as shown in Example 7.2, "Callback Handler for UsernameToken Passwords" . Example 7.2. Callback Handler for UsernameToken Passwords The callback functionality is implemented by the CallbackHandler.handle() method. In this example, it is assumed that the callback objects passed to the handle() method are all of org.apache.ws.security.WSPasswordCallback type (in a more realistic example, you would check the type of the callback objects). A more realistic implementation of a client callback handler would probably consist of prompting the user to enter their password. 
WSPasswordCallback class When a CallbackHandler is called in a Apache CXF client for the purpose of setting a UsernameToken password, the corresponding WSPasswordCallback object has the USERNAME_TOKEN usage code. For more details about the WSPasswordCallback class, see org.apache.ws.security.WSPasswordCallback . The WSPasswordCallback class defines several different usage codes, as follows: USERNAME_TOKEN Obtain the password for UsernameToken credentials. This usage code is used both on the client side (to obtain a password to send to the server) and on the server side (to obtain a password in order to compare it with the password received from the client). On the server side, this code is set in the following cases: Digest password -if the UsernameToken contains a digest password, the callback must return the corresponding password for the given user name (given by WSPasswordCallback.getIdentifier() ). Verification of the password (by comparing with the digest password) is done by the WSS4J runtime. Plaintext password -implemented the same way as the digest password case (since Apache CXF 2.4.0). Custom password type -if getHandleCustomPasswordTypes() is true on org.apache.ws.security.WSSConfig , this case is implemented the same way as the digest password case (since Apache CXF 2.4.0). Otherwise, an exception is thrown. If no Password element is included in a received UsernameToken on the server side, the callback handler is not called (since Apache CXF 2.4.0). DECRYPT Need a password to retrieve a private key from a Java keystore, where WSPasswordCallback.getIdentifier() gives the alias of the keystore entry. WSS4J uses this private key to decrypt the session (symmetric) key. SIGNATURE Need a password to retrieve a private key from a Java keystore, where WSPasswordCallback.getIdentifier() gives the alias of the keystore entry. WSS4J uses this private key to produce a signature. SECRET_KEY Need a secret key for encryption or signature on the outbound side, or for decryption or verification on the inbound side. The callback handler must set the key using the setKey(byte[]) method. SECURITY_CONTEXT_TOKEN Need the key for a wsc:SecurityContextToken , which you provide by calling the setKey(byte[]) method. CUSTOM_TOKEN Need a token as a DOM element. For example, this is used for the case of a reference to a SAML Assertion or SecurityContextToken that is not in the message. The callback handler must set the token using the setCustomToken(Element) method. KEY_NAME (Obsolete) Since Apache CXF 2.4.0, this usage code is obsolete. USERNAME_TOKEN_UNKNOWN (Obsolete) Since Apache CXF 2.4.0, this usage code is obsolete. UNKNOWN Not used by WSS4J. 7.4. Authenticating Received Credentials Overview On the server side, you can verify that received credentials are authentic by registering a callback handler with the Apache CXF runtime. You can either write your own custom code to perform credentials verification or you can implement a callback handler that integrates with a third-party enterprise security system (for example, an LDAP server). Configuring a server callback handler in Blueprint XML To configure a server callback handler that verifies UsernameToken credentials received from clients, set the security.callback-handler property in the server's Blueprint XML configuration, as follows: In the preceding example, the callback handler is implemented by the UTPasswordCallback class. 
Implementing the callback handler to check passwords To implement a callback handler for checking passwords on the server side, implement the javax.security.auth.callback.CallbackHandler interface. The general approach to implementing the CallbackHandler interface for a server is similar to implementing a CallbackHandler for a client. The interpretation given to the returned password on the server side is different, however: the password from the callback handler is compared against the received client password in order to verify the client's credentials. For example, you could use the sample implementation shown in Example 7.2, "Callback Handler for UsernameToken Passwords" to obtain passwords on the server side. On the server side, the WSS4J runtime would compare the password obtained from the callback with the password in the received client credentials. If the two passwords match, the credentials are successfully verified. A more realistic implementation of a server callback handler would involve writing an integration with a third-party database that is used to store security data (for example, integration with an LDAP server).
[ "<sp: SupportingTokensElement xmlns:sp=\"...\" ... > <wsp:Policy xmlns:wsp=\"...\"> [Token Assertion]+ <sp:AlgorithmSuite ... > ... </sp:AlgorithmSuite> ? ( <sp:SignedParts ... > ... </sp:SignedParts> | <sp:SignedElements ... > ... </sp:SignedElements> | <sp:EncryptedParts ... > ... </sp:EncryptedParts> | <sp:EncryptedElements ... > ... </sp:EncryptedElements> | ) * </wsp:Policy> </sp: SupportingTokensElement >", "<wsp:Policy wsu:Id=\"UserNameOverTransport_IPingService_policy\"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> ... </sp:TransportBinding> <sp: SignedSupportingTokens xmlns:sp=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy\"> <wsp:Policy> <sp:UsernameToken sp:IncludeToken=\"http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssUsernameToken10/> </wsp:Policy> </sp:UsernameToken> </wsp:Policy> </sp: SignedSupportingTokens > </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "<sp:UsernameToken sp:IncludeToken=\"xs:anyURI\"? xmlns:sp=\"...\" ... > ( <sp:Issuer>wsa:EndpointReferenceType</sp:Issuer> | <sp:IssuerName>xs:anyURI</sp:IssuerName> ) ? <wst:Claims Dialect=\"...\"> ... </wst:Claims> ? <wsp:Policy xmlns:wsp=\"...\"> ( <sp:NoPassword ... /> | <sp:HashPassword ... /> ) ? ( <sp:RequireDerivedKeys /> | <sp:RequireImpliedDerivedKeys ... /> | <sp:RequireExplicitDerivedKeys ... /> ) ? ( <sp:WssUsernameToken10 ... /> | <sp:WssUsernameToken11 ... /> ) ? </wsp:Policy> </sp:UsernameToken>", "<beans ... > <jaxws:client name=\"{ NamespaceName } LocalPortName \" createdFromAPI=\"true\"> <jaxws:properties> <entry key=\" security.username \" value=\"Alice\"/> <entry key=\" security.password \" value=\"abcd!1234\"/> </jaxws:properties> </jaxws:client> </beans>", "<beans ... > <jaxws:client name=\"{ NamespaceName } LocalPortName \" createdFromAPI=\"true\"> <jaxws:properties> <entry key=\" security.username \" value=\"Alice\"/> <entry key=\" security.callback-handler \" value=\"interop.client.UTPasswordCallback\"/> </jaxws:properties> </jaxws:client> </beans>", "package interop.client; import java.io.IOException; import java.util.HashMap; import java.util.Map; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import org.apache.ws.security.WSPasswordCallback; public class UTPasswordCallback implements CallbackHandler { private Map<String, String> passwords = new HashMap<String, String>(); public UTPasswordCallback() { passwords.put(\"Alice\", \"ecilA\"); passwords.put(\"Frank\", \"invalid-password\"); //for MS clients passwords.put(\"abcd\", \"dcba\"); } public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (int i = 0; i < callbacks.length; i++) { WSPasswordCallback pc = (WSPasswordCallback)callbacks[i]; String pass = passwords.get(pc.getIdentifier()); if (pass != null) { pc.setPassword(pass); return; } } throw new IOException(); } // Add an alias/password pair to the callback mechanism. public void setAliasPassword(String alias, String password) { passwords.put(alias, password); } }", "<beans ... 
> <jaxws:endpoint id=\"UserNameOverTransport\" address=\"https://localhost:9001/UserNameOverTransport\" serviceName=\"interop:PingService10\" endpointName=\"interop:UserNameOverTransport_IPingService\" implementor=\"interop.server.UserNameOverTransport\" depends-on=\"tls-settings\"> <jaxws:properties> <entry key=\" security.username \" value=\"Alice\"/> <entry key=\" security.callback-handler \" value=\"interop.client.UTPasswordCallback\"/> </jaxws:properties> </jaxws:endpoint> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/auth
Chapter 69. software
Chapter 69. software This chapter describes the commands under the software command. 69.1. software config create Create software config Usage: Table 69.1. Positional arguments Value Summary <config-name> Name of the software config to create Table 69.2. Command arguments Value Summary -h, --help Show this help message and exit --config-file <config-file> Path to json/yaml containing map defining <inputs>, <outputs>, and <options> --definition-file <destination-file> Path to software config script/data --group <group> Group name of tool expected by the software config Table 69.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to json -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 69.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.2. software config delete Delete software configs Usage: Table 69.7. Positional arguments Value Summary <config> Ids of the software configs to delete Table 69.8. Command arguments Value Summary -h, --help Show this help message and exit 69.3. software config list List software configs Usage: Table 69.9. Command arguments Value Summary -h, --help Show this help message and exit --limit <limit> Limit the number of configs returned --marker <id> Return configs that appear after the given config id Table 69.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 69.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 69.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.4. software config show Show software config details Usage: Table 69.14. Positional arguments Value Summary <config> Id of the config Table 69.15. Command arguments Value Summary -h, --help Show this help message and exit --config-only Only display the value of the <config> property. Table 69.16. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 69.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.5. software deployment create Create a software deployment. Usage: Table 69.20. Positional arguments Value Summary <deployment-name> Name of the derived config associated with this deployment. This is used to apply a sort order to the list of configurations currently deployed to the server. Table 69.21. Command arguments Value Summary -h, --help Show this help message and exit --input-value <key=value> Input value to set on the deployment. this can be specified multiple times. --action <action> Name of an action for this deployment. this can be a custom action, or one of CREATE, UPDATE, DELETE, SUSPEND, RESUME. Default is UPDATE --config <config> Id of the configuration to deploy --signal-transport <signal-transport> How the server should signal to heat with the deployment output values. TEMP_URL_SIGNAL will create a Swift TempURL to be signaled via HTTP PUT. ZAQAR_SIGNAL will create a dedicated zaqar queue to be signaled using the provided keystone credentials.NO_SIGNAL will result in the resource going to the COMPLETE state without waiting for any signal --container <container> Optional name of container to store temp_url_signal objects in. If not specified a container will be created with a name derived from the DEPLOY_NAME --timeout <timeout> Deployment timeout in minutes --server <server> Id of the server being deployed to Table 69.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to yaml -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 69.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.6. software deployment delete Delete software deployment(s) and correlative config(s). Usage: Table 69.26. Positional arguments Value Summary <deployment> Id of the deployment(s) to delete. Table 69.27. Command arguments Value Summary -h, --help Show this help message and exit 69.7. software deployment list List software deployments. Usage: Table 69.28. 
Command arguments Value Summary -h, --help Show this help message and exit --server <server> Id of the server to fetch deployments for --long List more fields in output Table 69.29. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 69.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 69.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 69.8. software deployment metadata show Get deployment configuration metadata for the specified server. Usage: Table 69.33. Positional arguments Value Summary <server> Id of the server to fetch deployments for Table 69.34. Command arguments Value Summary -h, --help Show this help message and exit 69.9. software deployment output show Show a specific deployment output. Usage: Table 69.35. Positional arguments Value Summary <deployment> Id of deployment to show the output for <output-name> Name of an output to display Table 69.36. Command arguments Value Summary -h, --help Show this help message and exit --all Display all deployment outputs --long Show full deployment logs in output 69.10. software deployment show Show SoftwareDeployment Details. Usage: Table 69.37. Positional arguments Value Summary <deployment> Id of the deployment Table 69.38. Command arguments Value Summary -h, --help Show this help message and exit --long Show more fields in output Table 69.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 69.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 69.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 69.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
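To show how these subcommands fit together, the following sketch creates a script-based config, deploys it to a server, and inspects the result. The file name, config name, server ID, and deployment ID are placeholder values.

# Create a software config from a local script (group "script" is one common choice)
openstack software config create --group script \
  --definition-file ./configure_app.sh example-config

# Deploy the config to a server
openstack software deployment create --config <config-id> \
  --server <server-id> --action CREATE example-deployment

# Review deployments for the server and inspect a deployment's outputs
openstack software deployment list --server <server-id> --long
openstack software deployment output show <deployment-id> --all --long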
[ "openstack software config create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--config-file <config-file>] [--definition-file <destination-file>] [--group <group>] <config-name>", "openstack software config delete [-h] <config> [<config> ...]", "openstack software config list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--limit <limit>] [--marker <id>]", "openstack software config show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--config-only] <config>", "openstack software deployment create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--input-value <key=value>] [--action <action>] [--config <config>] [--signal-transport <signal-transport>] [--container <container>] [--timeout <timeout>] --server <server> <deployment-name>", "openstack software deployment delete [-h] <deployment> [<deployment> ...]", "openstack software deployment list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--server <server>] [--long]", "openstack software deployment metadata show [-h] <server>", "openstack software deployment output show [-h] [--all] [--long] <deployment> [<output-name>]", "openstack software deployment show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--long] <deployment>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/software
Chapter 6. Activation Keys
Chapter 6. Activation Keys Table 6.1. Activation Keys Subcommand Description and tasks activation-key org Create an activation key: Add a subscription to the activation key:
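A worked example might look like the following; the key name, content view, lifecycle environment, organization, and subscription ID are all placeholder values.

# Create the activation key
hammer activation-key create --name "ak-example" \
  --content-view "cv-example" --lifecycle-environment "Library" \
  --organization "ExampleOrg"

# Look up the subscription ID to attach
hammer subscription list --organization "ExampleOrg"

# Attach the subscription to the activation key
hammer activation-key add-subscription --name "ak-example" \
  --organization "ExampleOrg" --subscription-id 1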
[ "hammer activation-key create --name ak_name --content-view cv_n --lifecycle-environment lc_name", "hammer activation-key add-subscription --id ak_ID --subscription-id sub_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cheat_sheet/activation_keys
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Jira ticket: Log in to Jira . Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_z/providing-feedback-on-red-hat-documentation_ibmz
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub
Chapter 2. Installing the Red Hat Quay Operator from the OperatorHub Use the following procedure to install the Red Hat Quay Operator from the OpenShift Container Platform OperatorHub. Procedure Using the OpenShift Container Platform console, select Operators OperatorHub . In the search box, type Red Hat Quay and select the official Red Hat Quay Operator provided by Red Hat. This directs you to the Installation page, which outlines the features, prerequisites, and deployment information. Select Install . This directs you to the Operator Installation page. The following choices are available for customizing the installation: Update Channel: Choose the update channel, for example, stable-3.7 for the latest release. Installation Mode: Choose All namespaces on the cluster if you want the Red Hat Quay Operator to be available cluster-wide. Choose A specific namespace on the cluster if you want it deployed only within a single namespace. It is recommended that you install the Red Hat Quay Operator cluster-wide. If you choose a single namespace, the monitoring component will not be available by default. Approval Strategy: Choose to approve either automatic or manual updates. Automatic update strategy is recommended. Select Install .
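If you prefer to drive the same installation from the command line, the Operator can be subscribed through Operator Lifecycle Manager with a manifest along the following lines. The package name, channel, and catalog source shown here are assumptions; verify them against the Operator's entry in your cluster's OperatorHub before applying.

oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: openshift-operators        # target namespace for the cluster-wide installation mode
spec:
  channel: stable-3.7                   # update channel selected during installation
  name: quay-operator                   # package name in the catalog (assumed)
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic        # automatic approval strategy
EOF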
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-install
1.7. Issues with Live Migration of VMs in a RHEL cluster
1.7. Issues with Live Migration of VMs in a RHEL cluster Information on support policies for RHEL high availability clusters with virtualized cluster members can be found in Support Policies for RHEL High Availability Clusters - General Conditions with Virtualized Cluster Members . As noted, Red Hat does not support live migration of active cluster nodes across hypervisors or hosts. If you need to perform a live migration, you first need to stop the cluster services on the VM to remove the node from the cluster, and then start the cluster back up after performing the migration. The following steps outline the procedure for removing a VM from a cluster, migrating the VM, and restoring the VM to the cluster. Note Before performing this procedure, consider the effect on cluster quorum of removing a cluster node. For example, if you have a three-node cluster and you remove one node, your cluster can withstand only one more node failure. If one node of a three-node cluster is already down, removing a second node causes the cluster to lose quorum. If you need to prepare before stopping or moving the resources or software running on the VM that you want to migrate, perform those steps. Move any managed resources off the VM. If there are specific requirements or preferences for where resources should be relocated, consider creating new location constraints to place the resources on the correct node. Place the VM in standby mode to ensure it is not considered in service, and to cause any remaining resources to be relocated elsewhere or stopped. Run the following command on the VM to stop the cluster software. Perform the live migration of the VM. Start cluster services on the VM. Take the VM out of standby mode. If you created any temporary location constraints before putting the VM in standby mode, adjust or remove those constraints to allow resources to go back to their normally preferred locations.
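Put together, the sequence might look like the following sketch, where VM is the name of the cluster node being migrated. The virsh invocation is only one possible way to perform the live migration; substitute the migration command appropriate for your hypervisor and transport.

# From any cluster node: put the VM node in standby so its resources relocate
pcs cluster standby VM

# On the VM: stop the cluster software
pcs cluster stop

# On the source hypervisor: perform the live migration (libvirt example)
virsh migrate --live VM qemu+ssh://destination-host/system

# On the VM after the migration completes: rejoin the cluster
pcs cluster start
pcs cluster unstandby VM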
[ "pcs cluster standby VM", "pcs cluster stop", "pcs cluster start", "pcs cluster unstandby VM" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-migratinghavmshaar
Chapter 27. Performing advanced container image management
Chapter 27. Performing advanced container image management The default container image configuration suits most environments. In some situations, your container image configuration might require some customization, such as version pinning. 27.1. Pinning container images for the undercloud In certain circumstances, you might require a set of specific container image versions for your undercloud. In this situation, you must pin the images to a specific version. To pin your images, you must generate and modify a container configuration file, and then combine the undercloud roles data with the container configuration file to generate an environment file that contains a mapping of services to container images. Then include this environment file in the custom_env_files parameter in the undercloud.conf file. Procedure Log in to the undercloud host as the stack user. Run the openstack tripleo container image prepare default command with the --output-env-file option to generate a file that contains the default image configuration: Modify the undercloud-container-image-prepare.yaml file according to the requirements of your environment. Remove the tag: parameter so that director can use the tag_from_label: parameter. Director uses this parameter to identify the latest version of each container image, pull each image, and tag each image on the container registry in director. Remove the Ceph labels for the undercloud. Ensure that the neutron_driver: parameter is empty. Do not set this parameter to OVN because OVN is not supported on the undercloud. Include your container image registry credentials: Note You cannot push container images to the undercloud registry on new underclouds because the image-serve registry is not installed yet. You must set the push_destination value to false , or use a custom value, to pull images directly from source. For more information, see Container image preparation parameters . Generate a new container image configuration file that uses the undercloud roles file combined with your custom undercloud-container-image-prepare.yaml file: The undercloud-container-images.yaml file is an environment file that contains a mapping of service parameters to container images. For example, OpenStack Identity (keystone) uses the ContainerKeystoneImage parameter to define its container image: Note that the container image tag matches the {version}-{release} format. Include the undercloud-container-images.yaml file in the custom_env_files parameter in the undercloud.conf file. When you run the undercloud installation, the undercloud services use the pinned container image mapping from this file. 27.2. Pinning container images for the overcloud In certain circumstances, you might require a set of specific container image versions for your overcloud. In this situation, you must pin the images to a specific version. To pin your images, you must create the containers-prepare-parameter.yaml file, use this file to pull your container images to the undercloud registry, and generate an environment file that contains a pinned image list. For example, your containers-prepare-parameter.yaml file might contain the following content: The ContainerImagePrepare parameter contains a single rule set . This rule set must not include the tag parameter and must rely on the tag_from_label parameter to identify the latest version and release of each container image. 
Director uses this rule set to identify the latest version of each container image, pull each image, and tag each image on the container registry in director. Procedure Run the openstack tripleo container image prepare command, which pulls all images from the source defined in the containers-prepare-parameter.yaml file. Include the --output-env-file option to specify the output file that will contain the list of pinned container images: The overcloud-images.yaml file is an environment file that contains a mapping of service parameters to container images. For example, OpenStack Identity (keystone) uses the ContainerKeystoneImage parameter to define its container image: Note that the container image tag matches the {version}-{release} format. Include the containers-prepare-parameter.yaml and overcloud-images.yaml files in that specific order with your environment file collection when you run the openstack overcloud deploy command: The overcloud services use the pinned images listed in the overcloud-images.yaml file.
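After the prepare command completes, you can spot-check the result before deploying. The commands below are an illustrative sketch: the image name, tag, and registry host are examples taken from the mapping above, and the pull options may need adjusting for your registry configuration.

# Review the pinned service-to-image mappings that were written out
grep 'Image:' /home/stack/overcloud-images.yaml | sort

# Optionally confirm that one pinned image is pullable from the undercloud registry
sudo podman pull --tls-verify=false \
  undercloud.ctlplane.localdomain:8787/rhosp-rhel9/openstack-keystone:17.0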
[ "sudo openstack tripleo container image prepare default --output-env-file undercloud-container-image-prepare.yaml", "ContainerImageRegistryCredentials: registry.redhat.io: myser: 'p@55w0rd!'", "sudo openstack tripleo container image prepare -r /usr/share/openstack-tripleo-heat-templates/roles_data_undercloud.yaml -e undercloud-container-image-prepare.yaml --output-env-file undercloud-container-images.yaml", "ContainerKeystoneImage: undercloud.ctlplane.localdomain:8787/rhosp-rhel9/openstack-keystone:17.0", "parameter_defaults: ContainerImagePrepare: - push_destination: true set: name_prefix: openstack- name_suffix: '' namespace: registry.redhat.io/rhosp-rhel9 neutron_driver: ovn tag_from_label: '{version}-{release}' ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!'", "sudo openstack tripleo container image prepare -e /home/stack/templates/containers-prepare-parameter.yaml --output-env-file overcloud-images.yaml", "ContainerKeystoneImage: undercloud.ctlplane.localdomain:8787/rhosp-rhel9/openstack-keystone:17.0", "openstack overcloud deploy --templates -e /home/stack/containers-prepare-parameter.yaml -e /home/stack/overcloud-images.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_performing-advanced-container-image-management
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview
Chapter 1. Red Hat High Availability Add-On Configuration and Management Reference Overview This document provides descriptions of the options and features that the Red Hat High Availability Add-On using Pacemaker supports. For a step by step basic configuration example, see Red Hat High Availability Add-On Administration . You can configure a Red Hat High Availability Add-On cluster with the pcs configuration interface or with the pcsd GUI interface. 1.1. New and Changed Features This section lists features of the Red Hat High Availability Add-On that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. The pcs resource cleanup command can now reset the resource status and failcount for all resources, as documented in Section 6.11, "Cluster Resources Cleanup" . You can specify a lifetime parameter for the pcs resource move command, as documented in Section 8.1, "Manually Moving Resources Around the Cluster" . As of Red Hat Enterprise Linux 7.1, you can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). For information on ACLs, see Section 4.5, "Setting User Permissions" . Section 7.2.3, "Ordered Resource Sets" and Section 7.3, "Colocation of Resources" have been extensively updated and clarified. Section 6.1, "Resource Creation" documents the disabled parameter of the pcs resource create command, to indicate that the resource being created is not started automatically. Section 10.1, "Configuring Quorum Options" documents the new cluster quorum unblock feature, which prevents the cluster from waiting for all nodes when establishing quorum. Section 6.1, "Resource Creation" documents the before and after parameters of the pcs resource create command, which can be used to configure resource group ordering. As of the Red Hat Enterprise Linux 7.1 release, you can backup the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. For information on this feature, see Section 3.8, "Backing Up and Restoring a Cluster Configuration" . Small clarifications have been made throughout this document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. You can now use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node" . Section 13.2, "Event Notification with Monitoring Resources" has been modified and expanded to better document how to configure the ClusterMon resource to execute an external program to determine what to do with cluster notifications. When configuring fencing for redundant power supplies, you now are only required to define each device once and to specify that both devices are required to fence the node. For information on configuring fencing for redundant power supplies, see Section 5.10, "Configuring Fencing for Redundant Power Supplies" . This document now provides a procedure for adding a node to an existing cluster in Section 4.4.3, "Adding Cluster Nodes" . 
The new resource-discovery location constraint option allows you to indicate whether Pacemaker should perform resource discovery on a node for a specified resource, as documented in Table 7.1, "Simple Location Constraint Options" . Small clarifications and corrections have been made throughout this document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. Section 9.4, "The pacemaker_remote Service" , has been wholly rewritten for this version of the document. You can configure Pacemaker alerts by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. Pacemaker alert agents are described in Section 13.1, "Pacemaker Alert Agents (Red Hat Enterprise Linux 7.3 and later)" . New quorum administration commands are supported with this release, which allow you to display the quorum status and to change the expected_votes parameter. These commands are described in Section 10.2, "Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)" . You can now modify general quorum options for your cluster with the pcs quorum update command, as described in Section 10.3, "Modifying Quorum Options (Red Hat Enterprise Linux 7.3 and later)" . You can configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. This feature is provided for technical preview only. For information on quorum devices, see Section 10.5, "Quorum Devices" . Red Hat Enterprise Linux release 7.3 provides the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. This feature is provided for technical preview only. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker . When configuring a KVM guest node running the pacemaker_remote service, you can include guest nodes in groups, which allows you to group a storage device, file system, and VM. For information on configuring KVM guest nodes, see Section 9.4.5, "Configuration Overview: KVM Guest Node" . Additionally, small clarifications and corrections have been made throughout this document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. Red Hat Enterprise Linux release 7.4 provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. For information on the Booth cluster ticket manager, see Chapter 14, Configuring Multi-Site Clusters with Pacemaker . Red Hat Enterprise Linux 7.4 provides full support for the ability to configure a separate quorum device which acts as a third-party arbitration device for the cluster. The primary use of this feature is to allow a cluster to sustain more node failures than standard quorum rules allow. For information on quorum devices, see Section 10.5, "Quorum Devices" . You can now specify nodes in fencing topology by a regular expression applied on a node name and by a node attribute and its value. For information on configuring fencing levels, see Section 5.9, "Configuring Fencing Levels" . 
Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent, which can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. For information on this resource agent, see Section 9.6.5, "The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later)" . For Red Hat Enterprise Linux 7.4, the cluster node add-guest and the cluster node remove-guest commands replace the cluster remote-node add and cluster remote-node remove commands. The pcs cluster node add-guest command sets up the authkey for guest nodes and the pcs cluster node add-remote command sets up the authkey for remote nodes. For updated guest and remote node configuration procedures, see Section 9.3, "Configuring a Virtual Domain as a Resource" . Red Hat Enterprise Linux 7.4 supports the systemd resource-agents-deps target. This allows you to configure the appropriate startup order for a cluster that includes resources with dependencies that are not themselves managed by the cluster, as described in Section 9.7, "Configuring Startup Order for Resource Dependencies not Managed by Pacemaker (Red Hat Enterprise Linux 7.4 and later)" . The format for the command to create a resource as a master/slave clone has changed for this release. For information on creating a master/slave clone, see Section 9.2, "Multistate Resources: Resources That Have Multiple Modes" . 1.1.5. New and Changed Features for Red Hat Enterprise Linux 7.5 Red Hat Enterprise Linux 7.5 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. For information on querying a cluster with SNMP, see Section 9.8, "Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)" . 1.1.6. New and Changed Features for Red Hat Enterprise Linux 7.8 Red Hat Enterprise Linux 7.8 includes the following documentation and feature updates and changes. As of Red Hat Enterprise Linux 7.8, you can configure Pacemaker so that when a node shuts down cleanly, the resources attached to the node will be locked to the node and unable to start elsewhere until they start again when the node that has shut down rejoins the cluster. This allows you to power down nodes during maintenance windows when service outages are acceptable without causing that node's resources to fail over to other nodes in the cluster. For information on configuring resources to remain stopped on clean node shutdown, see Section 9.9, " Configuring Resources to Remain Stopped on Clean Node Shutdown (Red Hat Enterprise Linux 7.8 and later) " .
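As a brief, hedged illustration of several of the pcs features mentioned above (the disabled option of pcs resource create, the lifetime parameter of pcs resource move, pcs resource cleanup, and the backup and restore options of pcs config), the following shell sketch uses placeholder resource, node, and file names (dummy, z1.example.com, mycluster) that are not taken from this document; consult the referenced sections for the authoritative syntax.
# Create a resource but do not start it automatically (disabled option).
pcs resource create dummy ocf:heartbeat:Dummy --disabled
# Move a resource to another node for one hour only (lifetime parameter).
pcs resource move dummy z1.example.com lifetime=PT1H
# Reset the resource status and failcount for all resources.
pcs resource cleanup
# Back up the cluster configuration to a tarball and restore it later.
pcs config backup mycluster
pcs config restore mycluster.tar.bz2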
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-overview-HAAR
Part VII. Administration: Managing Network Services
Part VII. Administration: Managing Network Services This part discusses how to manage the Domain Name Service (DNS) integrated with Identity Management and how to manage, organize, and access directories across multiple systems using Automount .
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.administration-guide-network-services
Images
Images OpenShift Container Platform 4.9 Creating and managing images and imagestreams in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "registry.redhat.io", "docker.io/openshift/jenkins-2-centos7", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "docker.io/openshift/jenkins-2-centos7@sha256:ab312bda324", "apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed", "oc edit configs.samples.operator.openshift.io/cluster -o yaml", "apiVersion: samples.operator.openshift.io/v1 kind: Config", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>.", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "oc get is <imagestream> -n openshift -o json | jq 
.spec.tags[].from.name | grep registry.redhat.io", "oc image mirror registry.redhat.io/rhscl/ruby-25-rhel7:latest USD{MIRROR_ADDR}/rhscl/ruby-25-rhel7:latest", "oc create configmap registry-config --from-file=USD{MIRROR_ADDR_HOSTNAME}..5000=USDpath/ca.crt -n openshift-config", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-config\"}}}' --type=merge", "oc edit configs.samples.operator.openshift.io -n openshift-cluster-samples-operator", "RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y", "RUN yum -y install mypackage RUN yum -y install myotherpackage && yum clean all -y", "FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile", "FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y", "RUN chgrp -R 0 /some/directory && chmod -R g=u /some/directory", "LABEL io.openshift.tags mongodb,mongodb24,nosql", "LABEL io.openshift.wants mongodb,redis", "LABEL io.k8s.description The MySQL 5.5 Server with master-slave replication support", "LABEL io.openshift.non-scalable true", "LABEL io.openshift.min-memory 16Gi LABEL io.openshift.min-cpu 4", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "s2i create _<image name>_ _<destination directory>_", "IMAGE_NAME = openshift/ruby-20-centos7 CONTAINER_ENGINE := USD(shell command -v podman 2> /dev/null | echo docker) build: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME) . .PHONY: test test: USD{CONTAINER_ENGINE} build -t USD(IMAGE_NAME)-candidate . 
IMAGE_NAME=USD(IMAGE_NAME)-candidate test/run", "podman build -t <builder_image_name>", "docker build -t <builder_image_name>", "podman run <builder_image_name> .", "docker run <builder_image_name> .", "s2i build file:///path-to-sample-app _<BUILDER_IMAGE_NAME>_ _<OUTPUT_APPLICATION_IMAGE_NAME>_", "podman run <output_application_image_name>", "docker run <output_application_image_name>", "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/generated-by: OpenShiftNewApp labels: app: ruby-sample-build template: application-template-stibuild name: origin-ruby-sample 1 namespace: test spec: {} status: dockerImageRepository: 172.30.56.218:5000/test/origin-ruby-sample 2 tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d 3 generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 4 - 
created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest 5", "<image-stream-name>@<image-id>", "origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "tags: - items: - created: 2017-09-02T10:15:09Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d generation: 2 image: sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 - created: 2017-09-01T13:40:11Z dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:909de62d1f609a717ec433cc25ca5cf00941545c83a01fb31527771e1fab3fc5 generation: 1 image: sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d tag: latest", "<imagestream name>:<tag>", "origin-ruby-sample:latest", "apiVersion: image.openshift.io/v1 kind: ImageStreamMapping metadata: creationTimestamp: null name: origin-ruby-sample namespace: test tag: latest image: dockerImageLayers: - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ee1dd2cb6df21971f4af6de0f1d7782b81fb63156801cfde2bb47b4247c23c29 size: 196634330 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef size: 0 - name: sha256:ca062656bff07f18bff46be00f40cfbb069687ec124ac0aa038fd676cfaea092 size: 177723024 - name: sha256:63d529c59c92843c395befd065de516ee9ed4995549f8218eac6ff088bfa6b6e size: 55679776 - name: sha256:92114219a04977b5563d7dff71ec4caa3a37a15b266ce42ee8f43dba9798c966 size: 11939149 dockerImageMetadata: Architecture: amd64 Config: Cmd: - /usr/libexec/s2i/run Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Labels: build-date: 2015-12-23 io.k8s.description: Platform for building and running Ruby 2.2 applications io.k8s.display-name: 172.30.56.218:5000/test/origin-ruby-sample:latest io.openshift.build.commit.author: Ben Parees <[email protected]> io.openshift.build.commit.date: Wed Jan 20 10:14:27 2016 -0500 io.openshift.build.commit.id: 00cadc392d39d5ef9117cbc8a31db0889eedd442 io.openshift.build.commit.message: 'Merge pull request #51 from php-coder/fix_url_and_sti' io.openshift.build.commit.ref: master io.openshift.build.image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e io.openshift.build.source-location: https://github.com/openshift/ruby-hello-world.git io.openshift.builder-base-version: 8d95148 io.openshift.builder-version: 8847438ba06307f86ac877465eadc835201241df io.openshift.s2i.scripts-url: image:///usr/libexec/s2i io.openshift.tags: builder,ruby,ruby22 io.s2i.scripts-url: image:///usr/libexec/s2i license: GPLv2 name: CentOS Base Image vendor: CentOS User: \"1001\" WorkingDir: /opt/app-root/src Container: 86e9a4a3c760271671ab913616c51c9f3cea846ca524bf07c04a6f6c9e103a76 ContainerConfig: AttachStdout: true Cmd: - /bin/sh - -c - tar -C /tmp -xf - && /usr/libexec/s2i/assemble Entrypoint: - container-entrypoint Env: - RACK_ENV=production - OPENSHIFT_BUILD_NAME=ruby-sample-build-1 - OPENSHIFT_BUILD_NAMESPACE=test - OPENSHIFT_BUILD_SOURCE=https://github.com/openshift/ruby-hello-world.git - EXAMPLE=sample-app - PATH=/opt/app-root/src/bin:/opt/app-root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - STI_SCRIPTS_URL=image:///usr/libexec/s2i - STI_SCRIPTS_PATH=/usr/libexec/s2i - HOME=/opt/app-root/src - BASH_ENV=/opt/app-root/etc/scl_enable - ENV=/opt/app-root/etc/scl_enable - PROMPT_COMMAND=. 
/opt/app-root/etc/scl_enable - RUBY_VERSION=2.2 ExposedPorts: 8080/tcp: {} Hostname: ruby-sample-build-1-build Image: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e OpenStdin: true StdinOnce: true User: \"1001\" WorkingDir: /opt/app-root/src Created: 2016-01-29T13:40:00Z DockerVersion: 1.8.2.fc21 Id: 9d7fd5e2d15495802028c569d544329f4286dcd1c9c085ff5699218dbaa69b43 Parent: 57b08d979c86f4500dc8cad639c9518744c8dd39447c055a3517dc9c18d6fccd Size: 441976279 apiVersion: \"1.0\" kind: DockerImage dockerImageMetadataVersion: \"1.0\" dockerImageReference: 172.30.56.218:5000/test/origin-ruby-sample@sha256:47463d94eb5c049b2d23b03a9530bf944f8f967a0fe79147dd6b9135bf7dd13d", "oc describe is/<image-name>", "oc describe is/python", "Name: python Namespace: default Created: About a minute ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 1 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago", "oc describe istag/<image-stream>:<tag-name>", "oc describe istag/python:latest", "Image Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Docker Image: centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Name: sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 Created: 2 minutes ago Image Size: 251.2 MB (first layer 2.898 MB, last binary layer 72.26 MB) Image Created: 2 weeks ago Author: <none> Arch: amd64 Entrypoint: container-entrypoint Command: /bin/sh -c USDSTI_SCRIPTS_PATH/usage Working Dir: /opt/app-root/src User: 1001 Exposes Ports: 8080/tcp Docker Labels: build-date=20170801", "oc tag <image-name:tag1> <image-name:tag2>", "oc tag python:3.5 python:latest", "Tag python:latest set to python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25.", "oc describe is/python", "Name: python Namespace: default Created: 5 minutes ago Labels: <none> Annotations: openshift.io/image.dockerRepositoryCheck=2017-10-02T17:05:11Z Docker Pull Spec: docker-registry.default.svc:5000/default/python Image Lookup: local=false Unique Images: 1 Tags: 2 latest tagged from python@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 About a minute ago 3.5 tagged from centos/python-35-centos7 * centos/python-35-centos7@sha256:49c18358df82f4577386404991c51a9559f243e0b1bdc366df25 5 minutes ago", "oc tag <repository/image> <image-name:tag>", "oc tag docker.io/python:3.6.0 python:3.6", "Tag python:3.6 set to docker.io/python:3.6.0.", "oc tag <image-name:tag> <image-name:latest>", "oc tag python:3.6 python:latest", "Tag python:latest set to python@sha256:438208801c4806548460b27bd1fbcb7bb188273d13871ab43f.", "oc tag -d <image-name:tag>", "oc tag -d python:3.5", "Deleted tag default/python:3.5.", "oc tag <repository/image> <image-name:tag> --scheduled", "oc tag docker.io/python:3.6.0 python:3.6 --scheduled", "Tag python:3.6 set to import docker.io/python:3.6.0 periodically.", "oc tag <repositiory/image> <image-name:tag>", "oc create secret generic <secret_name> --from-file=.dockerconfigjson=<file_absolute_path> --type=kubernetes.io/dockerconfigjson", "oc import-image <imagestreamtag> --from=<image> --confirm", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email 
protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc set image-lookup mysql", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: annotations: openshift.io/display-name: mysql name: mysql namespace: myproject spec: lookupPolicy: local: true", "oc set image-lookup imagestream --list", "oc set image-lookup deploy/mysql", "apiVersion: apps/v1 kind: Deployment metadata: name: mysql namespace: myproject spec: replicas: 1 template: metadata: annotations: alpha.image.policy.openshift.io/resolve-names: '*' spec: containers: - image: mysql:latest imagePullPolicy: Always name: mysql", "oc set image-lookup deploy/mysql --enabled=false", "Key: image.openshift.io/triggers Value: [ { \"from\": { \"kind\": \"ImageStreamTag\", 1 \"name\": \"example:latest\", 2 \"namespace\": \"myapp\" 3 }, \"fieldPath\": \"spec.template.spec.containers[?(@.name==\\\"web\\\")].image\", 4 \"paused\": false 5 }, ]", "oc set triggers deploy/example --from-image=example:latest -c web", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: 2 - domainName: quay.io insecure: false additionalTrustedCA: 3 name: myconfigmap registrySources: 4 allowedRegistries: - example.com - quay.io - registry.redhat.io - image-registry.openshift-image-registry.svc:5000 - reg1.io/myrepo/myapp:latest insecureRegistries: - insecure.com status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-j5cd0qt-f76d1-vfj5x-master-0 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-1 Ready,SchedulingDisabled master 99m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-master-2 Ready master 98m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-b-nsnd4 Ready worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-c-5z2gz NotReady,SchedulingDisabled worker 90m v1.22.1 ci-ln-j5cd0qt-f76d1-vfj5x-worker-d-stsjv Ready worker 90m v1.22.1", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc 
spec: registrySources: 1 allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/policy.json", "{ \"default\":[ { \"type\":\"reject\" } ], \"transports\":{ \"atomic\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker\":{ \"example.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"image-registry.openshift-image-registry.svc:5000\":[ { \"type\":\"insecureAcceptAnything\" } ], \"insecure.com\":[ { \"type\":\"insecureAcceptAnything\" } ], \"quay.io\":[ { \"type\":\"insecureAcceptAnything\" } ], \"reg4.io/myrepo/myapp:latest\":[ { \"type\":\"insecureAcceptAnything\" } ], \"registry.redhat.io\":[ { \"type\":\"insecureAcceptAnything\" } ] }, \"docker-daemon\":{ \"\":[ { \"type\":\"insecureAcceptAnything\" } ] } } }", "spec: registrySources: insecureRegistries: - insecure.com allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com - image-registry.openshift-image-registry.svc:5000", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 blockedRegistries: 2 - untrusted.com - reg1.io/myrepo/myapp:latest status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"untrusted.com\" blocked = true", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: registrySources: 1 insecureRegistries: 2 - insecure.com - reg4.io/myrepo/myapp:latest allowedRegistries: - example.com - quay.io - registry.redhat.io - insecure.com 3 - reg4.io/myrepo/myapp:latest - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"insecure.com\" insecure = true", "oc edit image.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Image metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 1 name: cluster resourceVersion: \"8302\" selfLink: /apis/config.openshift.io/v1/images/cluster uid: e34555da-78a9-11e9-b92b-06d6c7da38dc spec: allowedRegistriesForImport: - domainName: quay.io insecure: false 
additionalTrustedCA: name: myconfigmap registrySources: containerRuntimeSearchRegistries: 1 - reg1.io - reg2.io - reg3.io allowedRegistries: 2 - example.com - quay.io - registry.redhat.io - reg1.io - reg2.io - reg3.io - image-registry.openshift-image-registry.svc:5000 status: internalRegistryHostname: image-registry.openshift-image-registry.svc:5000", "cat /host/etc/containers/registries.conf.d/01-image-searchRegistries.conf", "unqualified-search-registries = ['reg1.io', 'reg2.io', 'reg3.io']", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "skopeo copy docker://registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6 docker://example.io/example/ubi-minimal", "apiVersion: operator.openshift.io/v1alpha1 kind: ImageContentSourcePolicy metadata: name: ubi8repo spec: repositoryDigestMirrors: - mirrors: - example.io/example/ubi-minimal 1 - example.com/example/ubi-minimal 2 source: registry.access.redhat.com/ubi8/ubi-minimal 3 - mirrors: - mirror.example.com/redhat source: registry.redhat.io/openshift4 4 - mirrors: - mirror.example.com source: registry.redhat.io 5 - mirrors: - mirror.example.net/image source: registry.example.com/example/myimage 6 - mirrors: - mirror.example.net source: registry.example.com/example 7 - mirrors: - mirror.example.net/registry-example-com source: registry.example.com 8", "oc create -f registryrepomirror.yaml", "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-44.ec2.internal Ready worker 7m v1.24.0 ip-10-0-138-148.ec2.internal Ready master 11m v1.24.0 ip-10-0-139-122.ec2.internal Ready master 11m v1.24.0 ip-10-0-147-35.ec2.internal Ready worker 7m v1.24.0 ip-10-0-153-12.ec2.internal Ready worker 7m v1.24.0 ip-10-0-154-10.ec2.internal Ready master 11m v1.24.0", "oc debug node/ip-10-0-147-35.ec2.internal", "Starting pod/ip-10-0-147-35ec2internal-debug To use host binaries, run `chroot /host`", "sh-4.2# chroot /host", "sh-4.2# cat /etc/containers/registries.conf", "unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] short-name-mode = \"\" [[registry]] prefix = \"\" location = \"registry.access.redhat.com/ubi8/ubi-minimal\" mirror-by-digest-only = true [[registry.mirror]] location = \"example.io/example/ubi-minimal\" [[registry.mirror]] location = \"example.com/example/ubi-minimal\" [[registry]] prefix = \"\" location = \"registry.example.com\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/registry-example-com\" [[registry]] prefix = \"\" location = \"registry.example.com/example\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net\" [[registry]] prefix = \"\" location = \"registry.example.com/example/myimage\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.net/image\" [[registry]] prefix = \"\" location = \"registry.redhat.io\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com\" [[registry]] prefix = \"\" location = \"registry.redhat.io/openshift4\" mirror-by-digest-only = true [[registry.mirror]] location = \"mirror.example.com/redhat\"", "sh-4.2# podman 
pull --log-level=debug registry.access.redhat.com/ubi8/ubi-minimal@sha256:5cfbaf45ca96806917830c183e9f37df2e913b187adb32e89fd83fa455ebaa6", "oc create -f <filename>", "oc create -f <filename> -n <project>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby\" creationTimestamp: null spec: dockerImageRepository: \"registry.redhat.io/rhscl/ruby-26-rhel7\" tags: - name: \"2.6\" annotations: description: \"Build and run Ruby 2.6 applications\" iconClass: \"icon-ruby\" tags: \"builder,ruby\" 1 supports: \"ruby:2.6,ruby\" version: \"2.6\"", "oc process -f <filename> -l name=otherLabel", "oc process --parameters -f <filename>", "oc process --parameters -n <project> <template_name>", "oc process --parameters -n openshift rails-postgresql-example", "NAME DESCRIPTION GENERATOR VALUE SOURCE_REPOSITORY_URL The URL of the repository with your application source code https://github.com/sclorg/rails-ex.git SOURCE_REPOSITORY_REF Set this to a branch name, tag or other ref of your repository if you are not using the default branch CONTEXT_DIR Set this to the relative path to your project if it is not in the root of your repository APPLICATION_DOMAIN The exposed hostname that will route to the Rails service rails-postgresql-example.openshiftapps.com GITHUB_WEBHOOK_SECRET A secret string used to configure the GitHub webhook expression [a-zA-Z0-9]{40} SECRET_KEY_BASE Your secret key for verifying the integrity of signed cookies expression [a-z0-9]{127} APPLICATION_USER The application user that is used within the sample application to authorize access on pages openshift APPLICATION_PASSWORD The application password that is used within the sample application to authorize access on pages secret DATABASE_SERVICE_NAME Database service name postgresql POSTGRESQL_USER database username expression user[A-Z0-9]{3} POSTGRESQL_PASSWORD database password expression [a-zA-Z0-9]{8} POSTGRESQL_DATABASE database name root POSTGRESQL_MAX_CONNECTIONS database max connections 10 POSTGRESQL_SHARED_BUFFERS database shared buffers 12MB", "oc process -f <filename>", "oc process <template_name>", "oc process -f <filename> | oc create -f -", "oc process <template> | oc create -f -", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql -p POSTGRESQL_USER=bob -p POSTGRESQL_DATABASE=mydatabase | oc create -f -", "cat postgres.env POSTGRESQL_USER=bob POSTGRESQL_DATABASE=mydatabase", "oc process -f my-rails-postgresql --param-file=postgres.env", "sed s/bob/alice/ postgres.env | oc process -f my-rails-postgresql --param-file=-", "oc edit template <template>", "oc get templates -n openshift", "apiVersion: template.openshift.io/v1 kind: Template metadata: name: redis-template annotations: description: \"Description\" iconClass: \"icon-redis\" tags: \"database,nosql\" objects: - apiVersion: v1 kind: Pod metadata: name: redis-master spec: containers: - env: - name: REDIS_PASSWORD value: USD{REDIS_PASSWORD} image: dockerfile/redis name: master ports: - containerPort: 6379 protocol: TCP parameters: - description: Password used for Redis authentication from: '[A-Z0-9]{8}' generate: expression name: REDIS_PASSWORD labels: redis: master", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: cakephp-mysql-example 1 annotations: openshift.io/display-name: \"CakePHP MySQL Example (Ephemeral)\" 2 description: >- An example CakePHP application with a MySQL database. 
For more information about using this template, including OpenShift considerations, see https://github.com/sclorg/cakephp-ex/blob/master/README.md. WARNING: Any data stored will be lost upon pod destruction. Only use this template for testing.\" 3 openshift.io/long-description: >- This template defines resources needed to develop a CakePHP application, including a build configuration, application DeploymentConfig, and database DeploymentConfig. The database is stored in non-persistent storage, so this configuration should be used for experimental purposes only. 4 tags: \"quickstart,php,cakephp\" 5 iconClass: icon-php 6 openshift.io/provider-display-name: \"Red Hat, Inc.\" 7 openshift.io/documentation-url: \"https://github.com/sclorg/cakephp-ex\" 8 openshift.io/support-url: \"https://access.redhat.com\" 9 message: \"Your admin credentials are USD{ADMIN_USERNAME}:USD{ADMIN_PASSWORD}\" 10", "kind: \"Template\" apiVersion: \"v1\" labels: template: \"cakephp-mysql-example\" 1 app: \"USD{NAME}\" 2", "parameters: - name: USERNAME description: \"The user name for Joe\" value: joe", "parameters: - name: PASSWORD description: \"The random user password\" generate: expression from: \"[a-zA-Z0-9]{12}\"", "parameters: - name: singlequoted_example generate: expression from: '[\\A]{10}' - name: doublequoted_example generate: expression from: \"[\\\\A]{10}\"", "{ \"parameters\": [ { \"name\": \"json_example\", \"generate\": \"expression\", \"from\": \"[\\\\A]{10}\" } ] }", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: cakephp-mysql-example annotations: description: Defines how to build the application spec: source: type: Git git: uri: \"USD{SOURCE_REPOSITORY_URL}\" 1 ref: \"USD{SOURCE_REPOSITORY_REF}\" contextDir: \"USD{CONTEXT_DIR}\" - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: frontend spec: replicas: \"USD{{REPLICA_COUNT}}\" 2 parameters: - name: SOURCE_REPOSITORY_URL 3 displayName: Source Repository URL 4 description: The URL of the repository with your application source code 5 value: https://github.com/sclorg/cakephp-ex.git 6 required: true 7 - name: GITHUB_WEBHOOK_SECRET description: A secret string used to configure the GitHub webhook generate: expression 8 from: \"[a-zA-Z0-9]{40}\" 9 - name: REPLICA_COUNT description: Number of replicas to run value: \"2\" required: true message: \"... 
The GitHub webhook secret is USD{GITHUB_WEBHOOK_SECRET} ...\" 10", "kind: \"Template\" apiVersion: \"v1\" metadata: name: my-template objects: - kind: \"Service\" 1 apiVersion: \"v1\" metadata: name: \"cakephp-mysql-example\" annotations: description: \"Exposes and load balances the application pods\" spec: ports: - name: \"web\" port: 8080 targetPort: 8080 selector: name: \"cakephp-mysql-example\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: ConfigMap apiVersion: v1 metadata: name: my-template-config annotations: template.openshift.io/expose-username: \"{.data['my\\\\.username']}\" data: my.username: foo - kind: Secret apiVersion: v1 metadata: name: my-template-config-secret annotations: template.openshift.io/base64-expose-password: \"{.data['password']}\" stringData: password: bar - kind: Service apiVersion: v1 metadata: name: my-template-service annotations: template.openshift.io/expose-service_ip_port: \"{.spec.clusterIP}:{.spec.ports[?(.name==\\\"web\\\")].port}\" spec: ports: - name: \"web\" port: 8080 - kind: Route apiVersion: route.openshift.io/v1 metadata: name: my-template-route annotations: template.openshift.io/expose-uri: \"http://{.spec.host}{.spec.path}\" spec: path: mypath", "{ \"credentials\": { \"username\": \"foo\", \"password\": \"YmFy\", \"service_ip_port\": \"172.30.12.34:8080\", \"uri\": \"http://route-test.router.default.svc.cluster.local/mypath\" } }", "\"template.alpha.openshift.io/wait-for-ready\": \"true\"", "kind: Template apiVersion: template.openshift.io/v1 metadata: name: my-template objects: - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: annotations: # wait-for-ready used on BuildConfig ensures that template instantiation # will fail immediately if build fails template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: annotations: template.alpha.openshift.io/wait-for-ready: \"true\" spec: - kind: Service apiVersion: v1 metadata: name: spec:", "oc get -o yaml all > <yaml_filename>", "sudo yum install -y postgresql postgresql-server postgresql-devel", "sudo postgresql-setup initdb", "sudo systemctl start postgresql.service", "sudo -u postgres createuser -s rails", "gem install rails", "Successfully installed rails-4.3.0 1 gem installed", "rails new rails-app --database=postgresql", "cd rails-app", "gem 'pg'", "bundle install", "default: &default adapter: postgresql encoding: unicode pool: 5 host: localhost username: rails password:", "rake db:create", "rails generate controller welcome index", "root 'welcome#index'", "rails server", "<% user = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? \"root\" : ENV[\"POSTGRESQL_USER\"] %> <% password = ENV.key?(\"POSTGRESQL_ADMIN_PASSWORD\") ? 
ENV[\"POSTGRESQL_ADMIN_PASSWORD\"] : ENV[\"POSTGRESQL_PASSWORD\"] %> <% db_service = ENV.fetch(\"DATABASE_SERVICE_NAME\",\"\").upcase %> default: &default adapter: postgresql encoding: unicode # For details on connection pooling, see rails configuration guide # http://guides.rubyonrails.org/configuring.html#database-pooling pool: <%= ENV[\"POSTGRESQL_MAX_CONNECTIONS\"] || 5 %> username: <%= user %> password: <%= password %> host: <%= ENV[\"#{db_service}_SERVICE_HOST\"] %> port: <%= ENV[\"#{db_service}_SERVICE_PORT\"] %> database: <%= ENV[\"POSTGRESQL_DATABASE\"] %>", "ls -1", "app bin config config.ru db Gemfile Gemfile.lock lib log public Rakefile README.rdoc test tmp vendor", "git init", "git add .", "git commit -m \"initial commit\"", "git remote add origin [email protected]:<namespace/repository-name>.git", "git push", "oc new-project rails-app --description=\"My Rails application\" --display-name=\"Rails Application\"", "oc new-app postgresql -e POSTGRESQL_DATABASE=db_name -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password", "-e POSTGRESQL_ADMIN_PASSWORD=admin_pw", "oc get pods --watch", "oc new-app path/to/source/code --name=rails-app -e POSTGRESQL_USER=username -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=db_name -e DATABASE_SERVICE_NAME=postgresql", "oc get dc rails-app -o json", "env\": [ { \"name\": \"POSTGRESQL_USER\", \"value\": \"username\" }, { \"name\": \"POSTGRESQL_PASSWORD\", \"value\": \"password\" }, { \"name\": \"POSTGRESQL_DATABASE\", \"value\": \"db_name\" }, { \"name\": \"DATABASE_SERVICE_NAME\", \"value\": \"postgresql\" } ],", "oc logs -f build/rails-app-1", "oc get pods", "oc rsh <frontend_pod_id>", "RAILS_ENV=production bundle exec rake db:migrate", "oc expose service rails-app --hostname=www.example.com", "podman pull registry.redhat.io/openshift4/ose-jenkins:<v4.3.0>", "oc new-app -e JENKINS_PASSWORD=<password> openshift4/ose-jenkins", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> 
<resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "docker pull registry.redhat.io/openshift4/ose-jenkins:<v4.5.0>", "docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7:<v4.5.0>", "docker pull registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7:<v4.5.0>", "docker pull registry.redhat.io/openshift4/ose-jenkins-agent-maven:<v4.5.0>", "docker pull registry.redhat.io/openshift4/ose-jenkins-agent-base:<v4.5.0>", "podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }", "podman inspect --format='{{ index .Config.Labels \"io.openshift.s2i.scripts-url\" }}' wildfly/wildfly-centos7", "image:///usr/libexec/s2i", "#!/bin/bash echo \"Before assembling\" /usr/libexec/s2i/assemble rc=USD? if [ USDrc -eq 0 ]; then echo \"After successful assembling\" else echo \"After failed assembling\" fi exit USDrc", "#!/bin/bash echo \"Before running application\" exec /usr/libexec/s2i/run" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/images/index
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1]
Chapter 6. SubjectAccessReview [authorization.openshift.io/v1] Description SubjectAccessReview is an object for requesting information about whether a user or group can perform an action Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL user groups scopes 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update groups array (string) GroupsSlice is optional. Groups is the list of groups to which the User belongs. isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil for a self-SAR, means "use the scopes on this request". Nil for a regular SAR, means the same as empty. user string User is optional. If both User and Groups are empty, the current authenticated user is used. verb string Verb is one of: get, list, watch, create, update, delete 6.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/subjectaccessreviews POST : create a SubjectAccessReview 6.2.1. /apis/authorization.openshift.io/v1/subjectaccessreviews Table 6.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SubjectAccessReview Table 6.2. Body parameters Parameter Type Description body SubjectAccessReview schema Table 6.3. HTTP responses HTTP code Response body 200 - OK SubjectAccessReview schema 201 - Created SubjectAccessReview schema 202 - Accepted SubjectAccessReview schema 401 - Unauthorized Empty
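As a brief, non-authoritative illustration of the POST endpoint above, the following shell sketch submits a review for the current authenticated user (user and groups left empty, as described in the field table); the namespace and resource values are placeholders, and whether your oc client accepts this review type directly through oc create is an assumption worth verifying, since the documented interface is a plain POST to the endpoint listed above.
# Write a minimal SubjectAccessReview; empty user/groups means "evaluate for me".
cat <<'EOF' > sar.yaml
apiVersion: authorization.openshift.io/v1
kind: SubjectAccessReview
namespace: my-project
verb: get
resourceAPIGroup: ""
resourceAPIVersion: ""
resource: pods
resourceName: ""
path: ""
isNonResourceURL: false
user: ""
groups: []
scopes: []
EOF
# POST the review; the response reports whether the access would be allowed.
oc create -f sar.yaml -o yaml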
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/subjectaccessreview-authorization-openshift-io-v1
5.3. Directories within /proc/
5.3. Directories within /proc/ Common groups of information concerning the kernel are grouped into directories and subdirectories within the /proc/ directory. 5.3.1. Process Directories Every /proc/ directory contains a number of directories with numerical names. A listing of them may be similar to the following: These directories are called process directories , as they are named after a program's process ID and contain information specific to that process. The owner and group of each process directory is set to the user running the process. When the process is terminated, its /proc/ process directory vanishes. Each process directory contains the following files: cmdline - Contains the command issued when starting the process. cwd - A symbolic link to the current working directory for the process. environ - A list of the environment variables for the process. The environment variable is given in all upper-case characters, and the value is in lower-case characters. exe - A symbolic link to the executable of this process. fd - A directory containing all of the file descriptors for a particular process. These are given in numbered links: maps - A list of memory maps to the various executables and library files associated with this process. This file can be rather long, depending upon the complexity of the process, but sample output from the sshd process begins like the following: mem - The memory held by the process. This file cannot be read by the user. root - A link to the root directory of the process. stat - The status of the process. statm - The status of the memory in use by the process. Below is a sample /proc/statm file: The seven columns relate to different memory statistics for the process. From left to right, they report the following aspects of the memory used: Total program size, in kilobytes. Size of memory portions, in kilobytes. Number of pages that are shared. Number of pages that are code. Number of pages of data/stack. Number of library pages. Number of dirty pages. status - The status of the process in a more readable form than stat or statm . Sample output for sshd looks similar to the following: The information in this output includes the process name and ID, the state (such as S (sleeping) or R (running) ), user/group ID running the process, and detailed data regarding memory usage. 5.3.1.1. /proc/self/ The /proc/self/ directory is a link to the currently running process. This allows a process to look at itself without having to know its process ID. Within a shell environment, a listing of the /proc/self/ directory produces the same contents as listing the process directory for that process.
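The following shell sketch shows how the per-process files described above can be read for the current shell by way of /proc/self/; it assumes only standard utilities, and the tr calls are needed because cmdline and environ separate their entries with NUL bytes rather than newlines.
readlink /proc/self/cwd                        # current working directory of this shell
readlink /proc/self/exe                        # the executable behind the process
ls -l /proc/self/fd                            # open file descriptors, shown as numbered links
tr '\0' '\n' < /proc/self/environ              # environment variables, one per line
tr '\0' ' '  < /proc/self/cmdline; echo        # command line used to start the process
grep -E 'Name|State|VmRSS' /proc/self/status   # selected fields from the status file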
[ "dr-xr-xr-x 3 root root 0 Feb 13 01:28 1 dr-xr-xr-x 3 root root 0 Feb 13 01:28 1010 dr-xr-xr-x 3 xfs xfs 0 Feb 13 01:28 1087 dr-xr-xr-x 3 daemon daemon 0 Feb 13 01:28 1123 dr-xr-xr-x 3 root root 0 Feb 13 01:28 11307 dr-xr-xr-x 3 apache apache 0 Feb 13 01:28 13660 dr-xr-xr-x 3 rpc rpc 0 Feb 13 01:28 637 dr-xr-xr-x 3 rpcuser rpcuser 0 Feb 13 01:28 666", "total 0 lrwx------ 1 root root 64 May 8 11:31 0 -> /dev/null lrwx------ 1 root root 64 May 8 11:31 1 -> /dev/null lrwx------ 1 root root 64 May 8 11:31 2 -> /dev/null lrwx------ 1 root root 64 May 8 11:31 3 -> /dev/ptmx lrwx------ 1 root root 64 May 8 11:31 4 -> socket:[7774817] lrwx------ 1 root root 64 May 8 11:31 5 -> /dev/ptmx lrwx------ 1 root root 64 May 8 11:31 6 -> socket:[7774829] lrwx------ 1 root root 64 May 8 11:31 7 -> /dev/ptmx", "08048000-08086000 r-xp 00000000 03:03 391479 /usr/sbin/sshd 08086000-08088000 rw-p 0003e000 03:03 391479 /usr/sbin/sshd 08088000-08095000 rwxp 00000000 00:00 0 40000000-40013000 r-xp 00000000 03:03 293205 /lib/ld-2.2.5.so 40013000-40014000 rw-p 00013000 03:03 293205 /lib/ld-2.2.5.so 40031000-40038000 r-xp 00000000 03:03 293282 /lib/libpam.so.0.75 40038000-40039000 rw-p 00006000 03:03 293282 /lib/libpam.so.0.75 40039000-4003a000 rw-p 00000000 00:00 0 4003a000-4003c000 r-xp 00000000 03:03 293218 /lib/libdl-2.2.5.so 4003c000-4003d000 rw-p 00001000 03:03 293218 /lib/libdl-2.2.5.so", "263 210 210 5 0 205 0", "Name: sshd State: S (sleeping) Tgid: 797 Pid: 797 PPid: 1 TracerPid: 0 Uid: 0 0 0 0 Gid: 0 0 0 0 FDSize: 32 Groups: VmSize: 3072 kB VmLck: 0 kB VmRSS: 840 kB VmData: 104 kB VmStk: 12 kB VmExe: 300 kB VmLib: 2528 kB SigPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 8000000000001000 SigCgt: 0000000000014005 CapInh: 0000000000000000 CapPrm: 00000000fffffeff CapEff: 00000000fffffeff" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-proc-directories
Chapter 1. Overview
Chapter 1. Overview Version 4 of the Python software development kit is a collection of classes that allows you to interact with the Red Hat Virtualization Manager in Python-based projects. By downloading these classes and adding them to your project, you can access a range of functionality for high-level automation of administrative tasks. Note Version 3 of the SDK is no longer supported. For more information, consult the RHV 4.3 version of this guide . Python 3.7 and async In Python 3.7 and later versions, async is a reserved keyword. You cannot use the async parameter in methods of services that previously supported it, as in the following example, because async=True will cause an error: dc = dc_service.update( types.DataCenter( description='Updated description', ), async=True, ) The solution is to add an underscore to the parameter ( async_ ): dc = dc_service.update( types.DataCenter( description='Updated description', ), async_=True, ) Note This limitation applies only to Python 3.7 and later. Earlier versions of Python do not require this modification. 1.1. Prerequisites To install the Python software development kit, you must have: A system where Red Hat Enterprise Linux 8 is installed. Both the Server and Workstation variants are supported. A subscription to Red Hat Virtualization entitlements. Important The software development kit is an interface for the Red Hat Virtualization REST API. Use the version of the software development kit that corresponds to the version of your Red Hat Virtualization environment. For example, if you are using Red Hat Virtualization 4.3, use V4 Python software development kit. 1.2. Installing the Python Software Development Kit To install the Python software development kit: Enable the repositories that are appropriate for your hardware platform . For example, for x86-64 hardware, enable: # subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms # subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms # subscription-manager release --set=8.6 Install the required packages: # dnf install python3-ovirt-engine-sdk4 The Python software development kit is installed into the Python 3 site-packages directory, and the accompanying documentation and example are installed to /usr/share/doc/python3-ovirt-engine-sdk4 .
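As a small optional check that is not part of the official procedure, the following shell commands can confirm that the package and the Python module are present after installation; the package name matches the dnf command above, and a successful import is simply assumed on a correctly installed system.
rpm -q python3-ovirt-engine-sdk4                           # confirm the RPM is installed
python3 -c 'import ovirtsdk4; print(ovirtsdk4.__file__)'   # confirm the module can be imported
ls /usr/share/doc/python3-ovirt-engine-sdk4                # browse the bundled documentation and examples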
[ "dc = dc_service.update( types.DataCenter( description='Updated description', ), async=True, )", "dc = dc_service.update( types.DataCenter( description='Updated description', ), async_=True, )", "subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms subscription-manager release --set=8.6", "dnf install python3-ovirt-engine-sdk4" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/python_sdk_guide/chap-Overview
Appendix B. Revision History
Appendix B. Revision History Revision History Revision 10.13-59 Mon May 21 2018 Marek Suchanek Asynchronous update. Revision 10.14-00 Fri Apr 6 2018 Marek Suchanek Preparing document for 7.5 GA publication. Revision 10.13-58 Fri Mar 23 2018 Marek Suchanek New sections: pqos. Revision 10.13-57 Wed Feb 28 2018 Marek Suchanek Asynchronous update. Revision 10.13-50 Thu Jul 27 2017 Milan Navratil Document version for 7.4 GA publication. Revision 10.13-44 Tue Dec 13 2016 Milan Navratil Asynchronous update. Revision 10.08-38 Wed Nov 11 2015 Jana Heves Version for 7.2 GA release. Revision 0.3-23 Tue Feb 17 2015 Laura Bailey Building for RHEL 7.1 GA. Revision 0.3-3 Mon Apr 07 2014 Laura Bailey Rebuilding for RHEL 7.0 GA.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/appe-red_hat_enterprise_linux-performance_tuning_guide-revision_history
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_18.1.8_toolset/making-open-source-more-inclusive
Chapter 1. Goals of This Guide
Chapter 1. Goals of This Guide This guide explains how to download and install the product for testing in a non-production environment. (If you want to install on production systems, refer to the Red Hat JBoss Data Virtualization Installation Guide.)
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/getting_started_guide/goals_of_this_guide
Chapter 1. Supported upgrade paths
Chapter 1. Supported upgrade paths Currently, it is possible to perform an in-place upgrade from RHEL 8 to the following target RHEL 9 minor versions: System configuration Source OS version Target OS version SAP HANA RHEL 8.8 RHEL 9.2 RHEL 8.10 RHEL 9.4 SAP NetWeaver and other SAP Applications RHEL 8.8 RHEL 9.2 RHEL 8.10 RHEL 9.4 SAP HANA is validated by SAP for RHEL minor versions that receive package updates for more than 6 months. SAP NetWeaver is validated by SAP for each major RHEL version. The supported in-place upgrade paths for both are the same as described in the Upgrading from RHEL 8 to RHEL 9 document. The upgrade of systems hosting SAP HANA and other SAP applications is very similar. Certain deviations are described in Section 4. Upgrading an SAP NetWeaver system . For systems on which both SAP HANA and SAP NetWeaver are installed, the SAP HANA restrictions apply. Please refer to the Planning an upgrade to RHEL 9 chapter for more details. Note For cloud providers (AWS, Azure, and GCP), PAYG VMs on RHUI, and for both options, RHEL for SAP HA and US and RHEL for SAP Applications, there is a known bug in upgrading from 8.10 to 9.4 due to a RHUI client rpm name difference in 8.10 compared to previous releases. The upgrade is not possible at the moment, and there is no workaround. The upgrade from 8.8 to 9.2 is not impacted.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/upgrading_sap_environments_from_rhel_8_to_rhel_9/asmb_supported-upgrade-paths_how-to-in-place-upgrade-sap-environments-from-rhel8-to-rhel9
Data Security and Hardening Guide
Data Security and Hardening Guide Red Hat Ceph Storage 8 Red Hat Ceph Storage Data Security and Hardening Guide Red Hat Ceph Storage Documentation Team
[ "encrypted: true", "ceph orch daemon rotate-key NAME", "ceph orch daemon rotate-key mgr.ceph-key-host01 Scheduled to rotate-key mgr.ceph-key-host01 on host 'my-host-host01-installer'", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "DEFAULT_NGINX_IMAGE = 'quay.io/ceph/NGINX_IMAGE'", "ceph config set mgr mgr/cephadm/container_image_nginx NEW_NGINX_IMAGE ceph orch redeploy mgmt-gateway", "ceph orch apply mgmt-gateway [--placement= DESTINATION_HOST ] [--enable-auth=true]", "ceph orch apply mgmt-gateway --placement=host01", "touch mgmt-gateway.yaml", "service_type: mgmt-gateway placement: hosts: - ceph-node-1 spec: port: 9443 ssl_protocols: # Optional - TLSv1.3 ssl_ciphers: # Optional - AES128-SHA - AES256-SHA - RC4-SHA ssl_certificate: | # Optional -----BEGIN CERTIFICATE----- < YOU CERT DATA HERE > -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN RSA PRIVATE KEY----- < YOU PRIV KEY DATA HERE > -----END RSA PRIVATE KEY-----", "service_type: mgmt-gateway service_id: gateway placement: hosts: - ceph0 spec: port: 5000 ssl_protocols: - TLSv1.3 - ssl_ciphers: - AES128-SHA - AES256-SHA - ssl_certificate: | -----BEGIN CERTIFICATE----- MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3 DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T [...] -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4 /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h [...] -----END PRIVATE KEY-----", "ceph orch apply -i mgmt-gateway.yaml", "unzip rhsso-7.6.0.zip", "cd standalone/configuration vi standalone.xml", "./add-user-keycloak.sh -u admin", "keytool -import -noprompt -trustcacerts -alias ca -file ../ca.cer -keystore /etc/java/java-1.8.0-openjdk/java-1.8.0-openjdk-1.8.0.272.b10-3.el8_3.x86_64/lib/security/cacert", "./standalone.sh", "ceph config get mgr mgr/cephadm/container_image_oauth2_proxy", "ceph config set mgr mgr/cephadm/container_image_oauth2_proxy NEW_OAUTH2_PROXY_IMAGE ceph orch redeploy oauth2_proxy", "ceph orch apply oauth2-proxy [--placement= DESTINATION_HOST ]", "ceph orch apply oauth2-proxy [--placement=host01]", "touch oauth2-proxy.yaml", "service_type: oauth2-proxy service_id: auth-proxy placement: hosts: - ceph-node-1 spec: https_address: HTTPS_ADDRESS:PORT provider_display_name: MY OIDC PROVIDER client_id: CLIENT_ID oidc_issuer_url: OIDC ISSUER URL allowlist_domains: - HTTPS_ADDRESS:PORT client_secret: CLIENT_SECRET cookie_secret: COOKIE_SECRET ssl_certificate: | -----BEGIN CERTIFICATE----- < YOU CERT DATA HERE > -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN RSA PRIVATE KEY----- < YOU PRIV KEY DATA HERE > -----END RSA PRIVATE KEY-----", "service_type: oauth2-proxy service_id: auth-proxy placement: hosts: - ceph0 spec: https_address: \"0.0.0.0:4180\" provider_display_name: \"My OIDC Provider\" client_id: \"your-client-id\" oidc_issuer_url: \"http://192.168.100.1:5556/realms/ceph\" allowlist_domains: - 192.168.100.1:8080 - 192.168.200.1:5000 client_secret: \"your-client-secret\" cookie_secret: \"your-cookie-secret\" ssl_certificate: | -----BEGIN CERTIFICATE----- MIIDtTCCAp2gAwIBAgIYMC4xNzc1NDQxNjEzMzc2MjMyXzxvQ7EcMA0GCSqGSIb3 DQEBCwUAMG0xCzAJBgNVBAYTAlVTMQ0wCwYDVQQIDARVdGFoMRcwFQYDVQQHDA5T [...] -----END CERTIFICATE----- ssl_certificate_key: | -----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jdYbjtNTAKW4 /CwQr/7wOiLGzVxChn3mmCIF3DwbL/qvTFTX2d8bDf6LjGwLYloXHscRfxszX/4h [...] 
-----END PRIVATE KEY-----", "ceph orch apply -i oauth2-proxy.yaml", "public_network = <public-network/netmask>[,<public-network/netmask>] cluster_network = <cluster-network/netmask>[,<cluster-network/netmask>]", "systemctl enable firewalld systemctl start firewalld systemctl status firewalld", "firewall-cmd --list-all", "sources: services: ssh dhcpv6-client", "getenforce Enforcing", "setenforce 1", "firewall-cmd --zone=<zone-name> --add-rich-rule=\"rule family=\"ipv4\" source address=\"<ip-address>/<netmask>\" port protocol=\"tcp\" port=\"<port-number>\" accept\"", "firewall-cmd --zone=<zone-name> --add-rich-rule=\"rule family=\"ipv4\" source address=\"<ip-address>/<netmask>\" port protocol=\"tcp\" port=\"<port-number>\" accept\" --permanent", "cat /var/log/ceph/6c58dfb8-4342-11ee-a953-fa163e843234/ceph.audit.log", "2023-09-01T10:20:21.445990+0000 mon.host01 (mon.0) 122301 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"config generate-minimal-conf\"}]: dispatch 2023-09-01T10:20:21.446972+0000 mon.host01 (mon.0) 122302 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"auth get\", \"entity\": \"client.admin\"}]: dispatch 2023-09-01T10:20:21.453790+0000 mon.host01 (mon.0) 122303 : audit [INF] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' 2023-09-01T10:20:21.457119+0000 mon.host01 (mon.0) 122304 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"osd tree\", \"states\": [\"destroyed\"], \"format\": \"json\"}]: dispatch 2023-09-01T10:20:30.671816+0000 mon.host01 (mon.0) 122305 : audit [DBG] from='mgr.14189 10.0.210.22:0/1157748332' entity='mgr.host01.mcadea' cmd=[{\"prefix\": \"osd blocklist ls\", \"format\": \"json\"}]: dispatch" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/data_security_and_hardening_guide/index
6.5. Using verdict maps in nftables commands
6.5. Using verdict maps in nftables commands Verdict maps, which are also known as dictionaries, enable nft to perform an action based on packet information by mapping match criteria to an action. 6.5.1. Using anonymous maps in nftables An anonymous map is a { match_criteria : action } statement that you use directly in a rule. The statement can contain multiple comma-separated mappings. The drawback of an anonymous map is that if you want to change the map, you must replace the rule. For a dynamic solution, use named maps as described in Section 6.5.2, "Using named maps in nftables" . The example describes how to use an anonymous map to route both TCP and UDP packets of the IPv4 and IPv6 protocol to different chains to count incoming TCP and UDP packets separately. Procedure 6.15. Using anonymous maps in nftables Create the example_table : Create the tcp_packets chain in example_table : Add a rule to tcp_packets that counts the traffic in this chain: Create the udp_packets chain in example_table : Add a rule to udp_packets that counts the traffic in this chain: Create a chain for incoming traffic. For example, to create a chain named incoming_traffic in example_table that filters incoming traffic: Add a rule with an anonymous map to incoming_traffic : The anonymous map distinguishes the packets and sends them to the different counter chains based on their protocol. To list the traffic counters, display example_table : The counters in the tcp_packets and udp_packets chain display both the number of received packets and bytes. 6.5.2. Using named maps in nftables The nftables framework supports named maps. You can use these maps in multiple rules within a table. Another benefit over anonymous maps is that you can update a named map without replacing the rules that use it. When you create a named map, you must specify the type of elements: ipv4_addr for a map whose match part contains an IPv4 address, such as 192.0.2.1 . ipv6_addr for a map whose match part contains an IPv6 address, such as 2001:db8:1::1 . ether_addr for a map whose match part contains a media access control ( MAC ) address, such as 52:54:00:6b:66:42 . inet_proto for a map whose match part contains an Internet protocol type, such as tcp . inet_service for a map whose match part contains an Internet services name port number, such as ssh or 22 . mark for a map whose match part contains a packet mark. A packet mark can be any positive 32-bit integer value ( 0 to 2147483647 ). counter for a map whose match part contains a counter value. The counter value can be any positive 64-bit integer value. quota for a map whose match part contains a quota value. The quota value can be any positive 64-bit integer value. The example describes how to allow or drop incoming packets based on their source IP address. Using a named map, you require only a single rule to configure this scenario while the IP addresses and actions are dynamically stored in the map. The procedure also describes how to add and remove entries from the map. Procedure 6.16. Using named maps in nftables Create a table. For example, to create a table named example_table that processes IPv4 packets: Create a chain. For example, to create a chain named example_chain in example_table : Important To avoid that the shell interprets the semicolons as the end of the command, you must escape the semicolons with a backslash. Create an empty map. For example, to create a map for IPv4 addresses: Create rules that use the map. 
For example, the following command adds a rule to example_chain in example_table that applies actions to IPv4 addresses which are both defined in example_map : Add IPv4 addresses and corresponding actions to example_map : This example defines the mappings of IPv4 addresses to actions. In combination with the rule created above, the firewall accepts packet from 192.0.2.1 and drops packets from 192.0.2.2 . Optionally, enhance the map by adding another IP address and action statement: Optionally, remove an entry from the map: Optionally, display the rule set: 6.5.3. Related information For further details about verdict maps, see the Maps section in the nft(8) man page.
[ "nft add table inet example_table", "nft add chain inet example_table tcp_packets", "nft add rule inet example_table tcp_packets counter", "nft add chain inet example_table udp_packets", "nft add rule inet example_table udp_packets counter", "nft add chain inet example_table incoming_traffic { type filter hook input priority 0 \\; }", "nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }", "nft list table inet example_table table inet example_table { chain tcp_packets { counter packets 36379 bytes 2103816 } chain udp_packets { counter packets 10 bytes 1559 } chain incoming_traffic { type filter hook input priority filter; policy accept; ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets } } }", "nft add table ip example_table", "nft add chain ip example_table example_chain { type filter hook input priority 0 \\; }", "nft add map ip example_table example_map { type ipv4_addr : verdict \\; }", "nft add rule example_table example_chain ip saddr vmap @example_map", "nft add element ip example_table example_map { 192.0.2.1 : accept, 192.0.2.2 : drop }", "nft add element ip example_table example_map { 192.0.2.3 : accept }", "nft delete element ip example_table example_map { 192.0.2.1 }", "nft list ruleset table ip example_table { map example_map { type ipv4_addr : verdict elements = { 192.0.2.2 : drop, 192.0.2.3 : accept } } chain example_chain { type filter hook input priority filter; policy accept; ip saddr vmap @example_map } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-Using_verdict_maps_in_nftables_commands
Providing feedback on Red Hat Ceph Storage documentation
Providing feedback on Red Hat Ceph Storage documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so, create a Bugzilla ticket: Go to the Bugzilla website. In the Component drop-down, select Documentation . In the Sub-Component drop-down, select the appropriate sub-component. Select the appropriate version of the document. Fill in the Summary and Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Optional: Add an attachment, if any. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/release_notes/providing-feedback-on-red-hat-ceph-storage-documentation
Chapter 37. AWS Simple Workflow Component
Chapter 37. AWS Simple Workflow Component Available as of Camel version 2.13 The Simple Workflow component supports managing workflows from Amazon's Simple Workflow service. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon Simple Workflow. More information are available at Amazon Simple Workflow . 37.1. URI Format aws-swf://<workflow|activity>[?options] You can append query options to the URI in the following format, ?options=value&option2=value&... 37.2. URI Options The AWS Simple Workflow component supports 5 options, which are listed below. Name Description Default Type configuration (advanced) The AWS SWF default configuration SWFConfiguration accessKey (common) Amazon AWS Access Key. String secretKey (common) Amazon AWS Secret Key. String region (common) Amazon AWS Region. String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AWS Simple Workflow endpoint is configured using URI syntax: with the following path and query parameters: 37.2.1. Path Parameters (1 parameters): Name Description Default Type type Required Activity or workflow String 37.2.2. Query Parameters (30 parameters): Name Description Default Type amazonSWClient (common) To use the given AmazonSimpleWorkflowClient as client AmazonSimpleWorkflow Client dataConverter (common) An instance of com.amazonaws.services.simpleworkflow.flow.DataConverter to use for serializing/deserializing the data. DataConverter domainName (common) The workflow domain to use. String eventName (common) The workflow or activity event name to use. String region (common) Amazon AWS Region. String version (common) The workflow or activity event version to use. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern clientConfiguration Parameters (advanced) To configure the ClientConfiguration using the key/values from the Map. Map startWorkflowOptions Parameters (advanced) To configure the StartWorkflowOptions using the key/values from the Map. Map sWClientParameters (advanced) To configure the AmazonSimpleWorkflowClient using the key/values from the Map. Map synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean activityList (activity) The list name to consume activities from. String activitySchedulingOptions (activity) Activity scheduling options ActivityScheduling Options activityThreadPoolSize (activity) Maximum number of threads in work pool for activity. 
100 int activityTypeExecution Options (activity) Activity execution options ActivityTypeExecution Options activityTypeRegistration Options (activity) Activity registration options ActivityType RegistrationOptions childPolicy (workflow) The policy to use on child workflows when terminating a workflow. String executionStartToClose Timeout (workflow) Set the execution start to close timeout. 3600 String operation (workflow) Workflow operation START String signalName (workflow) The name of the signal to send to the workflow. String stateResultType (workflow) The type of the result when a workflow state is queried. String taskStartToCloseTimeout (workflow) Set the task start to close timeout. 600 String terminationDetails (workflow) Details for terminating a workflow. String terminationReason (workflow) The reason for terminating a workflow. String workflowList (workflow) The list name to consume workflows from. String workflowTypeRegistration Options (workflow) Workflow registration options WorkflowType RegistrationOptions accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 37.3. Spring Boot Auto-Configuration The component supports 32 options, which are listed below. Name Description Default Type camel.component.aws-swf.access-key Amazon AWS Access Key. String camel.component.aws-swf.configuration.access-key Amazon AWS Access Key. String camel.component.aws-swf.configuration.activity-list The list name to consume activities from. String camel.component.aws-swf.configuration.activity-scheduling-options Activity scheduling options ActivityScheduling Options camel.component.aws-swf.configuration.activity-thread-pool-size Maximum number of threads in work pool for activity. 100 Integer camel.component.aws-swf.configuration.activity-type-execution-options Activity execution options ActivityTypeExecution Options camel.component.aws-swf.configuration.activity-type-registration-options Activity registration options ActivityType RegistrationOptions camel.component.aws-swf.configuration.amazon-s-w-client To use the given AmazonSimpleWorkflowClient as client AmazonSimpleWorkflow Client camel.component.aws-swf.configuration.child-policy The policy to use on child workflows when terminating a workflow. String camel.component.aws-swf.configuration.client-configuration-parameters To configure the ClientConfiguration using the key/values from the Map. Map camel.component.aws-swf.configuration.data-converter An instance of com.amazonaws.services.simpleworkflow.flow.DataConverter to use for serializing/deserializing the data. DataConverter camel.component.aws-swf.configuration.domain-name The workflow domain to use. String camel.component.aws-swf.configuration.event-name The workflow or activity event name to use. String camel.component.aws-swf.configuration.execution-start-to-close-timeout Set the execution start to close timeout. 3600 String camel.component.aws-swf.configuration.operation Workflow operation START String camel.component.aws-swf.configuration.region Amazon AWS Region. String camel.component.aws-swf.configuration.s-w-client-parameters To configure the AmazonSimpleWorkflowClient using the key/values from the Map. Map camel.component.aws-swf.configuration.secret-key Amazon AWS Secret Key. String camel.component.aws-swf.configuration.signal-name The name of the signal to send to the workflow. String camel.component.aws-swf.configuration.start-workflow-options-parameters To configure the StartWorkflowOptions using the key/values from the Map. 
Map camel.component.aws-swf.configuration.state-result-type The type of the result when a workflow state is queried. String camel.component.aws-swf.configuration.task-start-to-close-timeout Set the task start to close timeout. 600 String camel.component.aws-swf.configuration.termination-details Details for terminating a workflow. String camel.component.aws-swf.configuration.termination-reason The reason for terminating a workflow. String camel.component.aws-swf.configuration.type Activity or workflow String camel.component.aws-swf.configuration.version The workflow or activity event version to use. String camel.component.aws-swf.configuration.workflow-list The list name to consume workflows from. String camel.component.aws-swf.configuration.workflow-type-registration-options Workflow registration options WorkflowType RegistrationOptions camel.component.aws-swf.enabled Enable aws-swf component true Boolean camel.component.aws-swf.region Amazon AWS Region. String camel.component.aws-swf.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.aws-swf.secret-key Amazon AWS Secret Key. String Required SWF component options You have to provide the amazonSWClient in the Registry or your accessKey and secretKey to access the Amazon's Simple Workflow Service . 37.4. Usage 37.4.1. Message headers evaluated by the SWF Workflow Producer A workflow producer allows interacting with a workflow. It can start a new workflow execution, query its state, send signals to a running workflow, or terminate and cancel it. Header Type Description CamelSWFOperation String The operation to perform on the workflow. Supported operations are: SIGNAL, CANCEL, TERMINATE, GET_STATE, START, DESCRIBE, GET_HISTORY. CamelSWFWorkflowId String A workflow ID to use. CamelAwsDdbKeyCamelSWFRunId String A worfklow run ID to use. CamelSWFStateResultType String The type of the result when a workflow state is queried. CamelSWFEventName String The workflow or activity event name to use. CamelSWFVersion String The workflow or activity event version to use. CamelSWFReason String The reason for terminating a workflow. CamelSWFDetails String Details for terminating a workflow. CamelSWFChildPolicy String The policy to use on child workflows when terminating a workflow. 37.4.2. Message headers set by the SWF Workflow Producer Header Type Description CamelSWFWorkflowId String The worfklow ID used or newly generated. CamelAwsDdbKeyCamelSWFRunId String The worfklow run ID used or generated. 37.4.3. Message headers set by the SWF Workflow Consumer A workflow consumer represents the workflow logic. When it is started, it will start polling workflow decision tasks and process them. In addition to processing decision tasks, a workflow consumer route, will also receive signals (send from a workflow producer) or state queries. The primary purpose of a workflow consumer is to schedule activity tasks for execution using activity producers. Actually activity tasks can be scheduled only from a thread started by a workflow consumer. Header Type Description CamelSWFAction String Indicates what type is the current event: CamelSWFActionExecute, CamelSWFSignalReceivedAction or CamelSWFGetStateAction. CamelSWFWorkflowReplaying boolean Indicates whether the current decision task is a replay or not. CamelSWFWorkflowStartTime long The time of the start event for this decision task. 37.4.4. 
Message headers set by the SWF Activity Producer An activity producer allows scheduling activity tasks. An activity producer can be used only from a thread started by a workflow consumer, i.e., it can process synchronous exchanges started by a workflow consumer. Header Type Description CamelSWFEventName String The activity name to schedule. CamelSWFVersion String The activity version to schedule. 37.4.5. Message headers set by the SWF Activity Consumer Header Type Description CamelSWFTaskToken String The task token that is required to report task completion for manually completed tasks. 37.4.6. Advanced amazonSWClient configuration If you need more control over the AmazonSimpleWorkflowClient instance configuration, you can create your own instance and refer to it from the URI: The #client refers to an AmazonSimpleWorkflowClient in the Registry. For example, if your Camel application is running behind a firewall: AWSCredentials awsCredentials = new BasicAWSCredentials("myAccessKey", "mySecretKey"); ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost("http://myProxyHost"); clientConfiguration.setProxyPort(8080); AmazonSimpleWorkflowClient client = new AmazonSimpleWorkflowClient(awsCredentials, clientConfiguration); registry.bind("client", client); 37.5. Dependencies Maven users will need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>${camel-version}</version> </dependency> where ${camel-version} must be replaced by the actual version of Camel (2.13 or higher). 37.6. See Also Configuring Camel Component Endpoint Getting Started AWS Component
[ "aws-swf://<workflow|activity>[?options]", "aws-swf:type", "AWSCredentials awsCredentials = new BasicAWSCredentials(\"myAccessKey\", \"mySecretKey\"); ClientConfiguration clientConfiguration = new ClientConfiguration(); clientConfiguration.setProxyHost(\"http://myProxyHost\"); clientConfiguration.setProxyPort(8080); AmazonSimpleWorkflowClient client = new AmazonSimpleWorkflowClient(awsCredentials, clientConfiguration); registry.bind(\"client\", client);", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws</artifactId> <version>USD{camel-version}</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/aws-swf-component
Chapter 214. Log Component
Chapter 214. Log Component Available as of Camel version 1.1 The log: component logs message exchanges to the underlying logging mechanism. Camel uses sfl4j which allows you to configure logging via, among others: Log4j Logback Java Util Logging 214.1. URI format Where loggingCategory is the name of the logging category to use. You can append query options to the URI in the following format, ?option=value&option=value&... INFO:*Using Logger instance from the Registry* As of Camel 2.12.4/2.13.1 , if there's single instance of org.slf4j.Logger found in the Registry, the loggingCategory is no longer used to create logger instance. The registered instance is used instead. Also it is possible to reference particular Logger instance using ?logger=#myLogger URI parameter. Eventually, if there's no registered and URI logger parameter, the logger instance is created using loggingCategory . For example, a log endpoint typically specifies the logging level using the level option, as follows: The default logger logs every exchange ( regular logging ). But Camel also ships with the Throughput logger, which is used whenever the groupSize option is specified. TIP:*Also a log in the DSL* There is also a log directly in the DSL, but it has a different purpose. Its meant for lightweight and human logs. See more details at LogEIP. 214.2. Options The Log component supports 2 options, which are listed below. Name Description Default Type exchangeFormatter (advanced) Sets a custom ExchangeFormatter to convert the Exchange to a String suitable for logging. If not specified, we default to DefaultExchangeFormatter. ExchangeFormatter resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Log endpoint is configured using URI syntax: with the following path and query parameters: 214.2.1. Path Parameters (1 parameters): Name Description Default Type loggerName Required The logger name to use String 214.2.2. Query Parameters (26 parameters): Name Description Default Type groupActiveOnly (producer) If true, will hide stats when no new messages have been received for a time interval, if false, show stats regardless of message traffic. true Boolean groupDelay (producer) Set the initial delay for stats (in millis) Long groupInterval (producer) If specified will group message stats by this time interval (in millis) Long groupSize (producer) An integer that specifies a group size for throughput logging. Integer level (producer) Logging level to use. The default value is INFO. INFO String logMask (producer) If true, mask sensitive information like password or passphrase in the log. Boolean marker (producer) An optional Marker name to use. String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean maxChars (formatting) Limits the number of characters logged per line. 10000 int multiline (formatting) If enabled then each information is outputted on a newline. false boolean showAll (formatting) Quick option for turning all options on. (multiline, maxChars has to be manually set if to be used) false boolean showBody (formatting) Show the message body. true boolean showBodyType (formatting) Show the body Java type. true boolean showCaughtException (formatting) f the exchange has a caught exception, show the exception message (no stack trace). 
A caught exception is stored as a property on the exchange (using the key org.apache.camel.Exchange#EXCEPTION_CAUGHT and for instance a doCatch can catch exceptions. false boolean showException (formatting) If the exchange has an exception, show the exception message (no stacktrace) false boolean showExchangeId (formatting) Show the unique exchange ID. false boolean showExchangePattern (formatting) Shows the Message Exchange Pattern (or MEP for short). true boolean showFiles (formatting) If enabled Camel will output files false boolean showFuture (formatting) If enabled Camel will on Future objects wait for it to complete to obtain the payload to be logged. false boolean showHeaders (formatting) Show the message headers. false boolean showOut (formatting) If the exchange has an out message, show the out message. false boolean showProperties (formatting) Show the exchange properties. false boolean showStackTrace (formatting) Show the stack trace, if an exchange has an exception. Only effective if one of showAll, showException or showCaughtException are enabled. false boolean showStreams (formatting) Whether Camel should show stream bodies or not (eg such as java.io.InputStream). Beware if you enable this option then you may not be able later to access the message body as the stream have already been read by this logger. To remedy this you will have to use Stream Caching. false boolean skipBodyLineSeparator (formatting) Whether to skip line separators when logging the message body. This allows to log the message body in one line, setting this option to false will preserve any line separators from the body, which then will log the body as is. true boolean style (formatting) Sets the outputs style to use. Default OutputStyle 214.3. Regular logger sample In the route below we log the incoming orders at DEBUG level before the order is processed: from("activemq:orders").to("log:com.mycompany.order?level=DEBUG").to("bean:processOrder"); Or using Spring XML to define the route: <route> <from uri="activemq:orders"/> <to uri="log:com.mycompany.order?level=DEBUG"/> <to uri="bean:processOrder"/> </route> 214.4. Regular logger with formatter sample In the route below we log the incoming orders at INFO level before the order is processed. from("activemq:orders"). to("log:com.mycompany.order?showAll=true&multiline=true").to("bean:processOrder"); 214.5. Throughput logger with groupSize sample In the route below we log the throughput of the incoming orders at DEBUG level grouped by 10 messages. from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupSize=10").to("bean:processOrder"); 214.6. Throughput logger with groupInterval sample This route will result in message stats logged every 10s, with an initial 60s delay and stats should be displayed even if there isn't any message traffic. from("activemq:orders"). to("log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false").to("bean:processOrder"); The following will be logged: 214.7. Masking sensitive information like password Available as of Camel 2.19 You can enable security masking for logging by setting logMask flag to true . Note that this option also affects Log EIP. To enable mask in Java DSL at CamelContext level: camelContext.setLogMask(true); And in XML: <camelContext logMask="true"> You can also turn it on|off at endpoint level. 
To enable mask in Java DSL at endpoint level, add logMask=true option in the URI for the log endpoint: from("direct:start").to("log:foo?logMask=true"); And in XML: <route> <from uri="direct:foo"/> <to uri="log:foo?logMask=true"/> </route> org.apache.camel.processor.DefaultMaskingFormatter is used for the masking by default. If you want to use a custom masking formatter, put it into registry with the name CamelCustomLogMask . Note that the masking formatter must implement org.apache.camel.spi.MaskingFormatter . 214.8. Full customization of the logging output Available as of Camel 2.11 With the options outlined in the #Formatting section, you can control much of the output of the logger. However, log lines will always follow this structure: This format is unsuitable in some cases, perhaps because you need to... ... filter the headers and properties that are printed, to strike a balance between insight and verbosity. ... adjust the log message to whatever you deem most readable. ... tailor log messages for digestion by log mining systems, e.g. Splunk. ... print specific body types differently. ... etc. Whenever you require absolute customization, you can create a class that implements the ExchangeFormatter interface. Within the format(Exchange) method you have access to the full Exchange, so you can select and extract the precise information you need, format it in a custom manner and return it. The return value will become the final log message. You can have the Log component pick up your custom ExchangeFormatter in either of two ways: Explicitly instantiating the LogComponent in your Registry: <bean name="log" class="org.apache.camel.component.log.LogComponent"> <property name="exchangeFormatter" ref="myCustomFormatter" /> </bean> 214.8.1. Convention over configuration:* Simply by registering a bean with the name logFormatter ; the Log Component is intelligent enough to pick it up automatically. <bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" /> Note the ExchangeFormatter gets applied to all Log endpoints within that Camel Context . If you need different ExchangeFormatters for different endpoints, just instantiate the LogComponent as many times as needed, and use the relevant bean name as the endpoint prefix. From Camel 2.11.2/2.12 onwards when using a custom log formatter, you can specify parameters in the log uri, which gets configured on the custom log formatter. Though when you do that you should define the "logFormatter" as prototype scoped so its not shared if you have different parameters, eg: <bean name="logFormatter" class="com.xyz.MyCustomExchangeFormatter" scope="prototype"/> And then we can have Camel routes using the log uri with different options: <to uri="log:foo?param1=foo&amp;param2=100"/> <to uri="log:bar?param1=bar&amp;param2=200"/> 214.9. Using Log component in OSGi Improvement as of Camel 2.12.4/2.13.1 When using Log component inside OSGi (e.g., in Karaf), the underlying logging mechanisms are provided by PAX logging. It searches for a bundle which invokes org.slf4j.LoggerFactory.getLogger() method and associates the bundle with the logger instance. Without specifying custom org.sfl4j.Logger instance, the logger created by Log component is associated with camel-core bundle. In some scenarios it is required that the bundle associated with logger should be the bundle which contains route definition. To do this, either register single instance of org.slf4j.Logger in the Registry or reference it using logger URI parameter. 214.10. 
See Also LogEIP for using log directly in the DSL for human logs.
[ "log:loggingCategory[?options]", "log:org.apache.camel.example?level=DEBUG", "log:loggerName", "from(\"activemq:orders\").to(\"log:com.mycompany.order?level=DEBUG\").to(\"bean:processOrder\");", "<route> <from uri=\"activemq:orders\"/> <to uri=\"log:com.mycompany.order?level=DEBUG\"/> <to uri=\"bean:processOrder\"/> </route>", "from(\"activemq:orders\"). to(\"log:com.mycompany.order?showAll=true&multiline=true\").to(\"bean:processOrder\");", "from(\"activemq:orders\"). to(\"log:com.mycompany.order?level=DEBUG&groupSize=10\").to(\"bean:processOrder\");", "from(\"activemq:orders\"). to(\"log:com.mycompany.order?level=DEBUG&groupInterval=10000&groupDelay=60000&groupActiveOnly=false\").to(\"bean:processOrder\");", "\"Received: 1000 new messages, with total 2000 so far. Last group took: 10000 millis which is: 100 messages per second. average: 100\"", "camelContext.setLogMask(true);", "<camelContext logMask=\"true\">", "from(\"direct:start\").to(\"log:foo?logMask=true\");", "<route> <from uri=\"direct:foo\"/> <to uri=\"log:foo?logMask=true\"/> </route>", "Exchange[Id:ID-machine-local-50656-1234567901234-1-2, ExchangePattern:InOut, Properties:{CamelToEndpoint=log://org.apache.camel.component.log.TEST?showAll=true, CamelCreatedTimestamp=Thu Mar 28 00:00:00 WET 2013}, Headers:{breadcrumbId=ID-machine-local-50656-1234567901234-1-1}, BodyType:String, Body:Hello World, Out: null]", "<bean name=\"log\" class=\"org.apache.camel.component.log.LogComponent\"> <property name=\"exchangeFormatter\" ref=\"myCustomFormatter\" /> </bean>", "<bean name=\"logFormatter\" class=\"com.xyz.MyCustomExchangeFormatter\" />", "<bean name=\"logFormatter\" class=\"com.xyz.MyCustomExchangeFormatter\" scope=\"prototype\"/>", "<to uri=\"log:foo?param1=foo&amp;param2=100\"/> <to uri=\"log:bar?param1=bar&amp;param2=200\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/log-component
18.5. LevelDB Cache Store
18.5. LevelDB Cache Store LevelDB is a key-value storage engine that provides an ordered mapping from string keys to string values. The LevelDB Cache Store uses two filesystem directories. Each directory is configured for a LevelDB database. One directory stores the non-expired data and the second directory stores the keys pending to be purged permanently. 18.5.1. Configuring LevelDB Cache Store (Remote Client-Server Mode) Procedure 18.1. To configure LevelDB Cache Store: Add the following elements to a cache definition in standalone.xml to configure the database: Note Directories will be automatically created if they do not exist. For details about the elements and parameters used in this sample configuration, see Section 18.3, "Cache Store Configuration Details (Remote Client-Server Mode)" . 18.5.2. LevelDB Cache Store Programmatic Configuration The following is a sample programmatic configuration of LevelDB Cache Store: Procedure 18.2. LevelDB Cache Store programmatic configuration Use the ConfigurationBuilder to create a new configuration object. Add the store using the LevelDBStoreConfigurationBuilder class to build its configuration. Set the LevelDB Cache Store location path. The specified path stores the primary cache store data. The directory is automatically created if it does not exist. Specify the location for expired data using the expiredLocation parameter for the LevelDB Store. The specified path stores expired data before it is purged. The directory is automatically created if it does not exist. Note Programmatic configurations can only be used with Red Hat JBoss Data Grid Library mode. 18.5.3. LevelDB Cache Store Sample XML Configuration (Library Mode) The following is a sample XML configuration of LevelDB Cache Store: For details about the elements and parameters used in this sample configuration, see Section 18.2, "Cache Store Configuration Details (Library Mode)" . 18.5.4. Configure a LevelDB Cache Store Using JBoss Operations Network Use the following procedure to set up a new LevelDB cache store using the JBoss Operations Network. Procedure 18.3. Ensure that Red Hat JBoss Operations Network 3.2 or higher is installed and started. Install the Red Hat JBoss Data Grid Plugin Pack for JBoss Operations Network 3.2.0. Ensure that JBoss Data Grid is installed and started. Import JBoss Data Grid server into the inventory. Configure the JBoss Data Grid connection settings. Create a new LevelDB cache store as follows: Figure 18.1. Create a new LevelDB Cache Store Right-click the default cache. In the menu, mouse over the Create Child option. In the submenu, click LevelDB Store . Name the new LevelDB cache store as follows: Figure 18.2. Name the new LevelDB Cache Store In the Resource Create Wizard that appears, add a name for the new LevelDB Cache Store. Click to continue. Configure the LevelDB Cache Store settings as follows: Figure 18.3. Configure the LevelDB Cache Store Settings Use the options in the configuration window to configure a new LevelDB cache store. Click Finish to complete the configuration. Schedule a restart operation as follows: Figure 18.4. Schedule a Restart Operation In the screen's left panel, expand the JBossAS7 Standalone Servers entry, if it is not currently expanded. Click JDG (0.0.0.0:9990) from the expanded menu items. In the screen's right panel, details about the selected server display. Click the Operations tab. In the Operation drop-down box, select the Restart operation.
Select the radio button for the Now entry. Click Schedule to restart the server immediately. Discover the new LevelDB cache store as follows: Figure 18.5. Discover the New LevelDB Cache Store In the screen's left panel, select each of the following items in the specified order to expand them: JBossAS7 Standalone Servers JDG (0.0.0.0:9990) infinispan Cache Containers local Caches default LevelDB Stores Click the name of your new LevelDB Cache Store to view its configuration information in the right panel.
[ "<leveldb-store path=\"/path/to/leveldb/data\" passivation=\"false\" purge=\"false\" > <expiration path=\"/path/to/leveldb/expires/data\" /> <implementation type=\"JNI\" /> </leveldb-store>", "Configuration cacheConfig = new ConfigurationBuilder().persistence() .addStore(LevelDBStoreConfigurationBuilder.class) .location(\"/tmp/leveldb/data\") .expiredLocation(\"/tmp/leveldb/expired\").build();", "<namedCache name=\"vehicleCache\"> <persistence passivation=\"false\"> <leveldbStore xmlns=\"urn:infinispan:config:store:leveldb:6.0 location=\"/path/to/leveldb/data\" expiredLocation=\"/path/to/expired/data\" shared=\"false\" preload=\"true\"/> </persistence> </namedCache>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-LevelDB_Cache_Store
Using AMQ Core Protocol JMS
Using AMQ Core Protocol JMS Red Hat AMQ Core Protocol JMS 7.11 Developing an AMQ messaging client using Java
[ "cd <project-dir>", "<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>", "<dependency> <groupId>org.apache.activemq</groupId> <artifactId>artemis-jms-client</artifactId> <version>2.28.0.redhat-00011</version> </dependency>", "unzip amq-broker-7.11.4-maven-repository.zip", "unzip amq-broker-7.11.4.zip", "mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample", "> java -cp \"target\\classes;target\\dependency\\*\" org.apache.activemq.artemis.jms.example.QueueExample", "java -cp \"target/classes:target/dependency/*\" org.apache.activemq.artemis.jms.example.QueueExample Sent message: This is a text message Received message: This is a text message", "javax.naming.Context context = new javax.naming.InitialContext();", "java.naming.factory.initial = org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory", "java -Djava.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory", "Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory\"); InitialContext context = new InitialContext(env);", "connectionFactory. <lookup-name> = <connection-uri>", "connectionFactory.app1 = tcp://example.net:61616?clientID=backend", "ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");", "tcp://<host>:<port>[?<option>=<value>[&<option>=<value>...]]", "tcp://example.net:61616?clientID=backend", "(<connection-uri>[,<connection-uri>])[?<option>=<value>[&<option>=<value>...]]", "queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>", "queue.jobs = app1/work-items topic.notifications = app1/updates", "Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");", "connectionFactory.ConnectionFactory=tcp://localhost:61616?ha=true&reconnectAttempts=3", "connectionFactory.ConnectionFactory=(tcp://host1:port,tcp://host2:port)?ha=true&reconnectAttempts=3", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setInitialConnectAttempts(3);", "<configuration> <core> <initial-connect-attempts>3</initial-connect-attempts> 1 </core> </configuration>", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?clientFailureCheckPeriod=10000", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setClientFailureCheckPeriod(10000);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?connectionTtl=30000", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) 
cf.setConnectionTTL(30000);", "Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ...use the connection } finally { if (jmsConnection != null) { jmsConnection.close(); } }", "java.naming.factory.initial = ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=udp://231.7.7.7:9876", "final String groupAddress = \"231.7.7.7\"; final int groupPort = 9876; DiscoveryGroupConfiguration discoveryGroupConfiguration = new DiscoveryGroupConfiguration(); UDPBroadcastEndpointFactory udpBroadcastEndpointFactory = new UDPBroadcastEndpointFactory(); udpBroadcastEndpointFactory.setGroupAddress(groupAddress).setGroupPort(groupPort); discoveryGroupConfiguration.setBroadcastEndpointFactory(udpBroadcastEndpointFactory); ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA (discoveryGroupConfiguration, JMSFactoryType.CF); Connection jmsConnection1 = jmsConnectionFactory.createConnection(); Connection jmsConnection2 = jmsConnectionFactory.createConnection();", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=(tcp://myhost:61616,tcp://myhost2:61616)", "HashMap<String, Object> map = new HashMap<String, Object>(); map.put(\"host\", \"myhost\"); map.put(\"port\", \"61616\"); TransportConfiguration broker1 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put(\"host\", \"myhost2\"); map2.put(\"port\", \"61617\"); TransportConfiguration broker2 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map2); ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithHA (JMSFactoryType.CF, broker1, broker2);", "Map<String, Object> connectionParams = new HashMap<String, Object>(); connectionParams.put(org.apache.activemq.artemis.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 61617); TransportConfiguration transportConfiguration = new TransportConfiguration( \"org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory\", connectionParams); ConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration); Connection jmsConnection = connectionFactory.createConnection();", "BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); }", "BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); }", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?groupID=MyGroup", "Message message = new TextMessage(); message.setStringProperty(\"JMSXGroupID\", \"MyGroup\"); producer.send(message);", "Message jmsMessage = session.createMessage(); String myUniqueID = \"This is my unique id\"; 
message.setStringProperty(HDR_DUPLICATE_DETECTION_ID.toString(), myUniqueID);", "package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=300000", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(300000);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory java.naming.provider.url=tcp://localhost:61616?producerWindowSize=1024", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setProducerWindowSize(1024);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=-1", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(-1);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?consumerWindowSize=0", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerWindowSize(0);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory java.naming.provider.url=tcp://localhost:61616?consumerMaxRate=10", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConsumerMaxRate(10);", "java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory java.naming.provider.url=tcp://localhost:61616?producerMaxRate=10", "ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) 
cf.setProducerMaxRate(10);", "/home/ <username> /.m2/settings.xml", "C:\\Users\\<username>\\.m2\\settings.xml", "<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>", "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>", "<repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository>", "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name exampleQueue --address exampleQueue --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
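The configuration snippets above set up JNDI properties, connection factories and a running broker with an exampleQueue. Putting the pieces together, a minimal send-and-receive client might look like the following sketch (not taken from this guide; the broker URL and queue name are placeholders and assume the queue exists or address auto-creation is enabled):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class QueueSendReceive {
    public static void main(String[] args) throws Exception {
        // Create a connection factory directly from a core connection URI.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue");

            // Send a text message to the queue.
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("Hello from AMQ Core Protocol JMS"));

            // Receive it back, waiting up to five seconds.
            MessageConsumer consumer = session.createConsumer(queue);
            TextMessage received = (TextMessage) consumer.receive(5000);
            System.out.println("Received: " + (received == null ? "<none>" : received.getText()));
        }
    }
}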
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html-single/using_amq_core_protocol_jms/index
Chapter 2. Managing Certificates
Chapter 2. Managing Certificates Abstract TLS authentication uses X.509 certificates-a common, secure and reliable method of authenticating your application objects. You can create X.509 certificates that identify your Red Hat Fuse applications. 2.1. What is an X.509 Certificate? Role of certificates An X.509 certificate binds a name to a public key value. The role of the certificate is to associate a public key with the identity contained in the X.509 certificate. Integrity of the public key Authentication of a secure application depends on the integrity of the public key value in the application's certificate. If an impostor replaces the public key with its own public key, it can impersonate the true application and gain access to secure data. To prevent this type of attack, all certificates must be signed by a certification authority (CA). A CA is a trusted node that confirms the integrity of the public key value in a certificate. Digital signatures A CA signs a certificate by adding its digital signature to the certificate. A digital signature is a message encoded with the CA's private key. The CA's public key is made available to applications by distributing a certificate for the CA. Applications verify that certificates are validly signed by decoding the CA's digital signature with the CA's public key. Warning The supplied demonstration certificates are self-signed certificates. These certificates are insecure because anyone can access their private key. To secure your system, you must create new certificates signed by a trusted CA. Contents of an X.509 certificate An X.509 certificate contains information about the certificate subject and the certificate issuer (the CA that issued the certificate). A certificate is encoded in Abstract Syntax Notation One (ASN.1), a standard syntax for describing messages that can be sent or received on a network. The role of a certificate is to associate an identity with a public key value. In more detail, a certificate includes: A subject distinguished name (DN) that identifies the certificate owner. The public key associated with the subject. X.509 version information. A serial number that uniquely identifies the certificate. An issuer DN that identifies the CA that issued the certificate. The digital signature of the issuer. Information about the algorithm used to sign the certificate. Some optional X.509 v.3 extensions; for example, an extension exists that distinguishes between CA certificates and end-entity certificates. Distinguished names A DN is a general purpose X.500 identifier that is often used in the context of security. See Appendix A, ASN.1 and Distinguished Names for more details about DNs. 2.2. Certification Authorities 2.2.1. Introduction to Certificate Authorities A CA consists of a set of tools for generating and managing certificates and a database that contains all of the generated certificates. When setting up a system, it is important to choose a suitable CA that is sufficiently secure for your requirements. There are two types of CA you can use: commercial CAs are companies that sign certificates for many systems. private CAs are trusted nodes that you set up and use to sign certificates for your system only. 2.2.2. Commercial Certification Authorities Signing certificates There are several commercial CAs available. The mechanism for signing a certificate using a commercial CA depends on which CA you choose. 
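Regardless of which CA signs your certificates, you can inspect the result and check its signature with the openssl command-line utility. The following is an illustrative sketch only, not part of the original text; cert.pem and ca.pem are placeholder file names for a signed certificate and the corresponding CA certificate:

# Display the main X.509 fields: subject DN, issuer DN, serial number, validity, and extensions
openssl x509 -in cert.pem -noout -text

# Check that the certificate's digital signature chains back to the CA certificate in ca.pem
openssl verify -CAfile ca.pem cert.pem

If the signature is valid, openssl verify prints cert.pem: OK; otherwise it reports the point in the chain where verification failed.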
Advantages of commercial CAs An advantage of commercial CAs is that they are often trusted by a large number of people. If your applications are designed to be available to systems external to your organization, use a commercial CA to sign your certificates. If your applications are for use within an internal network, a private CA might be appropriate. Criteria for choosing a CA Before choosing a commercial CA, consider the following criteria: What are the certificate-signing policies of the commercial CAs? Are your applications designed to be available on an internal network only? What are the potential costs of setting up a private CA compared to the costs of subscribing to a commercial CA? 2.2.3. Private Certification Authorities Choosing a CA software package If you want to take responsibility for signing certificates for your system, set up a private CA. To set up a private CA, you require access to a software package that provides utilities for creating and signing certificates. Several packages of this type are available. OpenSSL software package One software package that allows you to set up a private CA is OpenSSL, http://www.openssl.org . The OpenSSL package includes basic command line utilities for generating and signing certificates. Complete documentation for the OpenSSL command line utilities is available at http://www.openssl.org/docs . Setting up a private CA using OpenSSL To set up a private CA, see the instructions in Section 2.5, "Creating Your Own Certificates" . Choosing a host for a private certification authority Choosing a host is an important step in setting up a private CA. The level of security associated with the CA host determines the level of trust associated with certificates signed by the CA. If you are setting up a CA for use in the development and testing of Red Hat Fuse applications, use any host that the application developers can access. However, when you create the CA certificate and private key, do not make the CA private key available on any hosts where security-critical applications run. Security precautions If you are setting up a CA to sign certificates for applications that you are going to deploy, make the CA host as secure as possible. For example, take the following precautions to secure your CA: Do not connect the CA to a network. Restrict all access to the CA to a limited set of trusted users. Use an RF-shield to protect the CA from radio-frequency surveillance. 2.3. Certificate Chaining Certificate chain A certificate chain is a sequence of certificates, where each certificate in the chain is signed by the subsequent certificate. Figure 2.1, "A Certificate Chain of Depth 2" shows an example of a simple certificate chain. Figure 2.1. A Certificate Chain of Depth 2 Self-signed certificate The last certificate in the chain is normally a self-signed certificate -a certificate that signs itself. Chain of trust The purpose of a certificate chain is to establish a chain of trust from a peer certificate to a trusted CA certificate. The CA vouches for the identity in the peer certificate by signing it. If the CA is one that you trust (indicated by the presence of a copy of the CA certificate in your root certificate directory), this implies you can trust the signed peer certificate as well. Certificates signed by multiple CAs A CA certificate can be signed by another CA. For example, an application certificate could be signed by the CA for the finance department of Progress Software, which in turn is signed by a self-signed commercial CA. 
Figure 2.2, "A Certificate Chain of Depth 3" shows what this certificate chain looks like. Figure 2.2. A Certificate Chain of Depth 3 Trusted CAs An application can accept a peer certificate, provided it trusts at least one of the CA certificates in the signing chain. 2.4. Special Requirements on HTTPS Certificates Overview The HTTPS specification mandates that HTTPS clients must be capable of verifying the identity of the server. This can potentially affect how you generate your X.509 certificates. The mechanism for verifying the server identity depends on the type of client. Some clients might verify the server identity by accepting only those server certificates signed by a particular trusted CA. In addition, clients can inspect the contents of a server certificate and accept only the certificates that satisfy specific constraints. In the absence of an application-specific mechanism, the HTTPS specification defines a generic mechanism, known as the HTTPS URL integrity check , for verifying the server identity. This is the standard mechanism used by Web browsers. HTTPS URL integrity check The basic idea of the URL integrity check is that the server certificate's identity must match the server host name. This integrity check has an important impact on how you generate X.509 certificates for HTTPS: the certificate identity (usually the certificate subject DN's common name) must match the host name on which the HTTPS server is deployed . The URL integrity check is designed to prevent man-in-the-middle attacks. Reference The HTTPS URL integrity check is specified by RFC 2818, published by the Internet Engineering Task Force (IETF) at http://www.ietf.org/rfc/rfc2818.txt . How to specify the certificate identity The certificate identity used in the URL integrity check can be specified in one of the following ways: Using commonName Using subjectAltName Using commonName The usual way to specify the certificate identity (for the purpose of the URL integrity check) is through the Common Name (CN) in the subject DN of the certificate. For example, if a server supports secure TLS connections at the following URL: The corresponding server certificate would have the following subject DN: Where the CN has been set to the host name, www.redhat.com . For details of how to set the subject DN in a new certificate, see Section 2.5, "Creating Your Own Certificates" . Using subjectAltName (multi-homed hosts) Using the subject DN's Common Name for the certificate identity has the disadvantage that only one host name can be specified at a time. If you deploy a certificate on a multi-homed host, however, you might find it is practical to allow the certificate to be used with any of the multi-homed host names. In this case, it is necessary to define a certificate with multiple, alternative identities, and this is only possible using the subjectAltName certificate extension. For example, if you have a multi-homed host that supports connections to either of the following host names: Then you can define a subjectAltName that explicitly lists both of these DNS host names. If you generate your certificates using the openssl utility, edit the relevant line of your openssl.cnf configuration file to specify the value of the subjectAltName extension, as follows: Where the HTTPS protocol matches the server host name against either of the DNS host names listed in the subjectAltName (the subjectAltName takes precedence over the Common Name). The HTTPS protocol also supports the wildcard character, * , in host names. 
For example, you can define the subjectAltName as follows: This certificate identity matches any three-component host name in the domain jboss.org . Warning You must never use the wildcard character in the domain name (and you must take care never to do this accidentally by forgetting to type the dot, . , delimiter in front of the domain name). For example, if you specified *jboss.org , your certificate could be used on *any* domain that ends in the letters jboss . 2.5. Creating Your Own Certificates 2.5.1. Prerequisites OpenSSL utilities The steps described in this section are based on the OpenSSL command-line utilities from the OpenSSL project. Further documentation of the OpenSSL command-line utilities can be obtained at http://www.openssl.org/docs/ . Sample CA directory structure For the purposes of illustration, the CA database is assumed to have the following directory structure: X509CA /ca X509CA /certs X509CA /newcerts X509CA /crl Where X509CA is the parent directory of the CA database. 2.5.2. Set Up Your Own CA Substeps to perform This section describes how to set up your own private CA. Before setting up a CA for a real deployment, read the additional notes in Section 2.2.3, "Private Certification Authorities" . To set up your own CA, perform the following steps: the section called "Add the bin directory to your PATH" the section called "Create the CA directory hierarchy" the section called "Copy and edit the openssl.cnf file" the section called "Initialize the CA database" the section called "Create a self-signed CA certificate and private key" Add the bin directory to your PATH On the secure CA host, add the OpenSSL bin directory to your path: Windows UNIX This step makes the openssl utility available from the command line. Create the CA directory hierarchy Create a new directory, X509CA , to hold the new CA. This directory is used to hold all of the files associated with the CA. Under the X509CA directory, create the following hierarchy of directories: X509CA /ca X509CA /certs X509CA /newcerts X509CA /crl Copy and edit the openssl.cnf file Copy the sample openssl.cnf from your OpenSSL installation to the X509CA directory. Edit the openssl.cnf to reflect the directory structure of the X509CA directory, and to identify the files used by the new CA. Edit the [CA_default] section of the openssl.cnf file to look like the following: You might decide to edit other details of the OpenSSL configuration at this point-for more details, see http://www.openssl.org/docs/ . Initialize the CA database In the X509CA directory, initialize two files, serial and index.txt . Windows To initialize the serial file in Windows, enter the following command: To create an empty file, index.txt , in Windows start Windows Notepad at the command line in the X509CA directory, as follows: In response to the dialog box with the text, Cannot find the text.txt file. Do you want to create a new file? , click Yes , and close Notepad. UNIX To initialize the serial file and the index.txt file in UNIX, enter the following command: These files are used by the CA to maintain its database of certificate files. Note The index.txt file must initially be completely empty, not even containing white space. Create a self-signed CA certificate and private key Create a new self-signed CA certificate and private key with the following command: The command prompts you for a pass phrase for the CA private key and details of the CA distinguished name. 
For example: Note The security of the CA depends on the security of the private key file and the private key pass phrase used in this step. You must ensure that the file names and location of the CA certificate and private key, new_ca.pem and new_ca_pk.pem , are the same as the values specified in openssl.cnf (see the preceding step). You are now ready to sign certificates with your CA. 2.5.3. Use the CA to Create Signed Certificates in a Java Keystore Substeps to perform To create and sign a certificate in a Java keystore (JKS), CertName .jks , perform the following substeps: the section called "Add the Java bin directory to your PATH" the section called "Generate a certificate and private key pair" the section called "Create a certificate signing request" the section called "Sign the CSR" the section called "Convert to PEM format" the section called "Concatenate the files" the section called "Update keystore with the full certificate chain" the section called "Repeat steps as required" Add the Java bin directory to your PATH If you have not already done so, add the Java bin directory to your path: Windows UNIX This step makes the keytool utility available from the command line. Generate a certificate and private key pair Open a command prompt and change directory to the directory where you store your keystore files, KeystoreDir . Enter the following command: This keytool command, invoked with the -genkey option, generates an X.509 certificate and a matching private key. The certificate and the key are both placed in a key entry in a newly created keystore, CertName .jks . Because the specified keystore, CertName .jks , did not exist prior to issuing the command, keytool implicitly creates a new keystore. The -dname and -validity flags define the contents of the newly created X.509 certificate, specifying the subject DN and the days before expiration respectively. For more details about DN format, see Appendix A, ASN.1 and Distinguished Names . Some parts of the subject DN must match the values in the CA certificate (specified in the CA Policy section of the openssl.cnf file). The default openssl.cnf file requires the following entries to match: Country Name (C) State or Province Name (ST) Organization Name (O) Note If you do not observe the constraints, the OpenSSL CA will refuse to sign the certificate (see the section called "Sign the CSR" ). Create a certificate signing request Create a new certificate signing request (CSR) for the CertName .jks certificate, as follows: This command exports a CSR to the file, CertName _csr.pem . Sign the CSR Sign the CSR using your CA, as follows: To sign the certificate successfully, you must enter the CA private key pass phrase (see Section 2.5.2, "Set Up Your Own CA" ). Note If you want to sign the CSR using a CA certificate other than the default CA, use the -cert and -keyfile options to specify the CA certificate and its private key file, respectively. Convert to PEM format Convert the signed certificate, CertName .pem , to PEM only format, as follows: Concatenate the files Concatenate the CA certificate file and CertName .pem certificate file, as follows: Windows UNIX Update keystore with the full certificate chain Update the keystore, CertName .jks , by importing the full certificate chain for the certificate, as follows: Repeat steps as required Repeat steps 2 through 7, to create a complete set of certificates for your system. 2.5.4. 
Use the CA to Create Signed PKCS#12 Certificates Substeps to perform If you have set up a private CA, as described in Section 2.5.2, "Set Up Your Own CA" , you are now ready to create and sign your own certificates. To create and sign a certificate in PKCS#12 format, CertName .p12 , perform the following substeps: the section called "Add the bin directory to your PATH" . the section called "Configure the subjectAltName extension (Optional)" . the section called "Create a certificate signing request" . the section called "Sign the CSR" . the section called "Concatenate the files" . the section called "Create a PKCS#12 file" . the section called "Repeat steps as required" . the section called "(Optional) Clear the subjectAltName extension" . Add the bin directory to your PATH If you have not already done so, add the OpenSSL bin directory to your path, as follows: Windows UNIX This step makes the openssl utility available from the command line. Configure the subjectAltName extension (Optional) Perform this step, if the certificate is intended for a HTTPS server whose clients enforce URL integrity check, and if you plan to deploy the server on a multi-homed host or a host with several DNS name aliases (for example, if you are deploying the certificate on a multi-homed Web server). In this case, the certificate identity must match multiple host names and this can be done only by adding a subjectAltName certificate extension (see Section 2.4, "Special Requirements on HTTPS Certificates" ). To configure the subjectAltName extension, edit your CA's openssl.cnf file as follows: Add the following req_extensions setting to the [req] section (if not already present in your openssl.cnf file): Add the [v3_req] section header (if not already present in your openssl.cnf file). Under the [v3_req] section, add or modify the subjectAltName setting, setting it to the list of your DNS host names. For example, if the server host supports the alternative DNS names, www.redhat.com and jboss.org , set the subjectAltName as follows: Add a copy_extensions setting to the appropriate CA configuration section. The CA configuration section used for signing certificates is one of the following: The section specified by the -name option of the openssl ca command, The section specified by the default_ca setting under the [ca] section (usually [CA_default] ). For example, if the appropriate CA configuration section is [CA_default] , set the copy_extensions property as follows: This setting ensures that certificate extensions present in the certificate signing request are copied into the signed certificate. Create a certificate signing request Create a new certificate signing request (CSR) for the CertName .p12 certificate, as shown: This command prompts you for a pass phrase for the certificate's private key, and for information about the certificate's distinguished name. Some of the entries in the CSR distinguished name must match the values in the CA certificate (specified in the CA Policy section of the openssl.cnf file). The default openssl.cnf file requires that the following entries match: Country Name State or Province Name Organization Name The certificate subject DN's Common Name is the field that is usually used to represent the certificate owner's identity. The Common Name must comply with the following conditions: The Common Name must be distinct for every certificate generated by the OpenSSL certificate authority. 
If your HTTPS clients implement the URL integrity check, you must ensure that the Common Name is identical to the DNS name of the host where the certificate is to be deployed (see Section 2.4, "Special Requirements on HTTPS Certificates" ). Note For the purpose of the HTTPS URL integrity check, the subjectAltName extension takes precedence over the Common Name. Sign the CSR Sign the CSR using your CA, as follows: This command requires the pass phrase for the private key associated with the new_ca.pem CA certificate. For example: To sign the certificate successfully, you must enter the CA private key pass phrase (see Section 2.5.2, "Set Up Your Own CA" ). Note If you did not set copy_extensions=copy under the [CA_default] section in the openssl.cnf file, the signed certificate will not include any of the certificate extensions that were in the original CSR. Concatenate the files Concatenate the CA certificate file, CertName .pem certificate file, and CertName _pk.pem private key file as follows: Windows UNIX Create a PKCS#12 file Create a PKCS#12 file from the CertName _list.pem file as follows: You are prompted to enter a password to encrypt the PKCS#12 certificate. Usually this password is the same as the CSR password (this is required by many certificate repositories). Repeat steps as required Repeat steps 3 through 6, to create a complete set of certificates for your system. (Optional) Clear the subjectAltName extension After generating certificates for a particular host machine, it is advisable to clear the subjectAltName setting in the openssl.cnf file to avoid accidentally assigning the wrong DNS names to another set of certificates. In the openssl.cnf file, comment out the subjectAltName setting (by adding a # character at the start of the line), and also comment out the copy_extensions setting.
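As an optional final check, which is not part of the original steps and reuses the placeholder names CertName, CertPassword, and X509CA from above, you can confirm that the generated keystore and PKCS#12 file contain the expected certificate chain before you distribute them:

# List the keystore entries; the signed key entry should report a certificate chain length greater than 1
keytool -list -v -keystore CertName.jks -storepass CertPassword

# Summarize the structure of the PKCS#12 file; you are prompted for its pass phrase
openssl pkcs12 -info -in X509CA/certs/CertName.p12 -noout

The PKCS#12 summary should show certificate bags for the CA and signed certificates together with an encrypted (shrouded) private key.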
[ "https://www.redhat.com/secure", "C=IE,ST=Co. Dublin,L=Dublin,O=RedHat, OU=System,CN=www.redhat.com", "www.redhat.com www.jboss.org", "subjectAltName=DNS:www.redhat.com,DNS:www.jboss.org", "subjectAltName=DNS:*.jboss.org", "> set PATH= OpenSSLDir \\bin;%PATH%", "% PATH= OpenSSLDir /bin:USDPATH; export PATH", "############################################################# [ CA_default ] dir = X509CA # Where CA files are kept certs = USDdir/certs # Where issued certs are kept crl_dir = USDdir/crl # Where the issued crl are kept database = USDdir/index.txt # Database index file new_certs_dir = USDdir/newcerts # Default place for new certs certificate = USDdir/ca/new_ca.pem # The CA certificate serial = USDdir/serial # The current serial number crl = USDdir/crl.pem # The current CRL private_key = USDdir/ca/new_ca_pk.pem # The private key RANDFILE = USDdir/ca/.rand Private random number file x509_extensions = usr_cert # The extensions to add to the cert", "> echo 01 > serial", "> notepad index.txt", "% echo \"01\" > serial % touch index.txt", "openssl req -x509 -new -config X509CA/openssl.cnf -days 365 -out X509CA/ca/new_ca.pem -keyout X509CA/ca/new_ca_pk.pem", "Using configuration from X509CA /openssl.cnf Generating a 512 bit RSA private key ....++ .++ writing new private key to 'new_ca_pk.pem' Enter PEM pass phrase: Verifying password - Enter PEM pass phrase: ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) []:IE State or Province Name (full name) []:Co. Dublin Locality Name (eg, city) []:Dublin Organization Name (eg, company) []:Red Hat Organizational Unit Name (eg, section) []:Finance Common Name (eg, YOUR name) []:Gordon Brown Email Address []:[email protected]", "> set PATH= JAVA_HOME \\bin;%PATH%", "% PATH= JAVA_HOME /bin:USDPATH; export PATH", "keytool -genkey -dname \"CN=Alice, OU=Engineering, O=Progress, ST=Co. 
Dublin, C=IE\" -validity 365 -alias CertAlias -keypass CertPassword -keystore CertName .jks -storepass CertPassword", "keytool -certreq -alias CertAlias -file CertName _csr.pem -keypass CertPassword -keystore CertName .jks -storepass CertPassword", "openssl ca -config X509CA /openssl.cnf -days 365 -in CertName _csr.pem -out CertName .pem", "openssl x509 -in CertName .pem -out CertName .pem -outform PEM", "copy CertName .pem + X509CA \\ca\\new_ca.pem CertName .chain", "cat CertName .pem X509CA /ca/new_ca.pem> CertName .chain", "keytool -import -file CertName .chain -keypass CertPassword -keystore CertName .jks -storepass CertPassword", "> set PATH= OpenSSLDir \\bin;%PATH%", "% PATH= OpenSSLDir /bin:USDPATH; export PATH", "openssl Configuration File [req] req_extensions=v3_req", "openssl Configuration File [v3_req] subjectAltName=DNS:www.redhat.com,DNS:jboss.org", "openssl Configuration File [CA_default] copy_extensions=copy", "openssl req -new -config X509CA /openssl.cnf -days 365 -out X509CA /certs/ CertName _csr.pem -keyout X509CA /certs/ CertName _pk.pem", "Using configuration from X509CA /openssl.cnf Generating a 512 bit RSA private key .++ .++ writing new private key to ' X509CA /certs/ CertName _pk.pem' Enter PEM pass phrase: Verifying password - Enter PEM pass phrase: ----- You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) []:IE State or Province Name (full name) []:Co. Dublin Locality Name (eg, city) []:Dublin Organization Name (eg, company) []:Red Hat Organizational Unit Name (eg, section) []:Systems Common Name (eg, YOUR name) []:Artix Email Address []:[email protected] Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []:password An optional company name []:Red Hat", "openssl ca -config X509CA /openssl.cnf -days 365 -in X509CA /certs/ CertName _csr.pem -out X509CA /certs/ CertName .pem", "Using configuration from X509CA /openssl.cnf Enter PEM pass phrase: Check that the request matches the signature Signature ok The Subjects Distinguished Name is as follows countryName :PRINTABLE:'IE' stateOrProvinceName :PRINTABLE:'Co. Dublin' localityName :PRINTABLE:'Dublin' organizationName :PRINTABLE:'Red Hat' organizationalUnitName:PRINTABLE:'Systems' commonName :PRINTABLE:'Bank Server Certificate' emailAddress :IA5STRING:' [email protected] ' Certificate is to be certified until May 24 13:06:57 2000 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y Write out database with 1 new entries Data Base Updated", "copy X509CA \\ca\\new_ca.pem + X509CA \\certspass:quotes[_CertName_].pem + X509CA \\certspass:quotes[_CertName_]_pk.pem X509CA \\certspass:quotes[_CertName_]_list.pem", "cat X509CA /ca/new_ca.pem X509CA /certs/ CertName .pem X509CA /certs/ CertName _pk.pem > X509CA /certs/ CertName _list.pem", "openssl pkcs12 -export -in X509CA /certs/ CertName _list.pem -out X509CA /certs/ CertName .p12 -name \"New cert\"" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_security_guide/managecertscxf
Chapter 1. Introduction to Service Telemetry Framework 1.5
Chapter 1. Introduction to Service Telemetry Framework 1.5 Service Telemetry Framework (STF) collects monitoring data from Red Hat OpenStack Platform (RHOSP) or third-party nodes. You can use STF to perform the following tasks: Store or archive the monitoring data for historical information. View the monitoring data graphically on the dashboard. Use the monitoring data to trigger alerts or warnings. The monitoring data can be either metric or event: Metric A numeric measurement of an application or system. Event Irregular and discrete occurrences that happen in a system. The components of STF use a message bus for data transport. Other modular components that receive and store data are deployed as containers on Red Hat OpenShift Container Platform. Important STF is compatible with Red Hat OpenShift Container Platform version 4.10 through 4.12. Additional resources Red Hat OpenShift Container Platform product documentation Service Telemetry Framework Performance and Scaling OpenShift Container Platform 4.12 Documentation 1.1. Support for Service Telemetry Framework Red Hat supports the core Operators and workloads, including AMQ Interconnect, Service Telemetry Operator, and Smart Gateway Operator. Red Hat does not support the community Operators or workload components, such as Elasticsearch, Prometheus, Alertmanager, Grafana, and their Operators. You can only deploy STF in a fully connected network environment. You cannot deploy STF in Red Hat OpenShift Container Platform-disconnected environments or network proxy environments. For more information about STF life cycle and support status, see the Service Telemetry Framework Supported Version Matrix . 1.2. Service Telemetry Framework architecture Service Telemetry Framework (STF) uses a client-server architecture, in which Red Hat OpenStack Platform (RHOSP) is the client and Red Hat OpenShift Container Platform is the server. STF consists of the following components: Data collection collectd: Collects infrastructure metrics and events. Ceilometer: Collects RHOSP metrics and events. Transport AMQ Interconnect: An AMQP 1.x compatible messaging bus that provides fast and reliable data transport to transfer the metrics to STF for storage. Smart Gateway: A Golang application that takes metrics and events from the AMQP 1.x bus to deliver to Elasticsearch or Prometheus. Data storage Prometheus: Time-series data storage that stores STF metrics received from the Smart Gateway. Elasticsearch: Events data storage that stores STF events received from the Smart Gateway. Observation Alertmanager: An alerting tool that uses Prometheus alert rules to manage alerts. Grafana: A visualization and analytics application that you can use to query, visualize, and explore data. The following table describes the application of the client and server components: Table 1.1. Client and server components of STF Component Client Server An AMQP 1.x compatible messaging bus yes yes Smart Gateway no yes Prometheus no yes Elasticsearch no yes collectd yes no Ceilometer yes no Important To ensure that the monitoring platform can report operational problems with your cloud, do not install STF on the same infrastructure that you are monitoring. Figure 1.1. Service Telemetry Framework architecture overview For client side metrics, collectd provides infrastructure metrics without project data, and Ceilometer provides RHOSP platform data based on projects or user workload. 
Both Ceilometer and collectd deliver data to Prometheus by using the AMQ Interconnect transport, delivering the data through the message bus. On the server side, a Golang application called the Smart Gateway takes the data stream from the bus and exposes it as a local scrape endpoint for Prometheus. If you plan to collect and store events, collectd and Ceilometer deliver event data to the server side by using the AMQ Interconnect transport. Another Smart Gateway writes the data to the Elasticsearch datastore. Server-side STF monitoring infrastructure consists of the following layers: Service Telemetry Framework 1.5 Red Hat OpenShift Container Platform 4.10 through 4.12 Infrastructure platform Figure 1.2. Server-side STF monitoring infrastructure 1.3. Installation size of Red Hat OpenShift Container Platform The size of your Red Hat OpenShift Container Platform installation depends on the following factors: The infrastructure that you select. The number of nodes that you want to monitor. The number of metrics that you want to collect. The resolution of metrics. The length of time that you want to store the data. Installation of Service Telemetry Framework (STF) depends on an existing Red Hat OpenShift Container Platform environment. For more information about minimum resources requirements when you install Red Hat OpenShift Container Platform on baremetal, see Minimum resource requirements in the Installing a cluster on bare metal guide. For installation requirements of the various public and private cloud platforms that you can install, see the corresponding installation documentation for your cloud platform of choice.
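Once STF is deployed, the layering described above can be checked from the Red Hat OpenShift Container Platform command line. The following commands are only an illustrative sketch; they assume the service-telemetry namespace that is typically used for STF, which might differ in your deployment:

# Confirm which Operators are installed for STF
oc get csv -n service-telemetry

# Confirm that the Smart Gateway, Prometheus, Elasticsearch, and AMQ Interconnect pods are running
oc get pods -n service-telemetry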
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_1.5/assembly-introduction-to-stf_assembly
Chapter 8. Scheduling NUMA-aware workloads
Chapter 8. Scheduling NUMA-aware workloads Learn about NUMA-aware scheduling and how you can use it to deploy high performance workloads in an OpenShift Container Platform cluster. Important NUMA-aware scheduling is a Technology Preview feature in OpenShift Container Platform versions 4.12.0 to 4.12.23 only. It is generally available in OpenShift Container Platform version 4.12.24 and later. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The NUMA Resources Operator allows you to schedule high-performance workloads in the same NUMA zone. It deploys a node resources exporting agent that reports on available cluster node NUMA resources, and a secondary scheduler that manages the workloads. 8.1. About NUMA-aware scheduling Introduction to NUMA Non-Uniform Memory Access (NUMA) is a compute platform architecture that allows different CPUs to access different regions of memory at different speeds. NUMA resource topology refers to the locations of CPUs, memory, and PCI devices relative to each other in the compute node. Colocated resources are said to be in the same NUMA zone . For high-performance applications, the cluster needs to process pod workloads in a single NUMA zone. Performance considerations NUMA architecture allows a CPU with multiple memory controllers to use any available memory across CPU complexes, regardless of where the memory is located. This allows for increased flexibility at the expense of performance. A CPU processing a workload using memory that is outside its NUMA zone is slower than a workload processed in a single NUMA zone. Also, for I/O-constrained workloads, the network interface on a distant NUMA zone slows down how quickly information can reach the application. High-performance workloads, such as telecommunications workloads, cannot operate to specification under these conditions. NUMA-aware scheduling NUMA-aware scheduling aligns the requested cluster compute resources (CPUs, memory, devices) in the same NUMA zone to process latency-sensitive or high-performance workloads efficiently. NUMA-aware scheduling also improves pod density per compute node for greater resource efficiency. Integration with Node Tuning Operator By integrating the Node Tuning Operator's performance profile with NUMA-aware scheduling, you can further configure CPU affinity to optimize performance for latency-sensitive workloads. Default scheduling logic The default OpenShift Container Platform pod scheduler scheduling logic considers the available resources of the entire compute node, not individual NUMA zones. If the most restrictive resource alignment is requested in the kubelet topology manager, error conditions can occur when admitting the pod to a node. Conversely, if the most restrictive resource alignment is not requested, the pod can be admitted to the node without proper resource alignment, leading to worse or unpredictable performance. 
For example, runaway pod creation with Topology Affinity Error statuses can occur when the pod scheduler makes suboptimal scheduling decisions for guaranteed pod workloads without knowing if the pod's requested resources are available. Scheduling mismatch decisions can cause indefinite pod startup delays. Also, depending on the cluster state and resource allocation, poor pod scheduling decisions can cause extra load on the cluster because of failed startup attempts. NUMA-aware pod scheduling diagram The NUMA Resources Operator deploys a custom NUMA resources secondary scheduler and other resources to mitigate against the shortcomings of the default OpenShift Container Platform pod scheduler. The following diagram provides a high-level overview of NUMA-aware pod scheduling. Figure 8.1. NUMA-aware scheduling overview NodeResourceTopology API The NodeResourceTopology API describes the available NUMA zone resources in each compute node. NUMA-aware scheduler The NUMA-aware secondary scheduler receives information about the available NUMA zones from the NodeResourceTopology API and schedules high-performance workloads on a node where it can be optimally processed. Node topology exporter The node topology exporter exposes the available NUMA zone resources for each compute node to the NodeResourceTopology API. The node topology exporter daemon tracks the resource allocation from the kubelet by using the PodResources API. PodResources API The PodResources API is local to each node and exposes the resource topology and available resources to the kubelet. Additional resources For more information about running secondary pod schedulers in your cluster and how to deploy pods with a secondary pod scheduler, see Scheduling pods using a secondary scheduler . 8.2. Installing the NUMA Resources Operator NUMA Resources Operator deploys resources that allow you to schedule NUMA-aware workloads and deployments. You can install the NUMA Resources Operator using the OpenShift Container Platform CLI or the web console. 8.2.1. Installing the NUMA Resources Operator using the CLI As a cluster administrator, you can install the Operator using the CLI. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NUMA Resources Operator: Save the following YAML in the nro-namespace.yaml file: apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources Create the Namespace CR by running the following command: USD oc create -f nro-namespace.yaml Create the Operator group for the NUMA Resources Operator: Save the following YAML in the nro-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources Create the OperatorGroup CR by running the following command: USD oc create -f nro-operatorgroup.yaml Create the subscription for the NUMA Resources Operator: Save the following YAML in the nro-sub.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: "4.12" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f nro-sub.yaml Verification Verify that the installation succeeded by inspecting the CSV resource in the openshift-numaresources namespace. 
Run the following command: USD oc get csv -n openshift-numaresources Example output NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.12.2 numaresources-operator 4.12.2 Succeeded 8.2.2. Installing the NUMA Resources Operator using the web console As a cluster administrator, you can install the NUMA Resources Operator using the web console. Procedure Create a namespace for the NUMA Resources Operator: In the OpenShift Container Platform web console, click Administration Namespaces . Click Create Namespace , enter openshift-numaresources in the Name field, and then click Create . Install the NUMA Resources Operator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose numaresources-operator from the list of available Operators, and then click Install . In the Installed Namespaces field, select the openshift-numaresources namespace, and then click Install . Optional: Verify that the NUMA Resources Operator installed successfully: Switch to the Operators Installed Operators page. Ensure that NUMA Resources Operator is listed in the openshift-numaresources namespace with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. If the Operator does not appear as installed, to troubleshoot further: Go to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Go to the Workloads Pods page and check the logs for pods in the default project. 8.3. Scheduling NUMA-aware workloads Clusters running latency-sensitive workloads typically feature performance profiles that help to minimize workload latency and optimize performance. The NUMA-aware scheduler deploys workloads based on available node NUMA resources and with respect to any performance profile settings applied to the node. The combination of NUMA-aware deployments, and the performance profile of the workload, ensures that workloads are scheduled in a way that maximizes performance. For the NUMA Resources Operator to be fully operational, you must deploy the NUMAResourcesOperator custom resource and the NUMA-aware secondary pod scheduler. 8.3.1. Creating the NUMAResourcesOperator custom resource When you have installed the NUMA Resources Operator, then create the NUMAResourcesOperator custom resource (CR) that instructs the NUMA Resources Operator to install all the cluster infrastructure needed to support the NUMA-aware scheduler, including daemon sets and APIs. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. 
Procedure Create the MachineConfigPool custom resource that enables custom kubelet configurations for worker nodes: Save the following YAML in the nro-machineconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: labels: cnf-worker-tuning: enabled machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" name: worker spec: machineConfigSelector: matchLabels: machineconfiguration.openshift.io/role: worker nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Create the MachineConfigPool CR by running the following command: USD oc create -f nro-machineconfig.yaml Create the NUMAResourcesOperator custom resource: Save the following minimal required YAML file example as nrop.yaml : apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 1 This should match the MachineConfigPool that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool named worker-cnf that designates a set of nodes expected to run telecommunications workloads. Create the NUMAResourcesOperator CR by running the following command: USD oc create -f nrop.yaml Note Creating the NUMAResourcesOperator triggers a reboot on the corresponding machine config pool and therefore the affected node. Verification Verify that the NUMA Resources Operator deployed successfully by running the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io Example output NAME AGE numaresourcesoperator 27s After a few minutes, run the following command to verify that the required resources deployed successfully: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s 8.3.2. 
Deploying the NUMA-aware secondary pod scheduler After you install the NUMA Resources Operator, do the following to deploy the NUMA-aware secondary pod scheduler: Procedure Create the KubeletConfig custom resource that configures the pod admittance policy for the machine profile: Create the NUMAResourcesScheduler custom resource that deploys the NUMA-aware custom pod scheduler: Save the following minimal required YAML in the nro-scheduler.yaml file: apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.12" Create the NUMAResourcesScheduler CR by running the following command: USD oc create -f nro-scheduler.yaml After a few seconds, run the following command to confirm the successful deployment of the required resources: USD oc get all -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m 8.3.3. Configuring a single NUMA node policy The NUMA Resources Operator requires a single NUMA node policy to be configured on the cluster. This can be achieved in two ways: by creating and applying a performance profile, or by configuring a KubeletConfig. Note The preferred way to configure a single NUMA node policy is to apply a performance profile. You can use the Performance Profile Creator (PPC) tool to create the performance profile. If a performance profile is created on the cluster, it automatically creates other tuning components like KubeletConfig and the tuned profile. For more information about creating a performance profile, see "About the Performance Profile Creator" in the "Additional resources" section. Additional resources About the Performance Profile Creator . 8.3.4. Sample performance profile This example YAML shows a performance profile created by using the performance profile creator (PPC) tool: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: "3" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: "" 1 nodeSelector: node-role.kubernetes.io/worker: "" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true 1 This should match the MachineConfigPool that you want to configure the NUMA Resources Operator on. For example, you might have created a MachineConfigPool named worker-cnf that designates a set of nodes that run telecommunications workloads. 2 The topologyPolicy must be set to single-numa-node . Ensure that this is the case by setting the topology-manager-policy argument to single-numa-node when running the PPC tool. 8.3.5. 
Creating a KubeletConfig CRD The recommended way to configure a single NUMA node policy is to apply a performance profile. Another way is by creating and applying a KubeletConfig custom resource (CR), as shown in the following procedure. Procedure Create the KubeletConfig custom resource (CR) that configures the pod admittance policy for the machine profile: Save the following YAML in the nro-kubeletconfig.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 kubeletConfig: cpuManagerPolicy: "static" 2 cpuManagerReconcilePeriod: "5s" reservedSystemCPUs: "0,1" 3 memoryManagerPolicy: "Static" 4 evictionHard: memory.available: "100Mi" kubeReserved: memory: "512Mi" reservedMemory: - numaNode: 0 limits: memory: "1124Mi" systemReserved: memory: "512Mi" topologyManagerPolicy: "single-numa-node" 5 1 Adjust this label to match the machineConfigPoolSelector in the NUMAResourcesOperator CR. 2 For cpuManagerPolicy , static must use a lowercase s . 3 Adjust this based on the CPU on your nodes. 4 For memoryManagerPolicy , Static must use an uppercase S . 5 topologyManagerPolicy must be set to single-numa-node . Create the KubeletConfig CR by running the following command: USD oc create -f nro-kubeletconfig.yaml Note Applying performance profile or KubeletConfig automatically triggers rebooting of the nodes. If no reboot is triggered, you can troubleshoot the issue by looking at the labels in KubeletConfig that address the node group. 8.3.6. Scheduling workloads with the NUMA-aware scheduler Now that topo-aware-scheduler is installed, the NUMAResourcesOperator and NUMAResourcesScheduler CRs are applied and your cluster has a matching performance profile or kubeletconfig , you can schedule workloads with the NUMA-aware scheduler using deployment CRs that specify the minimum required resources to process the workload. The following example deployment uses NUMA-aware scheduling for a sample workload. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the name of the NUMA-aware scheduler that is deployed in the cluster by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output "topo-aware-scheduler" Create a Deployment CR that uses scheduler named topo-aware-scheduler , for example: Save the following YAML in the nro-deployment.yaml file: apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: "100Mi" cpu: "10" requests: memory: "100Mi" cpu: "10" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: ["/bin/sh", "-c"] args: [ "while true; do sleep 1h; done;" ] resources: limits: memory: "100Mi" cpu: "8" requests: memory: "100Mi" cpu: "8" 1 schedulerName must match the name of the NUMA-aware scheduler that is deployed in your cluster, for example topo-aware-scheduler . 
Create the Deployment CR by running the following command: USD oc create -f nro-deployment.yaml Verification Verify that the deployment was successful: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m Verify that the topo-aware-scheduler is scheduling the deployed pod by running the following command: USD oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1 Note Deployments that request more resources than is available for scheduling will fail with a MinimumReplicasUnavailable error. The deployment succeeds when the required resources become available. Pods remain in the Pending state until the required resources are available. Verify that the expected allocated resources are listed for the node. Identify the node that is running the deployment pod by running the following command: USD oc get pods -n openshift-numaresources -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none> Run the following command with the name of that node that is running the deployment pod. USD oc describe noderesourcetopologies.topology.node.k8s.io worker-1 Example output ... Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node 1 The Available capacity is reduced because of the resources that have been allocated to the guaranteed pod. Resources consumed by guaranteed pods are subtracted from the available node resources listed under noderesourcetopologies.topology.node.k8s.io . Resource allocations for pods with a Best-effort or Burstable quality of service ( qosClass ) are not reflected in the NUMA node resources under noderesourcetopologies.topology.node.k8s.io . If a pod's consumed resources are not reflected in the node resource calculation, verify that the pod has qosClass of Guaranteed by running the following command: USD oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath="{ .status.qosClass }" Example output Guaranteed 8.4. Optional: Configuring polling operations for NUMA resources updates The daemons controlled by the NUMA Resources Operator in their nodeGroup poll resources to retrieve updates about available NUMA resources. You can fine-tune polling operations for these daemons by configuring the spec.nodeGroups specification in the NUMAResourcesOperator custom resource (CR). This provides advanced control of polling operations. Configure these specifications to improve scheduling behaviour and troubleshoot suboptimal scheduling decisions. 
The configuration options are the following: infoRefreshMode : Determines the trigger condition for polling the kubelet. The NUMA Resources Operator reports the resulting information to the API server. infoRefreshPeriod : Determines the duration between polling updates. podsFingerprinting : Determines if point-in-time information for the current set of pods running on a node is exposed in polling updates. Note podsFingerprinting is enabled by default. podsFingerprinting is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler CR. The cacheResyncPeriod specification helps to report more exact resource availability by monitoring pending resources on nodes. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator. Procedure Configure the spec.nodeGroups specification in your NUMAResourcesOperator CR: apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker 1 Valid values are Periodic , Events , PeriodicAndEvents . Use Periodic to poll the kubelet at intervals that you define in infoRefreshPeriod . Use Events to poll the kubelet at every pod lifecycle event. Use PeriodicAndEvents to enable both methods. 2 Define the polling interval for Periodic or PeriodicAndEvents refresh modes. The field is ignored if the refresh mode is Events . 3 Valid values are Enabled , Disabled , and EnabledExclusiveResources . Setting to Enabled is a requirement for the cacheResyncPeriod specification in the NUMAResourcesScheduler . Verification After you deploy the NUMA Resources Operator, verify that the node group configurations were applied by running the following command: USD oc get numaresop numaresourcesoperator -o json | jq '.status' Example output ... "config": { "infoRefreshMode": "Periodic", "infoRefreshPeriod": "10s", "podsFingerprinting": "Enabled" }, "name": "worker" ... 8.5. Troubleshooting NUMA-aware scheduling To troubleshoot common problems with NUMA-aware pod scheduling, perform the following steps. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. Procedure Verify that the noderesourcetopologies CRD is deployed in the cluster by running the following command: USD oc get crd | grep noderesourcetopologies Example output NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z Check that the NUMA-aware scheduler name matches the name specified in your NUMA-aware workloads by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName' Example output topo-aware-scheduler Verify that NUMA-aware schedulable nodes have the noderesourcetopologies CR applied to them. Run the following command: USD oc get noderesourcetopologies.topology.node.k8s.io Example output NAME AGE compute-0.example.com 17h compute-1.example.com 17h Note The number of nodes should equal the number of worker nodes that are configured by the machine config pool ( mcp ) worker definition. 
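To make the comparison described in the preceding note, you can count both sides from the command line. This is an illustrative sketch rather than part of the original procedure:

# Number of nodes managed by the worker machine config pool
oc get mcp worker -o jsonpath='{.status.machineCount}{"\n"}'

# Number of nodes that expose a NodeResourceTopology object
oc get noderesourcetopologies.topology.node.k8s.io --no-headers | wc -l

The two values should match. A mismatch usually indicates that the resource topology exporter daemon set is not running on every node in the machine config pool selected by the NUMAResourcesOperator CR.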
Verify the NUMA zone granularity for all schedulable nodes by running the following command: USD oc get noderesourcetopologies.topology.node.k8s.io -o yaml Example output apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:38Z" generation: 63760 name: worker-0 resourceVersion: "8450223" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262352048128" available: "262352048128" capacity: "270107316224" name: memory - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269231067136" available: "269231067136" capacity: "270573244416" name: memory - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: "2022-06-16T08:55:37Z" generation: 62061 name: worker-1 resourceVersion: "8450129" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: "38" available: "38" capacity: "40" name: cpu - allocatable: "6442450944" available: "6442450944" capacity: "6442450944" name: hugepages-1Gi - allocatable: "134217728" available: "134217728" capacity: "134217728" name: hugepages-2Mi - allocatable: "262391033856" available: "262391033856" capacity: "270146301952" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: "40" available: "40" capacity: "40" name: cpu - allocatable: "1073741824" available: "1073741824" capacity: "1073741824" name: hugepages-1Gi - allocatable: "268435456" available: "268435456" capacity: "268435456" name: hugepages-2Mi - allocatable: "269192085504" available: "269192085504" capacity: "270534262784" name: memory type: Node kind: List metadata: resourceVersion: "" selfLink: "" 1 Each stanza under zones describes the resources for a single NUMA zone. 2 resources describes the current state of the NUMA zone resources. Check that resources listed under items.zones.resources.available correspond to the exclusive NUMA zone resources allocated to each guaranteed pod. 8.5.1. Reporting more exact resource availability Enable the cacheResyncPeriod specification to help the NUMA Resources Operator report more exact resource availability by monitoring pending resources on nodes and synchronizing this information in the scheduler cache at a defined interval. This also helps to minimize Topology Affinity Error errors because of sub-optimal scheduling decisions. The lower the interval, the greater the network load. The cacheResyncPeriod specification is disabled by default. Prerequisites Install the OpenShift CLI ( oc ). 
Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 92m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-cacheresync.yaml . This example sets the cacheResyncPeriod specification to 5s : apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.12" cacheResyncPeriod: "5s" 1 1 Enter an interval value in seconds for synchronization of the scheduler cache. A value of 5s is typical for most implementations. Create the updated NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-cacheresync.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check the logs for the scheduler: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 8.5.2. Checking the NUMA-aware scheduler logs Troubleshoot problems with the NUMA-aware scheduler by reviewing the logs. If required, you can increase the scheduler log level by modifying the spec.logLevel field of the NUMAResourcesScheduler resource.
Acceptable values are Normal , Debug , and Trace , with Trace being the most verbose option. Note To change the log level of the secondary scheduler, delete the running scheduler resource and re-deploy it with the changed log level. The scheduler is unavailable for scheduling new workloads during this downtime. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Delete the currently running NUMAResourcesScheduler resource: Get the active NUMAResourcesScheduler by running the following command: USD oc get NUMAResourcesScheduler Example output NAME AGE numaresourcesscheduler 90m Delete the secondary scheduler resource by running the following command: USD oc delete NUMAResourcesScheduler numaresourcesscheduler Example output numaresourcesscheduler.nodetopology.openshift.io "numaresourcesscheduler" deleted Save the following YAML in the file nro-scheduler-debug.yaml . This example changes the log level to Debug : apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: "registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.12" logLevel: Debug Create the updated Debug logging NUMAResourcesScheduler resource by running the following command: USD oc create -f nro-scheduler-debug.yaml Example output numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created Verification steps Check that the NUMA-aware scheduler was successfully deployed: Run the following command to check that the CRD is created successfully: USD oc get crd | grep numaresourcesschedulers Example output NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z Check that the new custom scheduler is available by running the following command: USD oc get numaresourcesschedulers.nodetopology.openshift.io Example output NAME AGE numaresourcesscheduler 3h26m Check that the logs for the scheduler shows the increased log level: Get the list of pods running in the openshift-numaresources namespace by running the following command: USD oc get pods -n openshift-numaresources Example output NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m Get the logs for the secondary scheduler pod by running the following command: USD oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources Example output ... I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] "Add event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" I0223 11:05:53.461016 1 eventhandlers.go:244] "Delete event for scheduled pod" pod="openshift-marketplace/certified-operators-thtvq" 8.5.3. 
Troubleshooting the resource topology exporter Troubleshoot noderesourcetopologies objects where unexpected results are occurring by inspecting the corresponding resource-topology-exporter logs. Note It is recommended that NUMA resource topology exporter instances in the cluster are named for nodes they refer to. For example, a worker node with the name worker should have a corresponding noderesourcetopologies object called worker . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Get the daemonsets managed by the NUMA Resources Operator. Each daemonset has a corresponding nodeGroup in the NUMAResourcesOperator CR. Run the following command: USD oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath="{.status.daemonsets[0]}" Example output {"name":"numaresourcesoperator-worker","namespace":"openshift-numaresources"} Get the label for the daemonset of interest using the value for name from the step: USD oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath="{.spec.selector.matchLabels}" Example output {"name":"resource-topology"} Get the pods using the resource-topology label by running the following command: USD oc get pods -n openshift-numaresources -l name=resource-topology -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com Examine the logs of the resource-topology-exporter container running on the worker pod that corresponds to the node you are troubleshooting. Run the following command: USD oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c Example output I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: "0": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved "0-1" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online "0-103" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable "2-103" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi 8.5.4. Correcting a missing resource topology exporter config map If you install the NUMA Resources Operator in a cluster with misconfigured cluster settings, in some circumstances, the Operator is shown as active but the logs of the resource topology exporter (RTE) daemon set pods show that the configuration for the RTE is missing, for example: Info: couldn't find configuration in "/etc/resource-topology-exporter/config.yaml" This log message indicates that the kubeletconfig with the required configuration was not properly applied in the cluster, resulting in a missing RTE configmap . For example, the following cluster is missing a numaresourcesoperator-worker configmap custom resource (CR): USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h In a correctly configured cluster, oc get configmap also returns a numaresourcesoperator-worker configmap CR. Prerequisites Install the OpenShift Container Platform CLI ( oc ). Log in as a user with cluster-admin privileges. Install the NUMA Resources Operator and deploy the NUMA-aware secondary scheduler. 
Procedure Compare the values for spec.machineConfigPoolSelector.matchLabels in kubeletconfig and metadata.labels in the MachineConfigPool ( mcp ) worker CR using the following commands: Check the kubeletconfig labels by running the following command: USD oc get kubeletconfig -o yaml Example output machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled Check the mcp labels by running the following command: USD oc get mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" The cnf-worker-tuning: enabled label is not present in the MachineConfigPool object. Edit the MachineConfigPool CR to include the missing label, for example: USD oc edit mcp worker -o yaml Example output labels: machineconfiguration.openshift.io/mco-built-in: "" pools.operator.machineconfiguration.openshift.io/worker: "" cnf-worker-tuning: enabled Apply the label changes and wait for the cluster to apply the updated configuration. Run the following command: Verification Check that the missing numaresourcesoperator-worker configmap CR is applied: USD oc get configmap Example output NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h
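As an alternative to editing the MachineConfigPool interactively, the missing label can be added with a one-line command. This is a sketch rather than part of the documented procedure; the worker pool name and the cnf-worker-tuning=enabled label are taken from the example above, and the wait step shown is one common way to confirm that the pool has finished rolling out:

# Add the label that the kubeletconfig machineConfigPoolSelector expects
oc label mcp worker cnf-worker-tuning=enabled

# Wait for the worker pool to finish applying the updated configuration
oc wait mcp/worker --for=condition=Updated=True --timeout=30m

After the pool settles, re-run oc get configmap to confirm that the numaresourcesoperator-worker config map has been created.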
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources", "oc create -f nro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources", "oc create -f nro-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"4.12\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nro-sub.yaml", "oc get csv -n openshift-numaresources", "NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.12.2 numaresources-operator 4.12.2 Succeeded", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: labels: cnf-worker-tuning: enabled machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" name: worker spec: machineConfigSelector: matchLabels: machineconfiguration.openshift.io/role: worker nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"", "oc create -f nro-machineconfig.yaml", "apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1", "oc create -f nrop.yaml", "oc get numaresourcesoperators.nodetopology.openshift.io", "NAME AGE numaresourcesoperator 27s", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s", "apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-rhel9:v4.12\"", "oc create -f nro-scheduler.yaml", "oc get all -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7d9d84c58d-qk2mr 1/1 Running 0 12m pod/numaresourcesoperator-worker-7d96r 2/2 Running 0 97s pod/numaresourcesoperator-worker-crsht 2/2 Running 0 97s pod/numaresourcesoperator-worker-jp9mw 2/2 Running 0 97s pod/secondary-scheduler-847cb74f84-9whlm 1/1 Running 0 10m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 3 3 3 3 3 node-role.kubernetes.io/worker= 98s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 12m deployment.apps/secondary-scheduler 1/1 1 1 10m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7d9d84c58d 1 1 1 12m replicaset.apps/secondary-scheduler-847cb74f84 1 1 1 10m", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"3\" reserved: 0-2 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 nodeSelector: node-role.kubernetes.io/worker: \"\" numa: topologyPolicy: single-numa-node 2 realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-tuning spec: machineConfigPoolSelector: matchLabels: 
pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 kubeletConfig: cpuManagerPolicy: \"static\" 2 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" 3 memoryManagerPolicy: \"Static\" 4 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 5", "oc create -f nro-kubeletconfig.yaml", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", "\"topo-aware-scheduler\"", "apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"", "oc create -f nro-deployment.yaml", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numa-deployment-1-6c4f5bdb84-wgn6g 2/2 Running 0 5m2s numaresources-controller-manager-7d9d84c58d-4v65j 1/1 Running 0 18m numaresourcesoperator-worker-7d96r 2/2 Running 4 43m numaresourcesoperator-worker-crsht 2/2 Running 2 43m numaresourcesoperator-worker-jp9mw 2/2 Running 2 43m secondary-scheduler-847cb74f84-fpncj 1/1 Running 0 18m", "oc describe pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m45s topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-6c4f5bdb84-wgn6g to worker-1", "oc get pods -n openshift-numaresources -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES numa-deployment-1-6c4f5bdb84-wgn6g 0/2 Running 0 82m 10.128.2.50 worker-1 <none> <none>", "oc describe noderesourcetopologies.topology.node.k8s.io worker-1", "Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node", "oc get pod numa-deployment-1-6c4f5bdb84-wgn6g -n openshift-numaresources -o jsonpath=\"{ .status.qosClass }\"", "Guaranteed", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - config: infoRefreshMode: Periodic 1 infoRefreshPeriod: 10s 2 podsFingerprinting: Enabled 3 name: worker", "oc get numaresop numaresourcesoperator -o json | jq '.status'", "\"config\": { \"infoRefreshMode\": \"Periodic\", \"infoRefreshPeriod\": \"10s\", \"podsFingerprinting\": \"Enabled\" }, \"name\": \"worker\"", "oc get crd | grep noderesourcetopologies", "NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z", "oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'", 
"topo-aware-scheduler", "oc get noderesourcetopologies.topology.node.k8s.io", "NAME AGE compute-0.example.com 17h compute-1.example.com 17h", "oc get noderesourcetopologies.topology.node.k8s.io -o yaml", "apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: \"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 92m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.12\" cacheResyncPeriod: \"5s\" 1", "oc create -f nro-scheduler-cacheresync.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get 
numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get NUMAResourcesScheduler", "NAME AGE numaresourcesscheduler 90m", "oc delete NUMAResourcesScheduler numaresourcesscheduler", "numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted", "apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.12\" logLevel: Debug", "oc create -f nro-scheduler-debug.yaml", "numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created", "oc get crd | grep numaresourcesschedulers", "NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z", "oc get numaresourcesschedulers.nodetopology.openshift.io", "NAME AGE numaresourcesscheduler 3h26m", "oc get pods -n openshift-numaresources", "NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m", "oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources", "I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"", "oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"", 
"{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}", "oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"", "{\"name\":\"resource-topology\"}", "oc get pods -n openshift-numaresources -l name=resource-topology -o wide", "NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com", "oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c", "I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi", "Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h", "oc get kubeletconfig -o yaml", "machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled", "oc get mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"", "oc edit mcp worker -o yaml", "labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled", "oc get configmap", "NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/scalability_and_performance/cnf-numa-aware-scheduling
15.2. BIND This section covers BIND (Berkeley Internet Name Domain), the DNS server included in Red Hat Enterprise Linux. It focuses on the structure of its configuration files, and describes how to administer it both locally and remotely. 15.2.1. Empty Zones BIND configures a number of " empty zones " to prevent recursive servers from sending unnecessary queries to Internet servers that cannot handle them (thus creating delays and SERVFAIL responses to clients who query for them). These empty zones ensure that immediate and authoritative NXDOMAIN responses are returned instead. The configuration option empty-zones-enable controls whether or not empty zones are created, whilst the option disable-empty-zone can be used in addition to disable one or more empty zones from the list of default prefixes that would be used. The number of empty zones created for RFC 1918 prefixes has been increased, and users of BIND 9.9 and above will see the RFC 1918 empty zones both when empty-zones-enable is unspecified (defaults to yes ), and when it is explicitly set to yes . 15.2.2. Configuring the named Service When the named service is started, it reads the configuration from the files as described in Table 15.1, "The named Service Configuration Files" . Table 15.1. The named Service Configuration Files Path Description /etc/named.conf The main configuration file. /etc/named/ An auxiliary directory for configuration files that are included in the main configuration file. The configuration file consists of a collection of statements with nested options surrounded by opening and closing curly brackets ( { and } ). Note that when editing the file, you have to be careful not to make any syntax error, otherwise the named service will not start. A typical /etc/named.conf file is organized as follows: Note If you have installed the bind-chroot package, the BIND service will run in the chroot environment. In that case, the initialization script will mount the above configuration files using the mount --bind command, so that you can manage the configuration outside this environment. There is no need to copy anything into the /var/named/chroot/ directory because it is mounted automatically. This simplifies maintenance since you do not need to take any special care of BIND configuration files if it is run in a chroot environment. You can organize everything as you would with BIND not running in a chroot environment. The following directories are automatically mounted into the /var/named/chroot/ directory if the corresponding mount point directories underneath /var/named/chroot/ are empty: /etc/named /etc/pki/dnssec-keys /run/named /var/named /usr/lib64/bind or /usr/lib/bind (architecture dependent). The following files are also mounted if the target file does not exist in /var/named/chroot/ : /etc/named.conf /etc/rndc.conf /etc/rndc.key /etc/named.rfc1912.zones /etc/named.dnssec.keys /etc/named.iscdlv.key /etc/named.root.key Important Editing files which have been mounted in a chroot environment requires creating a backup copy and then editing the original file. Alternatively, use an editor with " edit-a-copy " mode disabled. For example, to edit the BIND's configuration file, /etc/named.conf , with Vim while it is running in a chroot environment, issue the following command as root : 15.2.2.1. 
Installing BIND in a chroot Environment To install BIND to run in a chroot environment, issue the following command as root : To enable the named-chroot service, first check if the named service is running by issuing the following command: If it is running, it must be disabled. To disable named , issue the following commands as root : Then, to enable the named-chroot service, issue the following commands as root : To check the status of the named-chroot service, issue the following command as root : 15.2.2.2. Common Statement Types The following types of statements are commonly used in /etc/named.conf : acl The acl (Access Control List) statement allows you to define groups of hosts, so that they can be permitted or denied access to the nameserver. It takes the following form: The acl-name statement name is the name of the access control list, and the match-element option is usually an individual IP address (such as 10.0.1.1 ) or a Classless Inter-Domain Routing ( CIDR ) network notation (for example, 10.0.1.0/24 ). For a list of already defined keywords, see Table 15.2, "Predefined Access Control Lists" . Table 15.2. Predefined Access Control Lists Keyword Description any Matches every IP address. localhost Matches any IP address that is in use by the local system. localnets Matches any IP address on any network to which the local system is connected. none Does not match any IP address. The acl statement can be especially useful in conjunction with other statements such as options . Example 15.2, "Using acl in Conjunction with Options" defines two access control lists, black-hats and red-hats , and adds black-hats on the blacklist while granting red-hats normal access. Example 15.2. Using acl in Conjunction with Options include The include statement allows you to include files in the /etc/named.conf , so that potentially sensitive data can be placed in a separate file with restricted permissions. It takes the following form: The file-name statement name is an absolute path to a file. Example 15.3. Including a File to /etc/named.conf options The options statement allows you to define global server configuration options as well as to set defaults for other statements. It can be used to specify the location of the named working directory, the types of queries allowed, and much more. It takes the following form: For a list of frequently used option directives, see Table 15.3, "Commonly Used Configuration Options" below. Table 15.3. Commonly Used Configuration Options Option Description allow-query Specifies which hosts are allowed to query the nameserver for authoritative resource records. It accepts an access control list, a collection of IP addresses, or networks in the CIDR notation. All hosts are allowed by default. allow-query-cache Specifies which hosts are allowed to query the nameserver for non-authoritative data such as recursive queries. Only localhost and localnets are allowed by default. blackhole Specifies which hosts are not allowed to query the nameserver. This option should be used when a particular host or network floods the server with requests. The default option is none . directory Specifies a working directory for the named service. The default option is /var/named/ . disable-empty-zone Used to disable one or more empty zones from the list of default prefixes that would be used. Can be specified in the options statement and also in view statements. It can be used multiple times. dnssec-enable Specifies whether to return DNSSEC related resource records. 
The default option is yes . dnssec-validation Specifies whether to prove that resource records are authentic through DNSSEC. The default option is yes . empty-zones-enable Controls whether or not empty zones are created. Can be specified only in the options statement. forwarders Specifies a list of valid IP addresses for nameservers to which the requests should be forwarded for resolution. forward Specifies the behavior of the forwarders directive. It accepts the following options: first - The server will query the nameservers listed in the forwarders directive before attempting to resolve the name on its own. only - When unable to query the nameservers listed in the forwarders directive, the server will not attempt to resolve the name on its own. listen-on Specifies the IPv4 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv4 interfaces are used by default. listen-on-v6 Specifies the IPv6 network interface on which to listen for queries. On a DNS server that also acts as a gateway, you can use this option to answer queries originating from a single network only. All IPv6 interfaces are used by default. max-cache-size Specifies the maximum amount of memory to be used for server caches. When the limit is reached, the server causes records to expire prematurely so that the limit is not exceeded. In a server with multiple views, the limit applies separately to the cache of each view. The default option is 32M . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. pid-file Specifies the location of the process ID file created by the named service. recursion Specifies whether to act as a recursive server. The default option is yes . statistics-file Specifies an alternate location for statistics files. The /var/named/named.stats file is used by default. Note The directory used by named for runtime data has been moved from the BIND default location, /var/run/named/ , to a new location /run/named/ . As a result, the PID file has been moved from the default location /var/run/named/named.pid to the new location /run/named/named.pid . In addition, the session-key file has been moved to /run/named/session.key . These locations need to be specified by statements in the options section. See Example 15.4, "Using the options Statement" . Important To prevent distributed denial of service (DDoS) attacks, it is recommended that you use the allow-query-cache option to restrict recursive DNS services for a particular subset of clients only. See the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" , and the named.conf manual page for a complete list of available options. Example 15.4. Using the options Statement zone The zone statement allows you to define the characteristics of a zone, such as the location of its configuration file and zone-specific options, and can be used to override the global options statements. 
It takes the following form: The zone-name attribute is the name of the zone, zone-class is the optional class of the zone, and option is a zone statement option as described in Table 15.4, "Commonly Used Options in Zone Statements" . The zone-name attribute is particularly important, as it is the default value assigned for the USDORIGIN directive used within the corresponding zone file located in the /var/named/ directory. The named daemon appends the name of the zone to any non-fully qualified domain name listed in the zone file. For example, if a zone statement defines the namespace for example.com , use example.com as the zone-name so that it is placed at the end of host names within the example.com zone file. For more information about zone files, see Section 15.2.3, "Editing Zone Files" . Table 15.4. Commonly Used Options in Zone Statements Option Description allow-query Specifies which clients are allowed to request information about this zone. This option overrides global allow-query option. All query requests are allowed by default. allow-transfer Specifies which secondary servers are allowed to request a transfer of the zone's information. All transfer requests are allowed by default. allow-update Specifies which hosts are allowed to dynamically update information in their zone. The default option is to deny all dynamic update requests. Note that you should be careful when allowing hosts to update information about their zone. Do not set IP addresses in this option unless the server is in the trusted network. Instead, use TSIG key as described in Section 15.2.6.3, "Transaction SIGnatures (TSIG)" . file Specifies the name of the file in the named working directory that contains the zone's configuration data. masters Specifies from which IP addresses to request authoritative zone information. This option is used only if the zone is defined as type slave . notify Specifies whether to notify the secondary nameservers when a zone is updated. It accepts the following options: yes - The server will notify all secondary nameservers. no - The server will not notify any secondary nameserver. master-only - The server will notify primary server for the zone only. explicit - The server will notify only the secondary servers that are specified in the also-notify list within a zone statement. type Specifies the zone type. It accepts the following options: delegation-only - Enforces the delegation status of infrastructure zones such as COM, NET, or ORG. Any answer that is received without an explicit or implicit delegation is treated as NXDOMAIN . This option is only applicable in TLDs (Top-Level Domain) or root zone files used in recursive or caching implementations. forward - Forwards all requests for information about this zone to other nameservers. hint - A special type of zone used to point to the root nameservers which resolve queries when a zone is not otherwise known. No configuration beyond the default is necessary with a hint zone. master - Designates the nameserver as authoritative for this zone. A zone should be set as the master if the zone's configuration files reside on the system. slave - Designates the nameserver as a secondary server for this zone. Primary server is specified in the masters directive. Most changes to the /etc/named.conf file of a primary or secondary nameserver involve adding, modifying, or deleting zone statements, and only a small subset of zone statement options is usually needed for a nameserver to work efficiently. 
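For illustration, a minimal pair of zone statements built from the options above might look like the following sketch; the example.com name, the 192.168.0.1 and 192.168.0.2 addresses, and the file names mirror the primary and secondary examples discussed next:

zone "example.com" IN {
    type master;
    file "example.com.zone";
    allow-transfer { 192.168.0.2; };
};

zone "example.com" IN {
    type slave;
    file "slaves/example.com.zone";
    masters { 192.168.0.1; };
};

The first statement belongs on the primary nameserver and the second on the secondary nameserver; they would not appear together in a single /etc/named.conf file.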
In Example 15.5, "A Zone Statement for a Primary nameserver" , the zone is identified as example.com , the type is set to master , and the named service is instructed to read the /var/named/example.com.zone file. It also allows only a secondary nameserver ( 192.168.0.2 ) to transfer the zone. Example 15.5. A Zone Statement for a Primary nameserver A secondary server's zone statement is slightly different. The type is set to slave , and the masters directive is telling named the IP address of the primary server. In Example 15.6, "A Zone Statement for a Secondary nameserver" , the named service is configured to query the primary server at the 192.168.0.1 IP address for information about the example.com zone. The received information is then saved to the /var/named/slaves/example.com.zone file. Note that you have to put all secondary zones in the /var/named/slaves/ directory, otherwise the service will fail to transfer the zone. Example 15.6. A Zone Statement for a Secondary nameserver 15.2.2.3. Other Statement Types The following types of statements are less commonly used in /etc/named.conf : controls The controls statement allows you to configure various security requirements necessary to use the rndc command to administer the named service. See Section 15.2.4, "Using the rndc Utility" for more information on the rndc utility and its usage. key The key statement allows you to define a particular key by name. Keys are used to authenticate various actions, such as secure updates or the use of the rndc command. Two options are used with key : algorithm algorithm-name - The type of algorithm to be used (for example, hmac-md5 ). secret " key-value " - The encrypted key. See Section 15.2.4, "Using the rndc Utility" for more information on the rndc utility and its usage. logging The logging statement allows you to use multiple types of logs, so called channels . By using the channel option within the statement, you can construct a customized type of log with its own file name ( file ), size limit ( size ), version number ( version ), and level of importance ( severity ). Once a customized channel is defined, a category option is used to categorize the channel and begin logging when the named service is restarted. By default, named sends standard messages to the rsyslog daemon, which places them in /var/log/messages . Several standard channels are built into BIND with various severity levels, such as default_syslog (which handles informational logging messages) and default_debug (which specifically handles debugging messages). A default category, called default , uses the built-in channels to do normal logging without any special configuration. Customizing the logging process can be a very detailed process and is beyond the scope of this chapter. For information on creating custom BIND logs, see the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" . server The server statement allows you to specify options that affect how the named service should respond to remote nameservers, especially with regard to notifications and zone transfers. The transfer-format option controls the number of resource records that are sent with each message. It can be either one-answer (only one resource record), or many-answers (multiple resource records). Note that while the many-answers option is more efficient, it is not supported by older versions of BIND. trusted-keys The trusted-keys statement allows you to specify assorted public keys used for secure DNS (DNSSEC). 
See Section 15.2.6.4, "DNS Security Extensions (DNSSEC)" for more information on this topic. view The view statement allows you to create special views depending upon which network the host querying the nameserver is on. This allows some hosts to receive one answer regarding a zone while other hosts receive totally different information. Alternatively, certain zones may only be made available to particular trusted hosts while non-trusted hosts can only make queries for other zones. Multiple views can be used as long as their names are unique. The match-clients option allows you to specify the IP addresses that apply to a particular view. If the options statement is used within a view, it overrides the already configured global options. Finally, most view statements contain multiple zone statements that apply to the match-clients list. Note that the order in which the view statements are listed is important, as the first statement that matches a particular client's IP address is used. For more information on this topic, see Section 15.2.6.1, "Multiple Views" . 15.2.2.4. Comment Tags Additionally to statements, the /etc/named.conf file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to a user. The following are valid comment tags: // Any text after the // characters to the end of the line is considered a comment. For example: # Any text after the # character to the end of the line is considered a comment. For example: /* and */ Any block of text enclosed in /* and */ is considered a comment. For example: 15.2.3. Editing Zone Files As outlined in Section 15.1.1, "Name server Zones" , zone files contain information about a namespace. They are stored in the named working directory located in /var/named/ by default. Each zone file is named according to the file option in the zone statement, usually in a way that relates to the domain in and identifies the file as containing zone data, such as example.com.zone . Table 15.5. The named Service Zone Files Path Description /var/named/ The working directory for the named service. The nameserver is not allowed to write to this directory. /var/named/slaves/ The directory for secondary zones. This directory is writable by the named service. /var/named/dynamic/ The directory for other files, such as dynamic DNS (DDNS) zones or managed DNSSEC keys. This directory is writable by the named service. /var/named/data/ The directory for various statistics and debugging files. This directory is writable by the named service. A zone file consists of directives and resource records. Directives tell the nameserver to perform tasks or apply special settings to the zone, resource records define the parameters of the zone and assign identities to individual hosts. While the directives are optional, the resource records are required in order to provide name service to a zone. All directives and resource records should be entered on individual lines. 15.2.3.1. Common Directives Directives begin with the dollar sign character ( USD ) followed by the name of the directive, and usually appear at the top of the file. The following directives are commonly used in zone files: USDINCLUDE The USDINCLUDE directive allows you to include another file at the place where it appears, so that other zone settings can be stored in a separate zone file. Example 15.7. 
Using the USDINCLUDE Directive USDORIGIN The USDORIGIN directive allows you to append the domain name to unqualified records, such as those with the host name only. Note that the use of this directive is not necessary if the zone is specified in /etc/named.conf , since the zone name is used by default. In Example 15.8, "Using the USDORIGIN Directive" , any names used in resource records that do not end in a trailing period (the . character) are appended with example.com . Example 15.8. Using the USDORIGIN Directive USDTTL The USDTTL directive allows you to set the default Time to Live (TTL) value for the zone, that is, how long is a zone record valid. Each resource record can contain its own TTL value, which overrides this directive. Increasing this value allows remote nameservers to cache the zone information for a longer period of time, reducing the number of queries for the zone and lengthening the amount of time required to propagate resource record changes. Example 15.9. Using the USDTTL Directive 15.2.3.2. Common Resource Records The following resource records are commonly used in zone files: A The Address record specifies an IP address to be assigned to a name. It takes the following form: If the hostname value is omitted, the record will point to the last specified hostname . In Example 15.10, "Using the A Resource Record" , the requests for server1.example.com are pointed to 10.0.1.3 or 10.0.1.5 . Example 15.10. Using the A Resource Record CNAME The Canonical Name record maps one name to another. Because of this, this type of record is sometimes referred to as an alias record . It takes the following form: CNAME records are most commonly used to point to services that use a common naming scheme, such as www for Web servers. However, there are multiple restrictions for their usage: CNAME records should not point to other CNAME records. This is mainly to avoid possible infinite loops. CNAME records should not contain other resource record types (such as A, NS, MX, and so on). The only exception are DNSSEC related records (RRSIG, NSEC, and so on) when the zone is signed. Other resource records that point to the fully qualified domain name (FQDN) of a host (NS, MX, PTR) should not point to a CNAME record. In Example 15.11, "Using the CNAME Resource Record" , the A record binds a host name to an IP address, while the CNAME record points the commonly used www host name to it. Example 15.11. Using the CNAME Resource Record MX The Mail Exchange record specifies where the mail sent to a particular namespace controlled by this zone should go. It takes the following form: The email-server-name is a fully qualified domain name (FQDN). The preference-value allows numerical ranking of the email servers for a namespace, giving preference to some email systems over others. The MX resource record with the lowest preference-value is preferred over the others. However, multiple email servers can possess the same value to distribute email traffic evenly among them. In Example 15.12, "Using the MX Resource Record" , the first mail.example.com email server is preferred to the mail2.example.com email server when receiving email destined for the example.com domain. Example 15.12. Using the MX Resource Record NS The Nameserver record announces authoritative nameservers for a particular zone. It takes the following form: The nameserver-name should be a fully qualified domain name (FQDN). 
Note that when two nameservers are listed as authoritative for the domain, it is not important whether these nameservers are secondary nameservers, or if one of them is a primary server. They are both still considered authoritative. Example 15.13. Using the NS Resource Record PTR The Pointer record points to another part of the namespace. It takes the following form: The last-IP-digit directive is the last number in an IP address, and the FQDN-of-system is a fully qualified domain name (FQDN). PTR records are primarily used for reverse name resolution, as they point IP addresses back to a particular name. See Section 15.2.3.4.2, "A Reverse Name Resolution Zone File" for examples of PTR records in use. SOA The Start of Authority record announces important authoritative information about a namespace to the nameserver. Located after the directives, it is the first resource record in a zone file. It takes the following form: The directives are as follows: The @ symbol places the USDORIGIN directive (or the zone's name if the USDORIGIN directive is not set) as the namespace being defined by this SOA resource record. The primary-name-server directive is the host name of the primary nameserver that is authoritative for this domain. The hostmaster-email directive is the email of the person to contact about the namespace. The serial-number directive is a numerical value incremented every time the zone file is altered to indicate it is time for the named service to reload the zone. The time-to-refresh directive is the numerical value secondary nameservers use to determine how long to wait before asking the primary nameserver if any changes have been made to the zone. The time-to-retry directive is a numerical value used by secondary nameservers to determine the length of time to wait before issuing a refresh request in the event that the primary nameserver is not answering. If the primary server has not replied to a refresh request before the amount of time specified in the time-to-expire directive elapses, the secondary servers stop responding as an authority for requests concerning that namespace. In BIND 4 and 8, the minimum-TTL directive is the amount of time other nameservers cache the zone's information. In BIND 9, it defines how long negative answers are cached for. Caching of negative answers can be set to a maximum of 3 hours ( 3H ). When configuring BIND, all times are specified in seconds. However, it is possible to use abbreviations when specifying units of time other than seconds, such as minutes ( M ), hours ( H ), days ( D ), and weeks ( W ). Table 15.6, "Seconds compared to other time units" shows an amount of time in seconds and the equivalent time in another format. Table 15.6. Seconds compared to other time units Seconds Other Time Units 60 1M 1800 30M 3600 1H 10800 3H 21600 6H 43200 12H 86400 1D 259200 3D 604800 1W 31536000 365D Example 15.14. Using the SOA Resource Record 15.2.3.3. Comment Tags Additionally to resource records and directives, a zone file can also contain comments. Comments are ignored by the named service, but can prove useful when providing additional information to the user. Any text after the semicolon character to the end of the line is considered a comment. For example: 15.2.3.4. Example Usage The following examples show the basic usage of zone files. 15.2.3.4.1. A Simple Zone File Example 15.15, "A simple zone file" demonstrates the use of standard directives and SOA values. Example 15.15. 
A simple zone file In this example, the authoritative nameservers are set as dns1.example.com and dns2.example.com , and are tied to the 10.0.1.1 and 10.0.1.2 IP addresses respectively using the A record. The email servers configured with the MX records point to mail and mail2 through A records. Since these names do not end in a trailing period, the $ORIGIN domain is placed after them, expanding them to mail.example.com and mail2.example.com . Services available at the standard names, such as www.example.com ( WWW ), are pointed at the appropriate servers using the CNAME record. This zone file would be called into service with a zone statement in the /etc/named.conf similar to the following: 15.2.3.4.2. A Reverse Name Resolution Zone File A reverse name resolution zone file is used to translate an IP address in a particular namespace into a fully qualified domain name (FQDN). It looks very similar to a standard zone file, except that the PTR resource records are used to link the IP addresses to a fully qualified domain name as shown in Example 15.16, "A reverse name resolution zone file" . Example 15.16. A reverse name resolution zone file In this example, IP addresses 10.0.1.1 through 10.0.1.6 are pointed to the corresponding fully qualified domain name. This zone file would be called into service with a zone statement in the /etc/named.conf file similar to the following: There is very little difference between this example and a standard zone statement, except for the zone name. Note that a reverse name resolution zone requires the first three blocks of the IP address reversed followed by .in-addr.arpa . This allows the single block of IP numbers used in the reverse name resolution zone file to be associated with the zone. 15.2.4. Using the rndc Utility The rndc utility is a command-line tool that allows you to administer the named service, both locally and from a remote machine. Its usage is as follows: 15.2.4.1. Configuring the Utility To prevent unauthorized access to the service, named must be configured to listen on the selected port ( 953 by default), and an identical key must be used by both the service and the rndc utility. Table 15.7. Relevant files Path Description /etc/named.conf The default configuration file for the named service. /etc/rndc.conf The default configuration file for the rndc utility. /etc/rndc.key The default key location. The rndc configuration is located in /etc/rndc.conf . If the file does not exist, the utility will use the key located in /etc/rndc.key , which was generated automatically during the installation process using the rndc-confgen -a command. The named service is configured using the controls statement in the /etc/named.conf configuration file as described in Section 15.2.2.3, "Other Statement Types" . Unless this statement is present, only the connections from the loopback address ( 127.0.0.1 ) will be allowed, and the key located in /etc/rndc.key will be used. For more information on this topic, see manual pages and the BIND 9 Administrator Reference Manual listed in Section 15.2.8, "Additional Resources" . Important To prevent unprivileged users from sending control commands to the service, make sure only root is allowed to read the /etc/rndc.key file: 15.2.4.2. Checking the Service Status To check the current status of the named service, use the following command: 15.2.4.3.
Reloading the Configuration and Zones To reload both the configuration file and zones, type the following at a shell prompt: This will reload the zones while keeping all previously cached responses, so that you can make changes to the zone files without losing all stored name resolutions. To reload a single zone, specify its name after the reload command, for example: Finally, to reload the configuration file and newly added zones only, type: Note If you intend to manually modify a zone that uses Dynamic DNS (DDNS), make sure you run the freeze command first: Once you are finished, run the thaw command to allow the DDNS again and reload the zone: 15.2.4.4. Updating Zone Keys To update the DNSSEC keys and sign the zone, use the sign command. For example: Note that to sign a zone with the above command, the auto-dnssec option has to be set to maintain in the zone statement. For example: 15.2.4.5. Enabling the DNSSEC Validation To enable the DNSSEC validation, issue the following command as root : Similarly, to disable this option, type: See the options statement described in Section 15.2.2.2, "Common Statement Types" for information on how to configure this option in /etc/named.conf . The Red Hat Enterprise Linux 7 Security Guide has a comprehensive section on DNSSEC. 15.2.4.6. Enabling the Query Logging To enable (or disable in case it is currently enabled) the query logging, issue the following command as root : To check the current setting, use the status command as described in Section 15.2.4.2, "Checking the Service Status" . 15.2.5. Using the dig Utility The dig utility is a command-line tool that allows you to perform DNS lookups and debug a nameserver configuration. Its typical usage is as follows: See Section 15.2.3.2, "Common Resource Records" for a list of common values to use for type . 15.2.5.1. Looking Up a Nameserver To look up a nameserver for a particular domain, use the command in the following form: In Example 15.17, "A sample nameserver lookup" , the dig utility is used to display nameservers for example.com . Example 15.17. A sample nameserver lookup 15.2.5.2. Looking Up an IP Address To look up an IP address assigned to a particular domain, use the command in the following form: In Example 15.18, "A sample IP address lookup" , the dig utility is used to display the IP address of example.com . Example 15.18. A sample IP address lookup 15.2.5.3. Looking Up a Host Name To look up a host name for a particular IP address, use the command in the following form: In Example 15.19, "A Sample Host Name Lookup" , the dig utility is used to display the host name assigned to 192.0.32.10 . Example 15.19. A Sample Host Name Lookup 15.2.6. Advanced Features of BIND Most BIND implementations only use the named service to provide name resolution services or to act as an authority for a particular domain. However, BIND version 9 has a number of advanced features that allow for a more secure and efficient DNS service. Important Before attempting to use advanced features like DNSSEC, TSIG, or IXFR (Incremental Zone Transfer), make sure that the particular feature is supported by all nameservers in the network environment, especially when you use older versions of BIND or non-BIND servers. All of the features mentioned are discussed in greater detail in the BIND 9 Administrator Reference Manual referenced in Section 15.2.8.1, "Installed Documentation" . 15.2.6.1. Multiple Views Optionally, different information can be presented to a client depending on the network a request originates from. 
This is primarily used to deny sensitive DNS entries from clients outside of the local network, while allowing queries from clients inside the local network. To configure multiple views, add the view statement to the /etc/named.conf configuration file. Use the match-clients option to match IP addresses or entire networks and give them special options and zone data. 15.2.6.2. Incremental Zone Transfers (IXFR) Incremental Zone Transfers ( IXFR ) allow a secondary nameserver to only download the updated portions of a zone modified on a primary nameserver. Compared to the standard transfer process, this makes the notification and update process much more efficient. Note that IXFR is only available when using dynamic updating to make changes to primary zone records. If manually editing zone files to make changes, Automatic Zone Transfer ( AXFR ) is used. 15.2.6.3. Transaction SIGnatures (TSIG) Transaction SIGnatures (TSIG) ensure that a shared secret key exists on both primary and secondary nameservers before allowing a transfer. This strengthens the standard IP address-based method of transfer authorization, since attackers would not only need to have access to the IP address to transfer the zone, but they would also need to know the secret key. Since version 9, BIND also supports TKEY , which is another shared secret key method of authorizing zone transfers. Important When communicating over an insecure network, do not rely on IP address-based authentication only. 15.2.6.4. DNS Security Extensions (DNSSEC) Domain Name System Security Extensions ( DNSSEC ) provide origin authentication of DNS data, authenticated denial of existence, and data integrity. When a particular domain is marked as secure, the SERVFAIL response is returned for each resource record that fails the validation. Note that to debug a DNSSEC-signed domain or a DNSSEC-aware resolver, you can use the dig utility as described in Section 15.2.5, "Using the dig Utility" . Useful options are +dnssec (requests DNSSEC-related resource records by setting the DNSSEC OK bit), +cd (tells the recursive nameserver not to validate the response), and +bufsize=512 (changes the packet size to 512B to get through some firewalls). 15.2.6.5. Internet Protocol version 6 (IPv6) Internet Protocol version 6 ( IPv6 ) is supported through the use of AAAA resource records, and the listen-on-v6 directive as described in Table 15.3, "Commonly Used Configuration Options" . 15.2.7. Common Mistakes to Avoid The following is a list of recommendations on how to avoid common mistakes users make when configuring a nameserver: Use semicolons and curly brackets correctly An omitted semicolon or unmatched curly bracket in the /etc/named.conf file can prevent the named service from starting. Use period (the . character) correctly In zone files, a period at the end of a domain name denotes a fully qualified domain name. If omitted, the named service will append the name of the zone or the value of $ORIGIN to complete it. Increment the serial number when editing a zone file If the serial number is not incremented, the primary nameserver will have the correct, new information, but the secondary nameservers will never be notified of the change, and will not attempt to refresh their data for that zone. Configure the firewall If a firewall is blocking connections from the named service to other nameservers, the recommended practice is to change the firewall settings.
Warning Using a fixed UDP source port for DNS queries is a potential security vulnerability that could allow an attacker to conduct cache-poisoning attacks more easily. To prevent this, by default DNS sends from a random ephemeral port. Configure your firewall to allow outgoing queries from a random UDP source port. The range 1024 to 65535 is used by default. 15.2.8. Additional Resources The following sources of information provide additional resources regarding BIND. 15.2.8.1. Installed Documentation BIND features a full range of installed documentation covering many different topics, each placed in its own subject directory. For each item below, replace version with the version of the bind package installed on the system: /usr/share/doc/bind- version / The main directory containing the most recent documentation. The directory contains the BIND 9 Administrator Reference Manual in HTML and PDF formats, which details BIND resource requirements, how to configure different types of nameservers, how to perform load balancing, and other advanced topics. /usr/share/doc/bind- version /sample/etc/ The directory containing examples of named configuration files. rndc(8) The manual page for the rndc name server control utility, containing documentation on its usage. named(8) The manual page for the Internet domain name server named , containing documentation on assorted arguments that can be used to control the BIND nameserver daemon. lwresd(8) The manual page for the lightweight resolver daemon lwresd , containing documentation on the daemon and its usage. named.conf(5) The manual page with a comprehensive list of options available within the named configuration file. rndc.conf(5) The manual page with a comprehensive list of options available within the rndc configuration file. 15.2.8.2. Online Resources https://access.redhat.com/site/articles/770133 A Red Hat Knowledgebase article about running BIND in a chroot environment, including the differences compared to Red Hat Enterprise Linux 6. https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/ The Red Hat Enterprise Linux 7 Security Guide has a comprehensive section on DNSSEC. https://www.icann.org/namecollision The ICANN FAQ on domain name collision .
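The DNSSEC debugging options described in Section 15.2.6.4, "DNS Security Extensions (DNSSEC)" can be combined with the lookups shown in Section 15.2.5, "Using the dig Utility". The following is a brief, hypothetical sketch against a signed domain; the domain name is a placeholder and the exact output depends on the resolver in use:

~]$ dig example.com A +dnssec                # request RRSIG records by setting the DNSSEC OK (DO) bit
~]$ dig example.com A +dnssec +cd            # ask the recursive nameserver not to validate (useful when debugging SERVFAIL)
~]$ dig example.com A +dnssec +bufsize=512   # limit the advertised EDNS buffer size to pass restrictive firewalls

If the first query returns the ad (authenticated data) flag, the resolver validated the answer; if it returns SERVFAIL but the +cd query succeeds, the data itself is reachable and the failure is a validation problem.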
[ "statement-1 [\" statement-1-name \"] [ statement-1-class ] { option-1 ; option-2 ; option-N ; }; statement-2 [\" statement-2-name \"] [ statement-2-class ] { option-1 ; option-2 ; option-N ; }; statement-N [\" statement-N-name \"] [ statement-N-class ] { option-1 ; option-2 ; option-N ; };", "~]# vim -c \"set backupcopy=yes\" /etc/named.conf", "~]# yum install bind-chroot", "~]USD systemctl status named", "~]# systemctl stop named", "~]# systemctl disable named", "~]# systemctl enable named-chroot", "~]# systemctl start named-chroot", "~]# systemctl status named-chroot", "acl acl-name { match-element ; };", "acl black-hats { 10.0.2.0/24; 192.168.0.0/24; 1234:5678::9abc/24; }; acl red-hats { 10.0.1.0/24; }; options { blackhole { black-hats; }; allow-query { red-hats; }; allow-query-cache { red-hats; }; };", "include \" file-name \"", "include \"/etc/named.rfc1912.zones\";", "options { option ; };", "options { allow-query { localhost; }; listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; max-cache-size 256M; directory \"/var/named\"; statistics-file \"/var/named/data/named_stats.txt\"; recursion yes; dnssec-enable yes; dnssec-validation yes; pid-file \"/run/named/named.pid\"; session-keyfile \"/run/named/session.key\"; };", "zone zone-name [ zone-class ] { option ; };", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-transfer { 192.168.0.2; }; };", "zone \"example.com\" { type slave; file \"slaves/example.com.zone\"; masters { 192.168.0.1; }; };", "notify yes; // notify all secondary nameservers", "notify yes; # notify all secondary nameservers", "notify yes; /* notify all secondary nameservers */", "USDINCLUDE /var/named/penguin.example.com", "USDORIGIN example.com.", "USDTTL 1D", "hostname IN A IP-address", "server1 IN A 10.0.1.3 IN A 10.0.1.5", "alias-name IN CNAME real-name", "server1 IN A 10.0.1.5 www IN CNAME server1", "IN MX preference-value email-server-name", "example.com. IN MX 10 mail.example.com. IN MX 20 mail2.example.com.", "IN NS nameserver-name", "IN NS dns1.example.com. IN NS dns2.example.com.", "last-IP-digit IN PTR FQDN-of-system", "@ IN SOA primary-name-server hostmaster-email ( serial-number time-to-refresh time-to-retry time-to-expire minimum-TTL )", "@ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day", "604800 ; expire after 1 week", "USDORIGIN example.com. USDTTL 86400 @ IN SOA dns1.example.com. hostmaster.example.com. ( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day ; ; IN NS dns1.example.com. IN NS dns2.example.com. dns1 IN A 10.0.1.1 IN AAAA aaaa:bbbb::1 dns2 IN A 10.0.1.2 IN AAAA aaaa:bbbb::2 ; ; @ IN MX 10 mail.example.com. IN MX 20 mail2.example.com. mail IN A 10.0.1.5 IN AAAA aaaa:bbbb::5 mail2 IN A 10.0.1.6 IN AAAA aaaa:bbbb::6 ; ; ; This sample zone file illustrates sharing the same IP addresses ; for multiple services: ; services IN A 10.0.1.10 IN AAAA aaaa:bbbb::10 IN A 10.0.1.11 IN AAAA aaaa:bbbb::11 ftp IN CNAME services.example.com. www IN CNAME services.example.com. ; ;", "zone \"example.com\" IN { type master; file \"example.com.zone\"; allow-update { none; }; };", "USDORIGIN 1.0.10.in-addr.arpa. USDTTL 86400 @ IN SOA dns1.example.com. hostmaster.example.com. 
( 2001062501 ; serial 21600 ; refresh after 6 hours 3600 ; retry after 1 hour 604800 ; expire after 1 week 86400 ) ; minimum TTL of 1 day ; @ IN NS dns1.example.com. ; 1 IN PTR dns1.example.com. 2 IN PTR dns2.example.com. ; 5 IN PTR server1.example.com. 6 IN PTR server2.example.com. ; 3 IN PTR ftp.example.com. 4 IN PTR ftp.example.com.", "zone \"1.0.10.in-addr.arpa\" IN { type master; file \"example.com.rr.zone\"; allow-update { none; }; };", "rndc [ option ...] command [ command-option ]", "~]# chmod o-rwx /etc/rndc.key", "~]# rndc status version: 9.7.0-P2-RedHat-9.7.0-5.P2.el6 CPUs found: 1 worker threads: 1 number of zones: 16 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running", "~]# rndc reload server reload successful", "~]# rndc reload localhost zone reload up-to-date", "~]# rndc reconfig", "~]# rndc freeze localhost", "~]# rndc thaw localhost The zone reload and thaw was successful.", "~]# rndc sign localhost", "zone \"localhost\" IN { type master; file \"named.localhost\"; allow-update { none; }; auto-dnssec maintain; };", "~]# rndc validation on", "~]# rndc validation off", "~]# rndc querylog", "dig [@ server ] [ option ...] name type", "dig name NS", "~]USD dig example.com NS ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> example.com NS ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57883 ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;example.com. IN NS ;; ANSWER SECTION: example.com. 99374 IN NS a.iana-servers.net. example.com. 99374 IN NS b.iana-servers.net. ;; Query time: 1 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:04:06 2010 ;; MSG SIZE rcvd: 77", "dig name A", "~]USD dig example.com A ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> example.com A ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4849 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0 ;; QUESTION SECTION: ;example.com. IN A ;; ANSWER SECTION: example.com. 155606 IN A 192.0.32.10 ;; AUTHORITY SECTION: example.com. 99175 IN NS a.iana-servers.net. example.com. 99175 IN NS b.iana-servers.net. ;; Query time: 1 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:07:25 2010 ;; MSG SIZE rcvd: 93", "dig -x address", "~]USD dig -x 192.0.32.10 ; <<>> DiG 9.7.1-P2-RedHat-9.7.1-2.P2.fc13 <<>> -x 192.0.32.10 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29683 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 5, ADDITIONAL: 6 ;; QUESTION SECTION: ;10.32.0.192.in-addr.arpa. IN PTR ;; ANSWER SECTION: 10.32.0.192.in-addr.arpa. 21600 IN PTR www.example.com. ;; AUTHORITY SECTION: 32.0.192.in-addr.arpa. 21600 IN NS b.iana-servers.org. 32.0.192.in-addr.arpa. 21600 IN NS c.iana-servers.net. 32.0.192.in-addr.arpa. 21600 IN NS d.iana-servers.net. 32.0.192.in-addr.arpa. 21600 IN NS ns.icann.org. 32.0.192.in-addr.arpa. 21600 IN NS a.iana-servers.net. ;; ADDITIONAL SECTION: a.iana-servers.net. 13688 IN A 192.0.34.43 b.iana-servers.org. 5844 IN A 193.0.0.236 b.iana-servers.org. 5844 IN AAAA 2001:610:240:2::c100:ec c.iana-servers.net. 12173 IN A 139.91.1.10 c.iana-servers.net. 12173 IN AAAA 2001:648:2c30::1:10 ns.icann.org. 12884 IN A 192.0.34.126 ;; Query time: 156 msec ;; SERVER: 10.34.255.7#53(10.34.255.7) ;; WHEN: Wed Aug 18 18:25:15 2010 ;; MSG SIZE rcvd: 310" ]
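Section 15.2.6.3, "Transaction SIGnatures (TSIG)" describes key-based transfer authorization, but the chapter does not include a configuration example. The following is a minimal, hypothetical named.conf sketch; the key name transfer-key, the secret, and the 10.0.1.1 address are placeholders (a real secret can be generated with a utility such as ddns-confgen), and both servers must share an identical key statement:

# on both the primary and the secondary nameserver
key "transfer-key" {
    algorithm hmac-sha256;
    secret "base64-encoded-shared-secret==";   # placeholder; never reuse this value
};

# on the primary: only allow zone transfers signed with the key
zone "example.com" IN {
    type master;
    file "example.com.zone";
    allow-transfer { key transfer-key; };
};

# on the secondary: sign requests sent to the primary (10.0.1.1) with the same key
server 10.0.1.1 {
    keys { transfer-key; };
};

With this in place, an unsigned transfer request is refused even if it originates from an otherwise allowed IP address.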
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-BIND
Chapter 15. Integrating with Apache ActiveMQ
Chapter 15. Integrating with Apache ActiveMQ Overview If you are using Apache ActiveMQ as your JMS provider, the JNDI name of your destinations can be specified in a special format that dynamically creates JNDI bindings for queues or topics. This means that it is not necessary to configure the JMS provider in advance with the JNDI bindings for your queues or topics. The initial context factory The key to integrating Apache ActiveMQ with JNDI is the ActiveMQInitialContextFactory class. This class is used to create a JNDI InitialContext instance, which you can then use to access JMS destinations in the JMS broker. Example 15.1, "SOAP/JMS WSDL to connect to Apache ActiveMQ" shows SOAP/JMS WSDL extensions to create a JNDI InitialContext that is integrated with Apache ActiveMQ. Example 15.1. SOAP/JMS WSDL to connect to Apache ActiveMQ In Example 15.1, "SOAP/JMS WSDL to connect to Apache ActiveMQ" , the Apache ActiveMQ client connects to the broker port located at tcp://localhost:61616 . Looking up the connection factory As well as creating a JNDI InitialContext instance, you must specify the JNDI name that is bound to a javax.jms.ConnectionFactory instance. In the case of Apache ActiveMQ, there is a predefined binding in the InitialContext instance, which maps the JNDI name ConnectionFactory to an ActiveMQConnectionFactory instance. Example 15.2, "SOAP/JMS WSDL for specifying the Apache ActiveMQ connection factory" shows the SOAP/JMS extension element for specifying the Apache ActiveMQ connection factory. Example 15.2. SOAP/JMS WSDL for specifying the Apache ActiveMQ connection factory Syntax for dynamic destinations To access queues or topics dynamically, specify the destination's JNDI name as a JNDI composite name in either of the following formats: QueueName and TopicName are the names that the Apache ActiveMQ broker uses. They are not abstract JNDI names. Example 15.3, "WSDL port specification with a dynamically created queue" shows a WSDL port that uses a dynamically created queue. Example 15.3. WSDL port specification with a dynamically created queue When the application attempts to open the JMS connection, Apache ActiveMQ will check to see if a queue with the JNDI name greeter.request.queue exists. If it does not exist, it will create a new queue and bind it to the JNDI name greeter.request.queue .
[ "<soapjms:jndiInitialContextFactory> org.apache.activemq.jndi.ActiveMQInitialContextFactory </soapjms:jndiInitialContextFactory> <soapjms:jndiURL>tcp://localhost:61616</soapjms:jndiURL>", "<soapjms:jndiConnectionFactoryName> ConnectionFactory </soapjms:jndiConnectionFactoryName>", "dynamicQueues/ QueueName dynamicTopics/ TopicName", "<service name=\"JMSService\"> <port binding=\"tns:GreeterBinding\" name=\"JMSPort\"> <jms:address jndiConnectionFactoryName=\"ConnectionFactory\" jndiDestinationName=\"dynamicQueues/greeter.request.queue\" > <jms:JMSNamingProperty name=\"java.naming.factory.initial\" value=\"org.activemq.jndi.ActiveMQInitialContextFactory\" /> <jms:JMSNamingProperty name=\"java.naming.provider.url\" value=\"tcp://localhost:61616\" /> </jms:address> </port> </service>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFAMQIntegration
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1]
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. status object status represents the current information of a snapshot. 13.1.1. .spec Description spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. Type object Required deletionPolicy driver source volumeSnapshotRef Property Type Description deletionPolicy string deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. driver string driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. source object source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. sourceVolumeMode string SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either "Filesystem" or "Block". If not specified, it indicates the source volume's mode is unknown. This field is immutable. This field is an alpha field. volumeSnapshotClassName string name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation. volumeSnapshotRef object volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. 
For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 13.1.2. .spec.source Description source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. Type object Property Type Description snapshotHandle string snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. volumeHandle string volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable. 13.1.3. .spec.volumeSnapshotRef Description volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 13.1.4. .status Description status represents the current information of a snapshot. Type object Property Type Description creationTime integer creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. error object error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. readyToUse boolean readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. snapshotHandle string snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. volumeGroupSnapshotHandle string VolumeGroupSnapshotHandle is the CSI "group_snapshot_id" of a group snapshot on the underlying storage system. 13.1.5. .status.error Description error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 13.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents DELETE : delete collection of VolumeSnapshotContent GET : list objects of kind VolumeSnapshotContent POST : create a VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} DELETE : delete a VolumeSnapshotContent GET : read the specified VolumeSnapshotContent PATCH : partially update the specified VolumeSnapshotContent PUT : replace the specified VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status GET : read status of the specified VolumeSnapshotContent PATCH : partially update status of the specified VolumeSnapshotContent PUT : replace status of the specified VolumeSnapshotContent 13.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents HTTP method DELETE Description delete collection of VolumeSnapshotContent Table 13.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotContent Table 13.2. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotContent Table 13.3. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 202 - Accepted VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent HTTP method DELETE Description delete a VolumeSnapshotContent Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotContent Table 13.9. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotContent Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotContent Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.3. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent HTTP method GET Description read status of the specified VolumeSnapshotContent Table 13.16. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshotContent Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshotContent Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty
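The specification above can be exercised with a pre-provisioned snapshot, where an administrator creates the VolumeSnapshotContent by hand and binds it to a namespaced VolumeSnapshot. The following is a minimal sketch; the driver name, snapshot handle, and namespace are placeholders that must match your CSI driver and storage backend:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: preprovisioned-snapcontent
spec:
  deletionPolicy: Retain                    # keep the physical snapshot if the VolumeSnapshot is deleted
  driver: example.csi.vendor.com            # placeholder CSI driver name
  source:
    snapshotHandle: snap-0123456789abcdef   # placeholder CSI "snapshot_id" of the existing snapshot
  sourceVolumeMode: Filesystem
  volumeSnapshotRef:                        # must reference the VolumeSnapshot below for the binding to be valid
    name: preprovisioned-snapshot
    namespace: my-app
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: preprovisioned-snapshot
  namespace: my-app
spec:
  source:
    volumeSnapshotContentName: preprovisioned-snapcontent

After applying the manifest (for example with oc apply -f), the snapshot is usable once status.readyToUse reports true, which can be checked with oc get volumesnapshotcontent preprovisioned-snapcontent -o jsonpath='{.status.readyToUse}'.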
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/storage_apis/volumesnapshotcontent-snapshot-storage-k8s-io-v1