Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/security_overview/making-open-source-more-inclusive
10.5. Multi-Source Models: Planning and Execution
10.5. Multi-Source Models: Planning and Execution The planner logically treats a multi-source table as if it were a view containing the union all of the respective source tables. More complex partitioning scenarios, such as heterogeneous sources or list partitioning, require the use of a Partitioned Union. Most of the federated optimizations available over unions are still applicable in multi-source mode, including aggregation pushdown/decomposition, limit pushdown, join partitioning, and so on.
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/multi-source_models_planning_and_execution
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog.
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.6/pr01
2.3. Checking the Status of NetworkManager
2.3. Checking the Status of NetworkManager To check whether NetworkManager is running, use the systemctl status NetworkManager command shown below. Note that systemctl status displays Active: inactive (dead) when NetworkManager is not running.
[ "~]USD systemctl status NetworkManager NetworkManager.service - Network Manager Loaded: loaded (/lib/systemd/system/NetworkManager.service; enabled) Active: active (running) since Fri, 08 Mar 2013 12:50:04 +0100; 3 days ago" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-checking_the_status_of_networkmanager
Chapter 1. Red Hat build of Cryostat Operator
Chapter 1. Red Hat build of Cryostat Operator You can use the Red Hat build of Cryostat Operator to manage and configure your Cryostat instance. The Red Hat build of Cryostat Operator is available on the OpenShift Container Platform (OCP). 1.1. Overview of the Red Hat build of Cryostat Operator After you create or update a Cryostat application on the OpenShift Container Platform, the Red Hat build of Cryostat Operator creates and manages the Cryostat application. Operator level 2 seamless upgrades The Operator Capability Level for the Red Hat build of Cryostat Operator is set to Level 2 Seamless Upgrades on the Operator Lifecycle Manager framework. After you upgrade your Red Hat build of Cryostat Operator, the Red Hat build of Cryostat Operator automatically upgrades Cryostat and its related components. The automatic upgrade operation does not remove any JFR recordings, templates, rules, and other stored components, from your Cryostat instance. Note The automatic upgrade operation occurs only for minor releases or patch update releases of Cryostat. For major releases, you might need to re-install the Red Hat build of Cryostat Operator. Persistent volume claims You can create persistent volume claims (PVCs) on Red Hat OpenShift with the Red Hat build of Cryostat Operator so that your Cryostat application can store archived recordings on a cloud storage disk. Operator configuration settings Additionally, you can make the following changes to the default configuration settings for the Red Hat build of Cryostat Operator: Configure the PVC that was created by the Red Hat build of Cryostat Operator, so that your Cryostat application can store archived recordings on a cloud storage disk. Configure your Cryostat application to trust TLS certificates from specific applications. Disable cert-manager, so that the operator does not need to generate self-signed certificates for Cryostat components. Install custom event template files, which are located in ConfigMaps, to your Cryostat instance, so you can use the templates to create recordings when Cryostat starts. The following configuration options for the Red Hat build of Cryostat Operator are included: Resource requirements, which you can use to specify resource requests or limits for the core , datasource , grafana , storage , db , or auth-proxy containers. Service customization, so that you can control the services that the Red Hat build of Cryostat Operator creates. Sidecar report options, which the Red Hat build of Cryostat Operator can use to provision one or more report generators for your Cryostat application. Single-namespace or multi-namespace Cryostat instances The Red Hat build of Cryostat Operator provides a Cryostat API that you can use to create Cryostat instances that work in a single namespace or across multiple namespaces. You can control these Cryostat instances by using a GUI that is accessible from the Red Hat OpenShift web console. Note From Cryostat 3.0, the Cryostat API supports the creation of both single-namespace and multi-namespace instances. The Cluster Cryostat API that you could use to create multi-namespace instances in Cryostat 2.x releases is deprecated and superseded by the Cryostat API in Cryostat 3.x. Users who can access the multi-namespace Cryostat instance have access to all target applications in any namespace that is visible to that Cryostat instance. 
Therefore, when you deploy a multi-namespace Cryostat instance, you must consider which namespaces to select for monitoring, which namespace to install Cryostat into, and which users can have access rights. Prerequisites for configuring the Red Hat build of Cryostat Operator Before you configure the Red Hat build of Cryostat Operator, ensure that the following prerequisites are met: Installed the Red Hat build of Cryostat Operator in a project on Red Hat OpenShift. Created a Cryostat instance by using the Red Hat build of Cryostat Operator. Additional resources See Operator Capability Levels (Operator SDK) See Installing Cryostat on Red Hat OpenShift using an operator (Installing Cryostat) 1.2. Disabling cert-manager You can disable cert-manager functionality by configuring the enableCertManager property of the Red Hat build of Cryostat Operator. By default, Red Hat build of Cryostat Operator's enableCertManager property is set to true . This means that the Red Hat build of Cryostat Operator uses the cert-manager CA issuer to generate self-signed certificates for your Cryostat components. The Red Hat build of Cryostat Operator uses these certificates to enable HTTPS communication among Cryostat components operating in a cluster. You can set the enableCertManager property to false , so that the Red Hat build of Cryostat Operator does not need to generate self-signed certificates for Cryostat components. Important If you set the enableCertManager property to false , you could introduce potential security implications from unencrypted internal traffic to the cluster that contains your running Cryostat application. Prerequisites Logged in to the OpenShift Container Platform by using the Red Hat OpenShift web console. Procedure If you want to start creating a Cryostat instance, perform the following steps: On your Red Hat OpenShift web console, click Operators > Installed Operators . From the list of available Operators, select Red Hat build of Cryostat. On the Operator details page, click the Details tab. In the Provided APIs section, select Cryostat, and then click Create instance . On the Create Cryostat panel, to configure the enableCertManager property, choose one of the following options: If you want to use the Form view: Click the Form view radio button. Set the Enable cert-manager Integration switch to false , and then enter a value in the Name field. Figure 1.1. Toggling the Enable cert-manager Integration switch to false If you want to use the YAML view: Click the YAML view radio button. In the spec: key set of the YAML file, change the enableCertManager property to false . Example of configuring the spec: key set in a YAML file -- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: enableCertManager: false -- If you want to configure other properties in the custom resource (CR) for this Cryostat instance, see the other sections of this document for more information about these properties. If you want to finish creating this Cryostat instance, click Create . When you click Create , this Cryostat instance is available under the Cryostat tab on the Operator details page. You can subsequently edit the CR properties for a Cryostat instance by clicking the instance name on the Operator details page and then select Edit Cryostat from the Actions drop-down menu. The Red Hat build of Cryostat Operator automatically restarts your Cryostat application, enabling the application to run with the updated enableCertManager property configuration. 
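As a sketch of a CLI alternative to the web console steps above, the same CR could be applied with oc. The file name cryostat-sample.yaml is an assumption standing for wherever you saved the YAML example with enableCertManager set to false:

# Apply the Cryostat CR from the YAML example (the file name is hypothetical).
oc apply -f cryostat-sample.yaml

# Review the created resource; its status conditions are checked in the
# Verification steps that follow.
oc get cryostat cryostat-sample -o yaml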
Verification Select your Cryostat instance from the Cryostat tab on the Operator details page. Navigate to the Cryostat Conditions table. Verify that the TLSSetupComplete condition is set to true and that the Reason column for this condition is set to CertManagerDisabled . This indicates that you have set the enableCertManager property to false . Figure 1.2. Example showing the TLSSetupComplete condition set to true Additional resources See the cert-manager documentation See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat) 1.3. Customizing event templates You can configure the eventTemplates property of the Red Hat build of Cryostat Operator YAML configuration file to include multiple custom templates. An event template outlines the event recording criteria for your JDK Flight Recording (JFR). You can configure a JFR through its associated event template. By default, Red Hat build of Cryostat Operator includes some pre-configured event templates. These pre-configured event templates might not meet your needs, so you can use Red Hat build of Cryostat Operator to generate custom event templates for your Cryostat instance and store these templates in ConfigMaps for easier retrieval. You can generate a custom event template in the following ways: Use the Red Hat OpenShift web console to upload an event template into a custom resource. Edit the YAML file for your Cryostat custom resource on the Red Hat OpenShift web console. After you store a custom event template in a ConfigMap , you can deploy a new Cryostat instance with this custom event template. You can then use your custom event template with JFR to monitor your Java application to meet your needs. Prerequisites Logged in to the OpenShift Container Platform by using the Red Hat OpenShift web console. Logged in to your Cryostat web console. Procedure To download a default event template, navigate to your Cryostat web console and from the Events menu, click Downloads . Note Event templates are in XML format and have a file name extension of .jfc . Optional: If you want a custom event template, edit the downloaded default event template by using a text editor or XML editor to configure the template to meet your needs. Log in to your Red Hat OpenShift web console by entering the oc login command in your CLI. Create a ConfigMap resource from the event template by entering the following command in your CLI. You must issue the command in the path where you want to deploy your Cryostat application. You can use this resource to store an event template file that is inside the cluster where you run your Cryostat instance. Example of creating a ConfigMap resource by using the CLI If you want to start creating a Cryostat instance, perform the following steps: On your Red Hat OpenShift web console, click Operators > Installed Operators . From the list of available Operators, select Red Hat build of Cryostat. On the Operator details page, click the Details tab. In the Provided APIs section, select Cryostat, and then click Create instance . On the Create Cryostat panel, to upload an event template in XML format into a resource, choose one of the following options: If you want to use the Form view: Click the Form view radio button. Navigate to the Event Templates section of the Cryostat instance. From the Event Templates menu, click Add Event Template . An Event Templates section opens on your Red Hat OpenShift console. 
From the Config Map Name drop-down list, select the ConfigMap resource that contains your event template. Figure 1.3. Event Templates option for a Cryostat instance In the Filename field, enter the name of the .jfc file that is contained within your ConfigMap. If you want to use the YAML view: Click the YAML view radio button. Specify any custom event templates for the eventTemplates property. This property points the Red Hat build of Cryostat Operator to your ConfigMap, so that the Red Hat build of Cryostat Operator can read the event template. Example of specifying custom event templates for the eventTemplates property -- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: eventTemplates: - configMapName: custom-template1 filename: my-template1.jfc - configMapName: custom-template2 filename: my-template2.jfc -- Important You must select the name of a ConfigMap, which is associated with your Cryostat or Cluster Cryostat instance, from the configMapName drop-down list. Additionally, you must specify a key associated with the ConfigMap in the filename field. If you want to configure other properties in the custom resource (CR) for this Cryostat instance, see the other sections of this document for more information about these properties. If you want to finish creating this Cryostat instance, click Create . When you click Create , this Cryostat instance is available under the Cryostat tab on the Operator details page. You can subsequently edit the CR properties for a Cryostat instance by clicking the instance name on the Operator details page and then select Edit Cryostat from the Actions drop-down menu. The Red Hat build of Cryostat Operator can now provide the custom event template as an XML file to your Cryostat application. Your custom event template opens alongside default event templates in your Cryostat web console. Verification On the Cryostat web console, click Events from the menu. If an Authentication Required window opens on your web console, enter your credentials and click Save . Under the Event Templates tab, check if your custom event template shows in the list of available event templates. Figure 1.4. Example of a listed custom event template under the Event Templates tab Additional resources See Installing Cryostat on OpenShift using an operator (Installing Cryostat) See Accessing Cryostat by using the web console (Installing Cryostat) See Using custom event templates (Using Cryostat to manage a JFR recording) 1.4. Configuring TLS certificates You can specify the Red Hat build of Cryostat Operator to configure Cryostat to trust TLS certificates from specific applications. Cryostat attempts to open a JMX connection to a target JVM that uses a TLS certificate. For a successful JMX connection, the Cryostat must pass all its authentication checks on the target JVM certificate. You can specify multiple TLS secrets in the trustedCertSecrets array of the Red Hat build of Cryostat Operator YAML configuration file. You must specify the secret located in the same namespace as your Cryostat application in the secretName property of the array. The certificateKey property defaults to tls.crt , but you can change the value to an X.509 certificate file name. Important Configuring a TLS certificate is required only for applications that have enabled TLS for remote JMX connections by using the com.sun.management.jmxremote.registry.ssl=true attribute. Prerequisites Logged in to the OpenShift Container Platform by using the OpenShift web console. 
Logged in to your Cryostat web console. Procedure If you want to start creating a Cryostat instance, perform the following steps: On your Red Hat OpenShift web console, click Operators > Installed Operators . From the list of available Operators, select Red Hat build of Cryostat. On the Operator details page, click the Details tab. In the Provided APIs section, select Cryostat, and then click Create instance . On the Create Cryostat panel, to configure a TLS certificate, choose one of the following options: If you want to use the Form view: Click the Form view radio button. In the Name field, specify a name for the instance of Cryostat that you want to create. Expand the Trusted TLS Certificates option, then click Add Trusted TLS Certificates . A list of options displays on your Red Hat OpenShift web console. Figure 1.5. The Trusted TLS Certificates option Select a TLS secret from the Secret Name list. The Certificate Key field is optional. Note You can remove a TLS certificate by clicking Remove Trusted TLS Certificates . If you want to use the YAML view: Click the YAML view radio button. Specify your secret, which is located in the same namespace as your Cryostat application, in the secretName property of the trustedCertSecrets array. Example of specifying a secret in the trustedCertSecrets array -- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: trustedCertSecrets: - secretName: my-tls-secret -- Optional: Change the certificateKey property value to the application's X.509 certificate file name. If you do not change the value, the certificateKey property defaults to tls.crt . Example of changing the certificateKey property's value -- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: trustedCertSecrets: - secretName: my-tls-secret certificateKey: ca.crt -- If you want to configure other properties in the custom resource (CR) for this Cryostat instance, see the other sections of this document for more information about these properties. If you want to finish creating this Cryostat instance, click Create . When you click Create , this Cryostat instance is available under the Cryostat tab on the Operator details page. You can subsequently edit the CR properties for a Cryostat instance by clicking the instance name on the Operator details page and then select Edit Cryostat from the Actions drop-down menu. The Red Hat build of Cryostat Operator automatically restarts your Cryostat instance with the configured security settings. Verification Determine that all your application pods exist in the same OpenShift cluster namespace as your Cryostat pod by issuing the following command in your CLI: USD oc get pods Log in to the web console of your Cryostat instance. On the Dashboard menu for your Cryostat instance, select a target JVM from the Target list. In the navigation menu on the Cryostat web console, select Recordings . On the Authentication Required dialog window, enter your secret's credentials and then select Save to provide your credentials to the target JVM. Note If the selected target has password authentication enabled for JMX connections, you must provide the JMX credentials for the target JVM when prompted for a connection. Cryostat connects to your application through the authenticated JMX connection. You can now use the Recordings and Events functions to monitor your application's JFR data. 
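The trustedCertSecrets examples above reference a secret named my-tls-secret with a ca.crt key. As a minimal sketch, assuming a local certificate file and using a namespace placeholder, such a secret could be created with oc before you configure the CR:

# Create the secret in the same namespace as the Cryostat application.
# The key name (ca.crt) must match the certificateKey value in the CR.
oc create secret generic my-tls-secret \
    --from-file=ca.crt=./ca.crt \
    -n <cryostat-namespace>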
Additional resources See Creating a JDK Flight Recorder (JFR) recording (Creating a JFR recording with Cryostat) See Installing Cryostat on Red Hat OpenShift using an operator (Installing Cryostat) See Accessing Cryostat by using the web console (Installing Cryostat) 1.5. Changing storage volume options You can use the Red Hat build of Cryostat Operator to configure storage volumes for your Cryostat or Cluster Cryostat instance. Cryostat supports persistent volume claim (PVC) and emptyDir storage volume types. By default, Red Hat build of Cryostat Operator creates a PVC for your Cryostat or Cluster Cryostat instance that uses the default StorageClass resource with 500 mebibytes (MiB) of allocated storage. You can create a custom PVC for your Cryostat application on OpenShift Container Platform by choosing one of the following options: Navigating to Storage Options > PVC > Spec in the Form view window, and then customizing your PVC by completing the relevant fields. Navigating to the YAML view window, and then editing the storageOptions array in the spec: key set to meet your needs. Note You can learn more about creating a custom PVC by navigating to Changing storage volume options in the Using the Red Hat build of Cryostat Operator to configure Cryostat guide. You can configure the emptyDir storage volume for your Cryostat application on OpenShift Container Platform by choosing one of the following options: Enabling the Empty Dir setting in Storage Options on the Form view window. Setting the spec.storageOptions.emptyDir.enabled to true in the YAML view window. Prerequisites Logged in to the OpenShift Container Platform by using the Red Hat OpenShift web console. Procedure If you want to start creating a Cryostat instance, perform the following steps: On your Red Hat OpenShift web console, click Operators > Installed Operators . From the list of available Operators, select Red Hat build of Cryostat. On the Operator details page, click the Details tab. In the Provided APIs section, select Cryostat, and then click Create instance . On the Create Cryostat panel, to change storage settings for your Cryostat application, choose one of the following options: If you want to use the Form view: Click the Form view radio button. Navigate to the Storage Options section, and enter a value in the Name field. Expand Storage Options and click Empty Dir . An expanded selection of options opens on your Red Hat OpenShift web console. Set the Enabled switch to true . Figure 1.6. Example showing the Empty Dir switch set to true If you want to use the YAML view: Click the YAML view radio button. In the spec: key set of the YAML file, add the storageOptions definition and set the emptyDir property to true . Example showing the emptyDir property set as true -- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: storageOptions: emptyDir: enabled: true medium: "Memory" sizeLimit: 1Gi -- Optional: Set values for the medium and sizeLimit properties. If you want to configure other properties in the custom resource (CR) for this Cryostat instance, see the other sections of this document for more information about these properties. If you want to finish creating this Cryostat instance, click Create . When you click Create , this Cryostat instance is available under the Cryostat tab on the Operator details page. 
You can subsequently edit the CR properties for a Cryostat instance by clicking the instance name on the Operator details page and then select Edit Cryostat from the Actions drop-down menu. The Red Hat build of Cryostat Operator creates an EmptyDir volume for storage instead of creating a PVC for your Cryostat instance. 1.6. Scheduling options for Cryostat From the Red Hat OpenShift web console, you can use the Red Hat build of Cryostat Operator to define policies for scheduling a Cryostat application and its generated reports to nodes. You can define Node Selector , Affinities , and Tolerations definitions in the YAML configuration file for a Cryostat or Cluster Cryostat custom resource (CR) on Red Hat OpenShift. You must define these definitions under the spec.SchedulingOptions property for the Cryostat application and the spec.ReportOptions.SchedulingOptions property for the report generator sidecar. By specifying the SchedulingOptions property, the Cryostat application and its report generator sidecar pods will be scheduled on nodes that meet the scheduling criteria. a targeted node application can receive sidecar reports updates from a Cryostat instance. Example that shows the YAML configuration for a Cryostat CR that defines schedule options kind: Cryostat apiVersion: operator.cryostat.io/v1beta2 metadata: name: cryostat spec: schedulingOptions: nodeSelector: node: good affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node operator: In values: - good - better podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: good topologyKey: topology.kubernetes.io/zone podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: bad topologyKey: topology.kubernetes.io/zone tolerations: - key: node operator: Equal value: ok effect: NoExecute reportOptions: replicas: 1 schedulingOptions: nodeSelector: node: good affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node operator: In values: - good - better podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: good topologyKey: topology.kubernetes.io/zone podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: bad topologyKey: topology.kubernetes.io/zone tolerations: - key: node operator: Equal value: ok effect: NoExecute Alternatively, you can open your Red Hat OpenShift web console, create a Cryostat instance, and then define Affinities and Tolerations definitions in the SchedulingOptions and reportOptions.SchedulingOptions options for that Cryostat instance. Figure 1.7. The Report Options and Scheduling Options panels on the OpenShift web console
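The scheduling example above only takes effect on nodes that carry the node: good label and, for the NoExecute toleration, a matching taint. As a sketch with placeholder node names, the corresponding label and taint could be applied with oc:

# Label a node so it satisfies the nodeSelector and nodeAffinity terms in the example.
oc label node <node-name> node=good

# Optionally taint a node so that only pods carrying the matching toleration remain on it.
oc adm taint nodes <node-name> node=ok:NoExecute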
[ "-- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: enableCertManager: false --", "oc create configmap <template_name> --from-file= <path_to_custom_event_template>", "-- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: eventTemplates: - configMapName: custom-template1 filename: my-template1.jfc - configMapName: custom-template2 filename: my-template2.jfc --", "-- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: trustedCertSecrets: - secretName: my-tls-secret --", "-- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: trustedCertSecrets: - secretName: my-tls-secret certificateKey: ca.crt --", "oc get pods", "-- apiVersion: operator.cryostat.io/v1beta2 kind: Cryostat metadata: name: cryostat-sample spec: storageOptions: emptyDir: enabled: true medium: \"Memory\" sizeLimit: 1Gi --", "kind: Cryostat apiVersion: operator.cryostat.io/v1beta2 metadata: name: cryostat spec: schedulingOptions: nodeSelector: node: good affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node operator: In values: - good - better podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: good topologyKey: topology.kubernetes.io/zone podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: bad topologyKey: topology.kubernetes.io/zone tolerations: - key: node operator: Equal value: ok effect: NoExecute reportOptions: replicas: 1 schedulingOptions: nodeSelector: node: good affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node operator: In values: - good - better podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: good topologyKey: topology.kubernetes.io/zone podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: pod: bad topologyKey: topology.kubernetes.io/zone tolerations: - key: node operator: Equal value: ok effect: NoExecute" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_the_red_hat_build_of_cryostat_operator_to_configure_cryostat/assembly_cryostat-operator_cryostat
Chapter 7. Migrating JBoss EAP 7.x configurations to JBoss EAP 8.0
Chapter 7. Migrating JBoss EAP 7.x configurations to JBoss EAP 8.0 7.1. Migrating a JBoss EAP 7.x standalone server to JBoss EAP 8.0 By default, the JBoss Server Migration Tool performs the following tasks when migrating a standalone server configuration from JBoss EAP x to JBoss EAP 8.0. 7.1.1. Migrate JBoss Domain Properties The words master and slave on Domain related property names were replaced with the words 'primary' and 'secondary', and the migration automatically fixes any usage of the old property names. The console logs any property renamed by the migration. If the properties are successfully renamed, the following message is displayed: 7.1.2. Remove unsupported subsystems The JBoss Server Migration Tool removes all unsupported subsystem configurations and extensions from migrated server configurations. The tool logs each subsystem and extension to its log file and to the console as it is removed. Note Any subsystem that was not supported in JBoss EAP 7.x, but was added by an administrator to that server, is also not supported in JBoss EAP 8.0 and will be removed. To skip removal of the unsupported subsystems, set the subsystems.remove-unsupported-subsystems.skip environment property to true . You can override the default behavior of the JBoss Server Migration Tool and specify which subsystems and extensions should be included or excluded during the migration using the following environment properties. Table 7.1. Server migration environment properties Property name Property description extensions.excludes A list of module names of extensions that should never be migrated, for example, com.example.extension1 , com.example.extension3 . extensions.includes A list of module names of extensions that should always be migrated, for example, com.example.extension2 , com.example.extension4 . subsystems.excludes A list of subsystem namespaces, stripped of the version, that should never be migrated, for example, urn:jboss:domain:logging , urn:jboss:domain:ejb3 . subsystems.includes A list of subsystem namespaces, stripped of the version, that should always be migrated, for example, urn:jboss:domain:security , urn:jboss:domain:ee . 7.1.3. Migrate referenced modules for a standalone server A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a standalone server configuration is migrated using the following process. A module referenced by the datasource subsystem configuration is migrated as a datasource driver module. A module referenced by the ee subsystem configuration is migrated as a global module. A module referenced by the naming subsystem configuration is migrated as an object factory module. A module referenced by the messaging subsystem configuration is migrated as a Jakarta Messaging bridge module. Any extension that is not installed on the target configuration is migrated to the target server configuration. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. 7.1.4. 
Migrate referenced paths for a standalone server A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. 7.1.5. Migrate legacy Security Realms JBoss EAP 8 does not support the legacy Security Realms framework. The JBoss Server Migration Tool migrates the configuration to using the default JBoss EAP 8 Elytron replacements. If the default legacy security realm was not used, you may need to manually configure Elytron. The console logs configuration resources migrated to the default JBoss EAP 8 Elytron replacements. 7.1.6. Migrate legacy Security Domains JBoss EAP 8 does not support the legacy Security Domains framework. The the JBoss Server Migration Tool migrates the configuration to using the default JBoss EAP 8 Elytron replacements. If the default legacy security domain was not used, you may need to manually configure Elytron. The console logs all configuration resources migrated to the default JBoss EAP 8 Elytron replacements. 7.1.7. Migrate keycloak subsystem The keycloak subsystem is not supported in JBoss EAP 8 and is replaced by the elytron-oidc-client subsystem. By default, the JBoss Server Migration Tool automatically migrates any legacy subsystem configuration. To skip this migration task, set the subsystem.keycloak.migrate.skip environment property value to true . The legacy subsystem migration is performed without any interaction from the user. When the legacy keycloak subsystem migration is completed, the following message is displayed in the migration console: Any issues encountered during the migration are written to the log files and displayed in the migration console. 7.1.8. Migrate picketlink-federation subsystem The picketlink-federation subsystem is deprecated in JBoss EAP 8 and is replaced by the keycloak-saml subsystem. By default, the JBoss Server Migration Tool automatically migrates any legacy subsystem configuration. To skip this migration task, set the subsystem.picketlink-federation.migrate.skip environment property value to true . The legacy subsystem migration is performed without any interaction from the user. The legacy subsystem migration can fail due to the following reasons: The legacy picketlink-federation subsystem cannot be migrated to the keycloak-saml subsystem due to the target server missing the Keycloak client SAML adapter. Non-empty legacy picketlink-federation subsystem configurations that must be migrated manually. When the legacy picketlink-federation subsystem migration is completed, the following message is displayed in the migration console: Any issues encountered during the migration are written to the log files and displayed in the migration console. For more information, see the Migration Guide . 7.1.9. Update jgroups subsystem configuration The JBoss Server Migration Tool does not automate the migration of the jgroups subsystem configuration. The JBoss Server Migration Tool reverts the configuration to the default JBoss EAP 8 jgroups configuration. If the default JBoss EAP 8 jgroups subsystem configuration was not used, you may need to manually configure the jgroups subsystem configuration. The console logs a message when the jgroups subsystem configuration is updated: 7.1.10. 
Add the health subsystem for a standalone server The JBoss EAP 8.0 health subsystem provides support for a server's health functionality. The JBoss Server Migration Tool automatically adds the default health subsystem configuration to the migrated configuration file. To skip the addition of the health subsystem configuration, set the subsystem.health.add.skip environment property to true . After you add the health subsystem to JBoss EAP 8.0, you'll see the following message in your web console: 7.1.11. Add the metrics subsystem for a standalone server The JBoss EAP 8.0 metrics subsystem provides support for a server's metric functionality. The JBoss Server Migration Tool automatically adds the default metrics subsystem configuration to the migrated configuration file. To skip the addition of the metrics subsystem configuration, set the subsystem.metrics.add.skip environment property to true . After you add the metrics subsystem to JBoss EAP 8.0, you'll see the following message in your web console: 7.1.12. Migrate deployments for a standalone server The JBoss Server Migration Tool can migrate the following types of standalone server deployment configurations. Deployments it references, also known as persistent deployments . Deployments found in directories monitored by its deployment scanners . Deployment overlays it references. The migration of a deployment consists of installing related file resources on the target server, and possibly updating the migrated configuration. The JBoss Server Migration Tool is preconfigured to skip deployments by default when running in non-interactive mode. To enable migration of deployments, set the deployments.migrate-deployments.skip environment property to false . Important Be aware that when you run the JBoss Server Migration Tool in interactive mode and enter invalid input, the resulting behavior depends on the value of the deployments.migrate-deployments environment property. If deployments.migrate-deployments.skip is set to false and you enter invalid input, the tool will try to migrate the deployments. If deployments.migrate-deployments.skip is set to true and you enter invalid input, the tool will skip the deployments migration. Warning The JBoss Server Migration Tool does not determine whether deployed resources are compatible with the target server. This means that applications or resources might not deploy, might not work as expected, or might not work at all. Also be aware that artifacts such as JBoss EAP 7.3 *-jms.xml configuration files are copied without modification and can cause the JBoss EAP server to boot with errors. Red Hat recommends that you use the Migration Toolkit for Runtimes (MTR) to analyze deployments to determine compatibility among different JBoss EAP servers. For more information, see Product Documentation for Migration Toolkit for Runtimes . 7.1.12.1. Migrate persistent deployments for a standalone server To enable migration of persistent deployments when running in non-interactive mode, set the deployments.migrate-persistent-deployments.skip environment property to false . The JBoss Server Migration Tool searches for any persistent deployment references and lists them to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode, as described below. 
Migrating persistent deployments in non-interactive mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the persistent deployments. Persistent deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-persistent-deployments.skip properties are set to false . Migrating persistent deployments in interactive mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the persistent deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of persistent deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 7.1.12.2. Migrate Deployment scanner deployments for a standalone server Deployment scanners, which are only used in standalone server configurations, monitor a directory for new files and manage their deployment automatically or through special deployment marker files. To enable migration of deployments that are located in directories watched by a deployment scanner when running in non-interactive mode, set the deployments.migrate-deployment-scanner-deployments.skip environment property to false . When migrating a standalone server configuration, the JBoss Server Migration Tool first searches for any configured deployment scanners. For each scanner found, it searches its monitored directories for deployments marked as deployed and prints the results to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode, as described below. Migrating Deployment scanner deployments in non-interactive mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the deployment scanner deployments. Deployment scanner deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-deployment-scanner-deployments.skip properties are set to false . Migrating Deployment scanner deployments in interactive mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the deployment scanner deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of deployment scanner deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. 
Respond with no to remove the deployment from the migrated configuration. 7.1.12.3. Migrate deployment overlays for a standalone server The migration of deployment overlays is a fully automated process. If you have enabled migration of deployments by setting the deployments.migrate-deployments.skip environment property to false , the JBoss Server Migration Tool searches for deployment overlays referenced in the standalone server configuration that are linked to migrated deployments. It automatically migrates those that are found, removes those that are not referenced, and logs the results to its log file and to the console. 7.2. Migrating a JBoss EAP 7.x managed domain to JBoss EAP 8.0 Warning When you use the JBoss Server Migration Tool, migrate your domain controller before you migrate your hosts to ensure your domain controller must use the later version of JBoss EAP when compared to the version used by hosts. For example, a domain controller running on JBoss EAP 7 cannot handle a host running on JBoss EAP 8.0. By default, the JBoss Server Migration Tool performs the following tasks when migrating a managed domain configuration from JBoss EAP 7 to JBoss EAP 8.0. 7.2.1. Migrate JBoss Domain Properties The words master and slave on Domain related property names were replaced with the words 'primary' and 'secondary', and the migration automatically fixes any usage of the old property names. The console logs any property renamed by the migration. If the properties are successfully renamed, the following message is displayed: 7.2.2. Remove unsupported subsystems The JBoss Server Migration Tool removes all unsupported subsystem configurations and extensions from migrated server configurations. The tool logs each subsystem and extension to its log file and to the console as it is removed. Note Any subsystem that was not supported in JBoss EAP 7.x, but was added by an administrator to that server, is also not supported in JBoss EAP 8.0 and will be removed. To skip removal of the unsupported subsystems, set the subsystems.remove-unsupported-subsystems.skip environment property to true . You can override the default behavior of the JBoss Server Migration Tool and specify which subsystems and extensions should be included or excluded during the migration using the following environment properties. Table 7.2. Server migration environment properties Property name Property description extensions.excludes A list of module names of extensions that should never be migrated, for example, com.example.extension1 , com.example.extension3 . extensions.includes A list of module names of extensions that should always be migrated, for example, com.example.extension2 , com.example.extension4 . subsystems.excludes A list of subsystem namespaces, stripped of the version, that should never be migrated, for example, urn:jboss:domain:logging , urn:jboss:domain:ejb3 . subsystems.includes A list of subsystem namespaces, stripped of the version, that should always be migrated, for example, urn:jboss:domain:security , urn:jboss:domain:ee . 7.2.3. Migrate referenced modules for a managed domain A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a managed domain configuration is migrated using the following process. 
A module referenced by the datasource subsystem configuration is migrated as a datasource driver module. A module referenced by the ee subsystem configuration is migrated as a global module. A module referenced by the naming subsystem configuration is migrated as an object factory module. A module referenced by the messaging subsystem configuration is migrated as a Jakarta Messaging bridge module. Any extension that is not installed on the target configuration is migrated to the target server configuration. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. 7.2.4. Migrate referenced paths for a managed domain A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. 7.2.5. Migrate legacy Security Realms JBoss EAP 8 does not support the legacy Security Realms framework. The JBoss Server Migration Tool migrates the configuration to using the default JBoss EAP 8 Elytron replacements. If the default legacy security domain is not used, you may need to manually configure Elytron. The console logs configuration resources migrated to the default JBoss EAP 8 Elytron replacements. 7.2.6. Migrate legacy Security Domains JBoss EAP 8 does not support the legacy Security Domains framework. The JBoss Server Migration Tool migrates the configuration to using the default JBoss EAP 8 Elytron replacements. If the default legacy security domain was not used, you may need to manually configure Elytron. The console logs all configuration resources migrated to the default JBoss EAP 8 Elytron replacements. 7.2.7. Migrate keycloak subsystem The keycloak subsystem is not supported in JBoss EAP 8 and is replaced by the elytron-oidc-client subsystem. By default, the JBoss Server Migration Tool automatically migrates any legacy subsystem configuration. To skip this migration task, set the subsystem.keycloak.migrate.skip environment property value to true . The legacy subsystem migration is performed without any interaction from the user. When the legacy keycloak subsystem migration is completed, the following message is displayed in the migration console: Any issues encountered during the migration are written to the log files and displayed in the migration console. 7.2.8. Migrate picketlink-federation subsystem The picketlink-federation subsystem is deprecated in JBoss EAP 8 and is replaced by the keycloak-saml subsystem. By default, the JBoss Server Migration Tool automatically migrates any legacy subsystem configuration. To skip this migration task, set the subsystem.picketlink-federation.migrate.skip environment property value to true . The legacy subsystem migration is performed without any interaction from the user. The legacy subsystem migration can fail due to the following reasons: The legacy picketlink-federation subsystem cannot be migrated to the keycloak-saml subsystem due to the target server missing the Keycloak client SAML adapter. Non-empty legacy picketlink-federation subsystem configurations that must be migrated manually. 
When the legacy picketlink-federation subsystem migration is completed, the following message is displayed in the migration console: Any issues encountered during the migration are written to the log files and displayed in the migration console. For more information, see the Migration Guide . 7.2.9. Update jgroups subsystem configuration The JBoss Server Migration Tool does not automate the migration of the jgroups subsystem configuration. The JBoss Server Migration Tool reverts the configuration to the default JBoss EAP 8 jgroups configuration. If the default JBoss EAP 8 jgroups subsystem configuration was not used, you may need to manually configure the jgroups subsystem configuration. The console logs a message when the jgroups subsystem configuration is updated: 7.2.10. Add host excludes for managed domain migration The JBoss EAP 8.0 domain controller can potentially include functionality that is not supported by hosts running on older versions of the server. The host-exclude configuration specifies the resources that should be hidden from those older versions. When migrating a domain controller configuration, the JBoss Server Migration Tool adds to or replaces the source server's host-exclude configuration with the configuration of the target JBoss EAP 8.0 server. The JBoss Server Migration Tool automatically updates the host-exclude configuration and logs the results to its log file and to the console. 7.2.11. Migrate deployments for a managed domain The JBoss Server Migration Tool can migrate the following types of managed domain deployment configurations. Deployments it references, also known as persistent deployments . Deployment overlays it references. The migration of a deployment consists of installing related file resources on the target server, and possibly updating the migrated configuration. The JBoss Server Migration Tool is preconfigured to skip deployments by default when running in non-interactive mode. To enable migration of deployments, set the deployments.migrate-deployments.skip environment property to false . Important Be aware that when you run the JBoss Server Migration Tool in interactive mode and enter invalid input, the resulting behavior depends on the value of the deployments.migrate-deployments environment property. If deployments.migrate-deployments.skip is set to false and you enter invalid input, the tool will try to migrate the deployments. If deployments.migrate-deployments.skip is set to true and you enter invalid input, the tool will skip the deployments migration. Warning The JBoss Server Migration Tool does not determine whether deployed resources are compatible with the target server. This means that applications or resources might not deploy, might not work as expected, or might not work at all. Also be aware that artifacts such as JBoss EAP 7.3 *-jms.xml configuration files are copied without modification and can cause the JBoss EAP server to boot with errors. Red Hat recommends that you use the Migration Toolkit for Runtimes (MTR) to analyze deployments to determine compatibility among different JBoss EAP servers. For more information, see Product Documentation for Migration Toolkit for Runtimes . 7.2.11.1. Migrate persistent deployments for a managed domain To enable migration of persistent deployments when running in non-interactive mode, set the deployments.migrate-persistent-deployments.skip environment property to false . The JBoss Server Migration Tool searches for any persistent deployment references and lists them to the console. 
The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode, as described below. Migrating persistent deployments in non-interactive mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the persistent deployments. Persistent deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-persistent-deployments.skip properties are set to false . Migrating persistent deployments in interactive mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the persistent deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of persistent deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 7.2.11.2. Migrate deployment overlays for a managed domain The migration of deployment overlays is a fully automated process. If you have enabled migration of deployments by setting the deployments.migrate-deployments.skip environment property to false , the JBoss Server Migration Tool searches for deployment overlays referenced in the standalone server configuration that are linked to migrated deployments. It automatically migrates those that are found, removes those that are not referenced, and logs the results to its log file and to the console. 7.3. Migrating a JBoss EAP 7.x host configuration to JBoss EAP 8.0 By default, the JBoss Server Migration Tool performs the following tasks when migrating a host server configuration from JBoss EAP 7.x to JBoss EAP 8.0. 7.3.1. Migrate JBoss Domain Properties The words master and slave on Domain related property names were replaced with the words 'primary' and 'secondary', and the migration automatically fixes any usage of the old property names. The console logs any property renamed by the migration. If the properties are successfully renamed, the following message is displayed: 7.3.2. Migrate referenced modules for a host configuration A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a host server configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. 7.3.3. 
Migrate referenced paths for a host configuration A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated.
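To connect the environment properties discussed throughout this chapter to an actual run of the JBoss Server Migration Tool, here is a minimal sketch. The property names come from this chapter; the properties file location, script path, and exact invocation are assumptions that depend on how the tool is installed, so verify them against your distribution:

# Set the migration environment properties discussed above (for example,
# to enable deployment migration) in the tool's environment properties file.
# The path below is a placeholder for your installation's configuration file.
cat >> /path/to/migration-tool-config/environment.properties <<'EOF'
deployments.migrate-deployments.skip=false
deployments.migrate-persistent-deployments.skip=false
deployments.migrate-deployment-scanner-deployments.skip=false
EOF

# Run the tool against the source (JBoss EAP 7.x) and target (JBoss EAP 8.0) installations.
EAP_NEW_HOME/bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --target EAP_NEW_HOME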
[ "INFO JBoss domain property jboss.domain.master.address migrated to jboss.domain.primary.address INFO JBoss domain property jboss.domain.master.port migrated to jboss.domain.primary.port INFO JBoss domain property jboss.domain.master.protocol migrated to jboss.domain.primary.protocol", "INFO JBoss domain properties migrated.", "INFO Legacy security XML configuration retrieved. WARN Migrated Remoting subsystem's http connector resource /subsystem/remoting/http-connector/http-remoting-connector using a legacy security-realm, to Elytron's default application SASL Authentication Factory migration-defaultApplicationSaslAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem https-listener resource /subsystem/undertow/server/default-server/https-listener/https using a legacy security-realm, to Elytron's default TLS ServerSSLContext migration-defaultTLSServerSSLContext. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem http-invoker resource /subsystem/undertow/server/default-server/host/default-host/setting/http-invoker using a legacy security-realm, to Elytron's default Application HTTP AuthenticationFactory migration-defaultApplicationHttpAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! INFO Legacy security realms migrated to Elytron.", "WARN Migrated ejb3 subsystem resource /subsystem/ejb3/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated undertow subsystem resource /subsystem/undertow/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration!", "INFO Subsystem keycloak migrated.", "INFO Subsystem picketlink-federation migrated.", "WARN Configuration of JGroups protocols has been changed to match the default protocols of the target server. Please note that further manual configuration may be needed if the legacy configuration being used was not the source server's default configuration!", "INFO Subsystem health added.", "INFO Subsystem metrics added.", "INFO [ServerMigrationTask#67] Persistent deployments found: [cmtool-helloworld3.war, cmtool-helloworld4.war, cmtool-helloworld2.war, cmtool-helloworld1.war]", "This tool is not able to assert if persistent deployments found are compatible with the target server, skip persistent deployments migration? yes/no?", "Migrate all persistent deployments found? yes/no?", "Migrate persistent deployment 'helloworld01.war'? 
yes/no?", "INFO [ServerMigrationTask#68] Removed persistent deployment from configuration /deployment=helloworld01.war", "This tool is not able to assert if the scanner's deployments found are compatible with the target server, skip scanner's deployments migration? yes/no?", "Migrate all scanner's deployments found? yes/no?", "Migrate scanner's deployment 'helloworld02.war'? yes/no?", "INFO [ServerMigrationTask#69] Resource with path EAP_NEW_HOME/standalone/deployments/helloworld02.war migrated.", "INFO JBoss domain property jboss.domain.master.address migrated to jboss.domain.primary.address INFO JBoss domain property jboss.domain.master.port migrated to jboss.domain.primary.port INFO JBoss domain property jboss.domain.master.protocol migrated to jboss.domain.primary.protocol", "INFO JBoss domain properties migrated.", "INFO Legacy security XML configuration retrieved. WARN Migrated Remoting subsystem's http connector resource /profile/full-ha/subsystem/remoting/http-connector/http-remoting-connector using a legacy security-realm, to Elytron's default application SASL Authentication Factory migration-defaultApplicationSaslAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem https-listener resource /profile/full-ha/subsystem/undertow/server/default-server/https-listener/https using a legacy security-realm, to Elytron's default TLS ServerSSLContext migration-defaultTLSServerSSLContext. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem http-invoker resource /profile/full-ha/subsystem/undertow/server/default-server/host/default-host/setting/http-invoker using a legacy security-realm, to Elytron's default Application HTTP AuthenticationFactory migration-defaultApplicationHttpAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! INFO Legacy security realms migrated to Elytron. WARN Migrated Remoting subsystem's http connector resource /profile/full/subsystem/remoting/http-connector/http-remoting-connector using a legacy security-realm, to Elytron's default application SASL Authentication Factory migration-defaultApplicationSaslAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem https-listener resource /profile/full/subsystem/undertow/server/default-server/https-listener/https using a legacy security-realm, to Elytron's default TLS ServerSSLContext migration-defaultTLSServerSSLContext. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem http-invoker resource /profile/full/subsystem/undertow/server/default-server/host/default-host/setting/http-invoker using a legacy security-realm, to Elytron's default Application HTTP AuthenticationFactory migration-defaultApplicationHttpAuthenticationFactory. 
Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! INFO Legacy security realms migrated to Elytron. WARN Migrated Remoting subsystem's http connector resource /profile/ha/subsystem/remoting/http-connector/http-remoting-connector using a legacy security-realm, to Elytron's default application SASL Authentication Factory migration-defaultApplicationSaslAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem https-listener resource /profile/ha/subsystem/undertow/server/default-server/https-listener/https using a legacy security-realm, to Elytron's default TLS ServerSSLContext migration-defaultTLSServerSSLContext. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem http-invoker resource /profile/ha/subsystem/undertow/server/default-server/host/default-host/setting/http-invoker using a legacy security-realm, to Elytron's default Application HTTP AuthenticationFactory migration-defaultApplicationHttpAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! INFO Legacy security realms migrated to Elytron. WARN Migrated Remoting subsystem's http connector resource /profile/default/subsystem/remoting/http-connector/http-remoting-connector using a legacy security-realm, to Elytron's default application SASL Authentication Factory migration-defaultApplicationSaslAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem https-listener resource /profile/default/subsystem/undertow/server/default-server/https-listener/https using a legacy security-realm, to Elytron's default TLS ServerSSLContext migration-defaultTLSServerSSLContext. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! WARN Migrated Undertow subsystem http-invoker resource /profile/default/subsystem/undertow/server/default-server/host/default-host/setting/http-invoker using a legacy security-realm, to Elytron's default Application HTTP AuthenticationFactory migration-defaultApplicationHttpAuthenticationFactory. Please note that further manual Elytron configuration may be needed if the legacy security realm being used was not the source server's default Application Realm configuration! INFO Legacy security realms migrated to Elytron.", "WARN Migrated ejb3 subsystem resource /profile/default/subsystem/ejb3/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! 
WARN Migrated undertow subsystem resource /profile/default/subsystem/undertow/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated ejb3 subsystem resource /profile/full/subsystem/ejb3/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated undertow subsystem resource /profile/full/subsystem/undertow/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated messaging-activemq subsystem server resource /profile/full/subsystem/messaging-activemq/server/default, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated iiop-openjdk subsystem resource using legacy security domain to Elytron defaults. Please note that further manual Elytron configuration should be needed! WARN Migrated ejb3 subsystem resource /profile/ha/subsystem/ejb3/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated undertow subsystem resource /profile/ha/subsystem/undertow/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated ejb3 subsystem resource /profile/full-ha/subsystem/ejb3/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated undertow subsystem resource /profile/full-ha/subsystem/undertow/application-security-domain/other using legacy security domain other, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! WARN Migrated messaging-activemq subsystem server resource /profile/full-ha/subsystem/messaging-activemq/server/default, to Elytron's default application Security Domain. Please note that further manual Elytron configuration may be needed if the legacy security domain being used was not the source server's default Application Domain configuration! 
WARN Migrated iiop-openjdk subsystem resource using legacy security domain to Elytron defaults. Please note that further manual Elytron configuration should be needed!", "INFO Subsystem keycloak migrated.", "INFO Subsystem picketlink-federation migrated.", "WARN Configuration of JGroups protocols has been changed to match the default protocols of the target server. Please note that further manual configuration may be needed if the legacy configuration being used was not the source server's default configuration!", "INFO Host-excludes configuration added.", "INFO [ServerMigrationTask#67] Persistent deployments found: [cmtool-helloworld3.war, cmtool-helloworld4.war, cmtool-helloworld2.war, cmtool-helloworld1.war]", "This tool is not able to assert if persistent deployments found are compatible with the target server, skip persistent deployments migration? yes/no?", "Migrate all persistent deployments found? yes/no?", "Migrate persistent deployment 'helloworld01.war'? yes/no?", "INFO [ServerMigrationTask#68] Removed persistent deployment from configuration /deployment=helloworld01.war", "INFO JBoss domain property jboss.domain.master.address migrated to jboss.domain.primary.address INFO JBoss domain property jboss.domain.master.port migrated to jboss.domain.primary.port INFO JBoss domain property jboss.domain.master.protocol migrated to jboss.domain.primary.protocol", "INFO JBoss domain properties migrated." ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_the_jboss_server_migration_tool/assembly_migrate-configs-to-current-version-server-migration-tool_server-migration-tool
Chapter 40. Clustering
Chapter 40. Clustering The pcs tool now manages bundle resources in Pacemaker As a Technology Preview starting with Red Hat Enterprise Linux 7.4, the pcs tool supports bundle resources. You can now use the pcs resource bundle create and the pcs resource bundle update commands to create and modify a bundle. You can add a resource to an existing bundle with the pcs resource create command. For information on the parameters you can set for a bundle resource, run the pcs resource bundle --help command. (BZ#1433016) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now supports the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action, it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation that a ping to a router might detect. (BZ#1476401) Heuristics supported in corosync-qdevice as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connection to corosync-qnetd, and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd, where it is used in calculations to determine which partition should be quorate. (BZ#1413573, BZ#1389209)
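As an illustration of the bundle commands mentioned above, the following sketch creates a bundle and places a resource inside it. The bundle name, container image, and resource parameters are placeholders, and the options accepted can vary by pcs version, so verify them with pcs resource bundle --help before use:

# pcs resource bundle create httpd-bundle container docker image=registry.example.com/httpd:latest replicas=2
# pcs resource create httpd-app ocf:heartbeat:apache bundle httpd-bundle

The first command defines the bundle and its container settings; the second creates a resource and assigns it to run inside that bundle.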
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_clustering
22.5.2. The Channel Bonding Module
22.5.2. The Channel Bonding Module Red Hat Enterprise Linux allows administrators to bind NICs together into a single channel using the bonding kernel module and a special network interface, called a channel bonding interface . Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy. To channel bond multiple network interfaces, the administrator must perform the following steps: Add the following line to /etc/modprobe.conf : alias bond <N> bonding Replace <N> with the interface number, such as 0 . For each configured channel bonding interface, there must be a corresponding entry in /etc/modprobe.conf . Configure a channel bonding interface as outlined in Section 8.2.3, "Channel Bonding Interfaces" . To enhance performance, adjust available module options to ascertain what combination works best. Pay particular attention to the miimon or arp_interval and the arp_ip_target parameters. Refer to Section 22.5.2.1, " bonding Module Directives" for a listing of available options. After testing, place preferred module options in /etc/modprobe.conf . 22.5.2.1. bonding Module Directives Before finalizing the settings for the bonding module, it is a good idea to test which settings work best. To do this, open a shell prompt as root and type: Open another shell prompt and use the /sbin/insmod command to load the bonding module with different parameters while observing the kernel messages for errors. The /sbin/insmod command is issued in the following format: Replace <N> with the number for the bonding interface. Replace <parameter=value> with a space separated list of desired parameters for the interface. Once satisfied that there are no errors and after verifying the performance of the bonding interface, add the appropriate bonding module parameters to /etc/modprobe.conf . The following is a list of available parameters for the bonding module: mode= - Specifies one of four policies allowed for the bonding module. Acceptable values for this parameter are: 0 - Sets a round-robin policy for fault tolerance and load balancing. Transmissions are received and sent out sequentially on each bonded slave interface beginning with the first one available. 1 - Sets an active-backup policy for fault tolerance. Transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails. 2 - Sets an XOR (exclusive-or) policy for fault tolerance and load balancing. Using this method, the interface matches up the incoming request's MAC address with the MAC address for one of the slave NICs. Once this link is established, transmissions are sent out sequentially beginning with the first available interface. 3 - Sets a broadcast policy for fault tolerance. All transmissions are sent on all slave interfaces. 4 - Sets an IEEE 802.3ad dynamic link aggregation policy. Creates aggregation groups that share the same speed and duplex settings. Transmits and receives on all slaves in the active aggregator. Requires a switch that is 802.3ad compliant. 5 - Sets a Transmit Load Balancing (TLB) policy for fault tolerance and load balancing. The outgoing traffic is distributed according to the current load on each slave interface. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed slave. 6 - Sets an Active Load Balancing (ALB) policy for fault tolerance and load balancing. 
Includes transmit and receive load balancing for IPv4 traffic. Receive load balancing is achieved through ARP negotiation. miimon= - Specifies (in milliseconds) how often MII link monitoring occurs. This is useful if high availability is required because MII is used to verify that the NIC is active. To verify that the driver for a particular NIC supports the MII tool, type the following command as root: ethtool <interface-name> | grep "Link detected:" In this command, replace <interface-name> with the name of the device interface, such as eth0, not the bond interface. If MII is supported, the command returns: Link detected: yes If using a bonded interface for high availability, the module for each NIC must support MII. Setting the value to 0 (the default) turns this feature off. When configuring this setting, a good starting point for this parameter is 100. downdelay= - Specifies (in milliseconds) how long to wait after link failure before disabling the link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. updelay= - Specifies (in milliseconds) how long to wait before enabling a link. The value must be a multiple of the value specified in the miimon parameter. The value is set to 0 by default, which disables it. arp_interval= - Specifies (in milliseconds) how often ARP monitoring occurs. If using this setting while in mode 0 or 2 (the two load-balancing modes), the network switch must be configured to distribute packets evenly across the NICs. For more information on how to accomplish this, refer to /usr/share/doc/kernel-doc-<kernel-version>/Documentation/networking/bonding.txt. The value is set to 0 by default, which disables it. arp_ip_target= - Specifies the target IP address of ARP requests when the arp_interval parameter is enabled. Up to 16 IP addresses can be specified in a comma-separated list. primary= - Specifies the interface name, such as eth0, of the primary device. The primary device is the first of the bonding interfaces to be used and is not abandoned unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. This setting is only valid when the bonding interface is in active-backup mode. Refer to /usr/share/doc/kernel-doc-<kernel-version>/Documentation/networking/bonding.txt for more information. Important It is essential that either the arp_interval and arp_ip_target or miimon parameters are specified. Failure to do so can cause degradation of network performance in the event a link fails. Refer to the /usr/share/doc/kernel-doc-<kernel-version>/Documentation/networking/bonding.txt file for detailed instructions regarding bonding interfaces (note that you must have the kernel-doc package installed to read this file).
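To tie the steps above together, a minimal /etc/modprobe.conf sketch for a single bonding interface might look like the following. The interface names and parameter values are illustrative only; choose the mode and monitoring settings that suit your network after testing as described above:

alias bond0 bonding
options bond0 mode=1 miimon=100 primary=eth0

Here mode=1 selects the active-backup policy, miimon=100 enables MII link monitoring every 100 milliseconds, and primary=eth0 makes eth0 the preferred slave interface.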
[ "tail -f /var/log/messages", "/sbin/insmod bond <N> <parameter=value>", "/usr/share/doc/kernel-doc- <kernel-version> /Documentation/networking/bonding.txt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-modules-bonding
Chapter 5. Red Hat Decision Manager Spring Boot configuration
Chapter 5. Red Hat Decision Manager Spring Boot configuration After you create your Spring Boot project, you can configure several components to customize your application. 5.1. Configuring REST endpoints for Spring Boot applications After you create your Spring Boot project, you can configure the host, port, and path for the REST endpoint for your Spring Boot application. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. Configure the host, port, and path for the REST endpoints, where <ADDRESS> is the server address and <PORT> is the server port: server.address=<ADDRESS> server.port=<PORT> cxf.path=/rest The following example adds the REST endpoint to the address localhost on port 8090 . server.address=localhost server.port=8090 cxf.path=/rest 5.2. Configuring the KIE Server identity After you create your Spring Boot project, you can configure KIE Server so that it can be easily identified. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. Configure the KIE Server parameters as shown in the following example: kieserver.serverId=<BUSINESS-APPLICATION>-service kieserver.serverName=<BUSINESS-APPLICATION>-service kieserver.location=http://localhost:8090/rest/server kieserver.controllers=http://localhost:8080/business-central/rest/controller The following table describes the KIE Server parameters that you can configure in your business project: Table 5.1. kieserver parameters Parameter Values Description kieserver.serverId string The ID used to identify the business application when connecting to the Process Automation Manager controller. kieserver.serverName string The name used to identify the business application when it connects to the Process Automation Manager controller. Can be the same string used for the kieserver.serverId parameter. kieserver.location URL Used by other components that use the REST API to identify the location of this server. Do not use the location as defined by server.address and server.port . kieserver.controllers URLs A comma-separated list of controller URLs. 5.3. Configuring KIE Server components to start at runtime If you selected Business Automation when you created your Spring Boot business application, you can specify which KIE Server components must start at runtime. Prerequisites You have a Spring Boot business application service file that you created using the Maven archetype command. For more information, see Section 3.1, "Creating a Spring Boot business application from Maven archetypes" . Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the application.properties file in a text editor. 
To set a component to start at runtime, set the value of the component to true. The following table lists the components that you can set to start at runtime: Table 5.2. kieserver capabilities parameters Parameter Values Description kieserver.drools.enabled true, false Enables or disables the Decision Manager component. kieserver.dmn.enabled true, false Enables or disables the Decision Model and Notation (DMN) component. 5.4. Configuring business application user group providers With Red Hat Decision Manager, you can manage human-centric activities. To provide integration with user and group repositories, you can use two KIE API entry points: UserGroupCallback : Responsible for verifying whether a user or group exists and for collecting groups for a specific user UserInfo : Responsible for collecting additional information about users and groups, for example email addresses and preferred language You can configure both of these components by providing alternative code, either code provided out of the box or custom developed code. For the UserGroupCallback component, retain the default implementation because it is based on the security context of the application. For this reason, it does not matter which backend store is used for authentication and authorisation (for example, RH-SSO). It will be automatically used as a source of information for collecting user and group information. The UserInfo component is a separate component because it collects more advanced information. Prerequisites You have a Spring Boot business application. Procedure To provide an alternative implementation of UserGroupCallback , add the following code to the Application class or a separate class annotated with @Configuration : @Bean(name = "userGroupCallback") public UserGroupCallback userGroupCallback(IdentityProvider identityProvider) throws IOException { return new MyCustomUserGroupCallback(identityProvider); } To provide an alternative implementation of UserInfo , add the following code to the Application class or a separate class annotated with @Configuration : @Bean(name = "userInfo") public UserInfo userInfo() throws IOException { return new MyCustomUserInfo(); } 5.5. Enabling Swagger documentation You can enable Swagger-based documentation for all endpoints available in the service project of your Red Hat Decision Manager business application. Prerequisites You have a Spring Boot business application. Procedure Navigate to the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service folder, where <BUSINESS-APPLICATION> is the name of your Spring Boot project. Open the service project pom.xml file in a text editor. Add the following dependencies to the service project pom.xml file and save the file. <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-rs-service-description-swagger</artifactId> <version>3.2.6</version> </dependency> <dependency> <groupId>io.swagger</groupId> <artifactId>swagger-jaxrs</artifactId> <version>1.5.15</version> <exclusions> <exclusion> <groupId>javax.ws.rs</groupId> <artifactId>jsr311-api</artifactId> </exclusion> </exclusions> </dependency> To enable the Swagger UI (optional), add the following dependency to the pom.xml file and save the file. <dependency> <groupId>org.webjars</groupId> <artifactId>swagger-ui</artifactId> <version>2.2.10</version> </dependency> Open the <BUSINESS-APPLICATION>/<BUSINESS-APPLICATION>-service/src/main/resources/application.properties file in a text editor. 
Add the following line to the application.properties file to enable Swagger support: kieserver.swagger.enabled=true After you start the business application, you can view the Swagger document at http://localhost:8090/rest/swagger.json . The complete set of endpoints is available at http://localhost:8090/rest/api-docs?url=http://localhost:8090/rest/swagger.json .
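As a quick reference, the settings described in this chapter can be combined in a single application.properties file. The following sketch reuses the example host, port, and endpoint values shown earlier in this chapter; the application name and all values are illustrative and must be adjusted for your own business application:

server.address=localhost
server.port=8090
cxf.path=/rest
kieserver.serverId=business-application-service
kieserver.serverName=business-application-service
kieserver.location=http://localhost:8090/rest/server
kieserver.controllers=http://localhost:8080/business-central/rest/controller
kieserver.drools.enabled=true
kieserver.dmn.enabled=true
kieserver.swagger.enabled=true

All of these property names appear in the sections above; only the values are placeholders.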
[ "server.address=<ADDRESS> server.port=<PORT> cxf.path=/rest", "server.address=localhost server.port=8090 cxf.path=/rest", "kieserver.serverId=<BUSINESS-APPLICATION>-service kieserver.serverName=<BUSINESS-APPLICATION>-service kieserver.location=http://localhost:8090/rest/server kieserver.controllers=http://localhost:8080/business-central/rest/controller", "@Bean(name = \"userGroupCallback\") public UserGroupCallback userGroupCallback(IdentityProvider identityProvider) throws IOException { return new MyCustomUserGroupCallback(identityProvider); }", "@Bean(name = \"userInfo\") public UserInfo userInfo() throws IOException { return new MyCustomUserInfo(); }", "<dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-rs-service-description-swagger</artifactId> <version>3.2.6</version> </dependency> <dependency> <groupId>io.swagger</groupId> <artifactId>swagger-jaxrs</artifactId> <version>1.5.15</version> <exclusions> <exclusion> <groupId>javax.ws.rs</groupId> <artifactId>jsr311-api</artifactId> </exclusion> </exclusions> </dependency>", "<dependency> <groupId>org.webjars</groupId> <artifactId>swagger-ui</artifactId> <version>2.2.10</version> </dependency>", "kieserver.swagger.enabled=true" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/bus-app-configure-con_business-applications
Storage
Storage OpenShift Dedicated 4 Configuring storage for OpenShift Dedicated clusters Red Hat OpenShift Documentation Team
[ "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: \"2Gi\" 1 limits: ephemeral-storage: \"4Gi\" 2 volumeMounts: - name: ephemeral mountPath: \"/tmp\" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: \"2Gi\" limits: ephemeral-storage: \"4Gi\" volumeMounts: - name: ephemeral mountPath: \"/tmp\" volumes: - name: ephemeral emptyDir: {}", "df -h /var/lib", "Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /", "oc delete pv <pv-name>", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:", "oc get pv <pv-name> -o jsonpath='{.spec.claimRef.name}'", "oc get pv <pv-name> -o json | jq '.status.lastPhaseTransitionTime' 1", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3", "apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi", "apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3", "securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1", "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", 
"cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1", "patch clustercsidriver USDDRIVERNAME --type=merge -p \"{\\\"spec\\\":{\\\"storageClassState\\\":\\\"USD{STATE}\\\"}}\" 1", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:DescribeAccessPoints\", \"elasticfilesystem:DescribeFileSystems\", \"elasticfilesystem:DescribeMountTargets\", \"ec2:DescribeAvailabilityZones\", \"elasticfilesystem:TagResource\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:CreateAccessPoint\" ], \"Resource\": \"*\", \"Condition\": { \"StringLike\": { \"aws:RequestTag/efs.csi.aws.com/cluster\": \"true\" } } }, { \"Effect\": \"Allow\", \"Action\": \"elasticfilesystem:DeleteAccessPoint\", \"Resource\": \"*\", \"Condition\": { \"StringEquals\": { \"aws:ResourceTag/efs.csi.aws.com/cluster\": \"true\" } } } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::<your_aws_account_ID>:oidc-provider/<openshift_oidc_provider>\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"<openshift_oidc_provider>:sub\": [ 2 \"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-operator\", \"system:serviceaccount:openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa\" ] } } } ] }", "aws sts get-caller-identity --query Account --output text", "openshift_oidc_provider=`oc get authentication.config.openshift.io cluster -o json | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\"`; echo USDopenshift_oidc_provider", "ROLE_ARN=USD(aws iam create-role --role-name \"<your_cluster_name>-aws-efs-csi-operator\" --assume-role-policy-document file://<your_trust_file_name>.json --query \"Role.Arn\" --output text); echo USDROLE_ARN", "POLICY_ARN=USD(aws iam create-policy --policy-name \"<your_cluster_name>-aws-efs-csi\" --policy-document file://<your_policy_file_name>.json 
--query 'Policy.Arn' --output text); echo USDPOLICY_ARN", "aws iam attach-role-policy --role-name \"<your_cluster_name>-aws-efs-csi-operator\" --policy-arn USDPOLICY_ARN", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi", "apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3", "spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10", "oc edit clustercsidriver efs.csi.aws.com", "spec: driverConfig: driverType: AWS aws: efsVolumeMetrics: state: RecursiveWalk recursiveWalk: refreshPeriodMinutes: 100 fsRateLimit: 10", "oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created", "oc get clustercsidriver efs.csi.aws.com -o yaml", "oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: storageClassName: hyperdisk-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: 2048Gi 2", "apiVersion: apps/v1 kind: Deployment metadata: name: postgres spec: selector: matchLabels: app: postgres template: metadata: labels: app: postgres spec: nodeSelector: cloud.google.com/machine-family: n4 1 containers: - name: postgres image: postgres:14-alpine args: [ \"sleep\", \"3600\" ] volumeMounts: - name: sdk-volume mountPath: /usr/share/data/ volumes: - name: sdk-volume persistentVolumeClaim: claimName: my-pvc 2", "oc get deployment", "NAME READY UP-TO-DATE AVAILABLE AGE postgres 0/1 1 0 42s", "oc get pvc my-pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE my-pvc Bound pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6 2Ti RWO hyperdisk-sc <unset> 2m24s", "gcloud compute disks list", "NAME LOCATION LOCATION_SCOPE SIZE_GB TYPE STATUS instance-20240914-173145-boot us-central1-a zone 150 pd-standard READY 
instance-20240914-173145-data-workspace us-central1-a zone 100 pd-balanced READY c4a-rhel-vm us-central1-a zone 50 hyperdisk-balanced READY 1", "gcloud compute storage-pools list-disks pool-us-east4-c --zone=us-east4-c", "NAME STATUS PROVISIONED_IOPS PROVISIONED_THROUGHPUT SIZE_GB pvc-1ff52479-4c81-4481-aa1d-b21c8f8860c6 READY 3000 140 2048", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1", "oc describe storageclass csi-gce-pd-cmek", "Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi", "oc apply -f pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s", "export PROJECT_ID=USD(oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.gcp.projectID}')", "gcloud projects describe USDPROJECT_ID --format=\"value(projectNumber)\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-gcp-filestore-csi-driver-operator namespace: openshift-cloud-credential-operator annotations: include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" spec: serviceAccountNames: - gcp-filestore-csi-driver-operator - gcp-filestore-csi-driver-controller-sa secretRef: name: gcp-filestore-cloud-credentials namespace: openshift-cluster-csi-drivers providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/file.editor - roles/resourcemanager.tagUser skipServiceCheck: true", "./ccoctl gcp create-service-accounts --name=<filestore-service-account> \\ 1 --workload-identity-pool=<workload-identity-pool> \\ 2 --workload-identity-provider=<workload-identity-provider> \\ 3 --project=<project-id> \\ 4 --credentials-requests-dir=/tmp/credreq 5", "2025/02/10 17:47:39 Credentials loaded from gcloud CLI defaults 2025/02/10 17:47:42 IAM service account filestore-service-account-openshift-gcp-filestore-csi-driver-operator created 2025/02/10 17:47:44 Unable to add predefined roles to IAM service account, retrying 2025/02/10 17:47:59 Updated policy bindings for IAM service account filestore-service-account-openshift-gcp-filestore-csi-driver-operator 2025/02/10 17:47:59 Saved credentials configuration to: /tmp/install-dir/ 1 openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml", "cat /tmp/install-dir/manifests/openshift-cluster-csi-drivers-gcp-filestore-cloud-credentials-credentials.yaml | yq '.data[\"service_account.json\"]' | base64 -d | jq '.service_account_impersonation_url'", "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/filestore-se-openshift-g-ch8cm@openshift-gce-devel.iam.gserviceaccount.com:generateAccessToken", "gcloud services enable file.googleapis.com 
--project <my_gce_project> 1", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: filestore.csi.storage.gke.io spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: filestore-csi provisioner: filestore.csi.storage.gke.io parameters: connect-mode: DIRECT_PEERING 1 network: network-name 2 allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc -n openshift-machine-api get machinesets -o yaml | grep \"network:\" - network: gcp-filestore-network (...)", "kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi", "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html-single/storage/index
20.39. Managing Snapshots
20.39. Managing Snapshots The sections that follow describe actions that you can take to manipulate guest virtual machine snapshots. Snapshots take the disk, memory, and device state of a guest virtual machine at a specified point in time, and save it for future use. Snapshots have many uses, from saving a "clean" copy of an OS image to saving a guest virtual machine's state before what may be a potentially destructive operation. Snapshots are identified with a unique name. See the libvirt upstream website for documentation of the XML format used to represent properties of snapshots. Important Red Hat Enterprise Linux 7 only supports creating snapshots while the guest virtual machine is paused or powered down. Creating snapshots of running guests (also known as live snapshots) is available on Red Hat Virtualization. For details, call your service representative. 20.39.1. Creating Snapshots The virsh snapshot-create command creates a snapshot for a guest virtual machine with the properties specified in the guest virtual machine's XML file (such as <name> and <description> elements, as well as <disks>). To create a snapshot, run: The guest virtual machine name, ID, or UUID may be used as the guest virtual machine requirement. The XML requirement is a string that must at the very least contain the name, description, and disks elements. The remaining optional arguments are as follows: --disk-only - the memory state of the guest virtual machine is not included in the snapshot. If the XML file string is completely omitted, libvirt will choose a value for all fields. The new snapshot will become current, as listed by snapshot-current. In addition, the snapshot will only include the disk state rather than the usual system checkpoint with guest virtual machine state. Disk snapshots are faster than full system checkpoints, but reverting to a disk snapshot may require fsck or journal replays, since it is like the disk state at the point when the power cord is abruptly pulled. Note that mixing --halt and --disk-only loses any data that was not flushed to disk at the time. --halt - causes the guest virtual machine to be left in an inactive state after the snapshot is created. Mixing --halt and --disk-only loses any data that was not flushed to disk at the time as well as the memory state. --redefine specifies that all XML elements produced by virsh snapshot-dumpxml are valid; it can be used to migrate snapshot hierarchy from one machine to another, to recreate hierarchy for the case of a transient guest virtual machine that goes away and is later recreated with the same name and UUID, or to make slight alterations in the snapshot metadata (such as host-specific aspects of the guest virtual machine XML embedded in the snapshot). When this flag is supplied, the xmlfile argument is mandatory, and the guest virtual machine's current snapshot will not be altered unless the --current flag is also given. --no-metadata creates the snapshot, but any metadata is immediately discarded (that is, libvirt does not treat the snapshot as current, and cannot revert to the snapshot unless --redefine is later used to teach libvirt about the metadata again). --reuse-external - if used and the snapshot XML requests an external snapshot with a destination of an existing file, the destination must exist and is reused; otherwise, a snapshot is refused to avoid losing the contents of the existing files.
--quiesce - libvirt will try to freeze and unfreeze the guest virtual machine's mounted file system(s), using the guest agent. However, if the guest virtual machine does not have a guest agent, snapshot creation will fail. The snapshot can contain the memory state of the guest virtual machine. The snapshot must be external. --atomic causes libvirt to guarantee that the snapshot either succeeds, or fails with no changes. Note that not all hypervisors support this. If this flag is not specified, then some hypervisors may fail after partially performing the action, and virsh dumpxml must be used to see whether any partial changes occurred. Existence of snapshot metadata will prevent attempts to undefine a persistent guest virtual machine. However, for transient guest virtual machines, snapshot metadata is silently lost when the guest virtual machine quits running (whether by a command such as destroy or by an internal guest action). 20.39.2. Creating a Snapshot for the Current Guest Virtual Machine The virsh snapshot-create-as command creates a snapshot for a guest virtual machine with the properties specified in the domain XML file (such as name and description elements). If these values are not included in the XML string, libvirt will choose a value. To create a snapshot, run: The remaining optional arguments are as follows: --print-xml creates appropriate XML for snapshot-create as output, rather than actually creating a snapshot. --halt keeps the guest virtual machine in an inactive state after the snapshot is created. --disk-only creates a snapshot that does not include the guest virtual machine state. --memspec can be used to control whether a checkpoint is internal or external. The flag is mandatory, followed by a memspec of the form [file=]name[,snapshot=type], where type can be none, internal, or external. To include a literal comma in file=name, escape it with a second comma. The --diskspec option can be used to control how --disk-only and external checkpoints create external files. This option can occur multiple times, according to the number of <disk> elements in the domain XML. Each <diskspec> is in the form disk[,snapshot=type][,driver=type][,file=name]. If --diskspec is omitted for a specific disk, the default behavior in the virtual machine configuration is used. To include a literal comma in disk or in file=name, escape it with a second comma. A literal --diskspec must precede each diskspec unless all three of domain, name, and description are also present. For example, a diskspec of vda,snapshot=external,file=/path/to,,new results in the following XML: Important Red Hat recommends the use of external snapshots, as they are more flexible and reliable when handled by other virtualization tools. To create an external snapshot, use the virsh snapshot-create-as command with the --diskspec vda,snapshot=external option. If this option is not used, virsh creates internal snapshots, which are not recommended for use due to their lack of stability and optimization. For more information, see Section A.13, "Workaround for Creating External Snapshots with libvirt". If --reuse-external is specified, and the domain XML or diskspec option requests an external snapshot with a destination of an existing file, then the destination must exist and is reused; otherwise, a snapshot is refused to avoid losing the contents of the existing files. If --quiesce is specified, libvirt will try to use the guest agent to freeze and unfreeze the guest virtual machine's mounted file systems.
However, if the domain has no guest agent, snapshot creation will fail. Currently, this requires --disk-only to be passed as well. --no-metadata creates snapshot data but any metadata is immediately discarded (that is, libvirt does not treat the snapshot as current, and cannot revert to the snapshot unless snapshot-create is later used to teach libvirt about the metadata again). This flag is incompatible with --print-xml. --atomic will cause libvirt to guarantee that the snapshot either succeeds, or fails with no changes. Note that not all hypervisors support this. If this flag is not specified, then some hypervisors may fail after partially performing the action, and virsh dumpxml must be used to see whether any partial changes occurred. Warning Creating snapshots of KVM guests running on a 64-bit ARM platform host currently does not work. Note that KVM on 64-bit ARM is not supported by Red Hat. 20.39.3. Displaying the Snapshot Currently in Use The virsh snapshot-current command is used to query which snapshot is currently in use. If snapshotname is not used, snapshot XML for the guest virtual machine's current snapshot (if there is one) will be displayed as output. If --name is specified, just the current snapshot name instead of the full XML will be sent as output. If --security-info is supplied, security-sensitive information will be included in the XML. Using snapshotname generates a request to make the existing named snapshot become the current snapshot, without reverting the guest virtual machine to it. 20.39.4. snapshot-edit This command is used to edit the snapshot that is currently in use: If both snapshotname and --current are specified, it forces the edited snapshot to become the current snapshot. If snapshotname is omitted, then --current must be supplied in order to edit the current snapshot. This is equivalent to the command sequence below, but it also includes some error checking: If --rename is specified, then the snapshot is renamed. If --clone is specified, then changing the snapshot name will create a clone of the snapshot metadata. If neither is specified, then the edits will not change the snapshot name. Note that changing a snapshot name must be done with care, since the contents of some snapshots, such as internal snapshots within a single qcow2 file, are accessible only from the original snapshot name. 20.39.5. snapshot-info The snapshot-info domain command displays information about the snapshots. To use, run: This outputs basic information about a specified snapshot, or the current snapshot with --current. 20.39.6. snapshot-list List all of the available snapshots for the given guest virtual machine, defaulting to show columns for the snapshot name, creation time, and guest virtual machine state. To use, run: The optional arguments are as follows: --parent adds a column to the output table giving the name of the parent of each snapshot. This option may not be used with --roots or --tree. --roots filters the list to show only the snapshots that have no parents. This option may not be used with --parent or --tree. --tree displays output in a tree format, listing just snapshot names. This option may not be used with --roots or --parent. --from filters the list to snapshots which are children of the given snapshot or, if --current is provided, will cause the list to start at the current snapshot. When used in isolation or with --parent, the list is limited to direct children unless --descendants is also present.
When used with --tree, the use of --descendants is implied. This option is not compatible with --roots. Note that the starting point of --from or --current is not included in the list unless the --tree option is also present. If --leaves is specified, the list will be filtered to just snapshots that have no children. Likewise, if --no-leaves is specified, the list will be filtered to just snapshots with children. (Note that omitting both options does no filtering, while providing both options will either produce the same list or error out depending on whether the server recognizes the flags.) Filtering options are not compatible with --tree. If --metadata is specified, the list will be filtered to just snapshots that involve libvirt metadata, and thus would prevent the undefining of a persistent guest virtual machine, or be lost on destroy of a transient guest virtual machine. Likewise, if --no-metadata is specified, the list will be filtered to just snapshots that exist without the need for libvirt metadata. If --inactive is specified, the list will be filtered to snapshots that were taken when the guest virtual machine was shut off. If --active is specified, the list will be filtered to snapshots that were taken when the guest virtual machine was running, and where the snapshot includes the memory state to revert to that running state. If --disk-only is specified, the list will be filtered to snapshots that were taken when the guest virtual machine was running, but where the snapshot includes only disk state. If --internal is specified, the list will be filtered to snapshots that use internal storage of existing disk images. If --external is specified, the list will be filtered to snapshots that use external files for disk images or memory state. 20.39.7. snapshot-dumpxml The virsh snapshot-dumpxml domain snapshot command outputs the snapshot XML for the guest virtual machine's snapshot named snapshot. To use, run: The --security-info option will also include security-sensitive information. Use virsh snapshot-current to easily access the XML of the current snapshot. 20.39.8. snapshot-parent Outputs the name of the parent snapshot, if any, for the given snapshot, or for the current snapshot with --current. To use, run: 20.39.9. snapshot-revert Reverts the given domain to the snapshot specified by snapshot, or to the current snapshot with --current. Warning Be aware that this is a destructive action; any changes in the domain since the last snapshot was taken will be lost. Also note that the state of the domain after snapshot-revert is complete will be the state of the domain at the time the original snapshot was taken. To revert the snapshot, run: Normally, reverting to a snapshot leaves the domain in the state it was at the time the snapshot was created, except that a disk snapshot with no guest virtual machine state leaves the domain in an inactive state. Passing either the --running or --paused option will perform additional state changes (such as booting an inactive domain, or pausing a running domain). Since transient domains cannot be inactive, it is required to use one of these flags when reverting to a disk snapshot of a transient domain. There are two cases where a snapshot revert involves extra risk, which requires the use of --force to proceed.
One is the case of a snapshot that lacks full domain information for reverting configuration; since libvirt cannot prove that the current configuration matches what was in use at the time of the snapshot, supplying --force assures libvirt that the snapshot is compatible with the current configuration (and if it is not, the domain will likely fail to run). The other is the case of reverting from a running domain to an active state where a new hypervisor has to be created rather than reusing the existing hypervisor, because it implies drawbacks such as breaking any existing VNC or Spice connections; this condition happens with an active snapshot that uses a provably incompatible configuration, as well as with an inactive snapshot that is combined with the --start or --pause flag. 20.39.10. snapshot-delete The virsh snapshot-delete domain command deletes the snapshot for the specified domain. To do this, run: This command deletes the snapshot for the domain named snapshot, or the current snapshot with --current. If this snapshot has child snapshots, changes from this snapshot will be merged into the children. If the option --children is used, then it will delete this snapshot and any children of this snapshot. If --children-only is used, then it will delete any children of this snapshot, but leave this snapshot intact. These two flags are mutually exclusive. If --metadata is used, it will delete the snapshot's metadata maintained by libvirt, while leaving the snapshot contents intact for access by external tools; otherwise, deleting a snapshot also removes its data contents from that point in time.
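A minimal worked example tying several of these commands together follows; it assumes a guest named guest1 with a single vda disk, and the guest name, snapshot name, description, and destination file path are hypothetical placeholders rather than values taken from this guide:
# Create an external, disk-only snapshot in one atomic step
virsh snapshot-create-as guest1 clean-state "before maintenance" --disk-only --atomic --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/guest1.clean-state.qcow2
# Inspect the snapshot hierarchy, the current snapshot, and its XML
virsh snapshot-list guest1 --tree
virsh snapshot-current guest1 --name
virsh snapshot-dumpxml guest1 clean-state
# Remove only the libvirt metadata, leaving the external file in place for other tools
virsh snapshot-delete guest1 clean-state --metadata
Keep in mind that reverting to or fully deleting external snapshots with virsh may not be supported on Red Hat Enterprise Linux 7 and can require additional manual steps; see Section A.13 referenced above.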
[ "virsh snapshot-create domain XML file [--redefine [--current] [--no-metadata] [--halt] [--disk-only] [--reuse-external] [--quiesce] [--atomic]", "snapshot-create-as domain {[--print-xml] | [--no-metadata] [--halt] [--reuse-external]} [name] [description] [--disk-only [--quiesce]] [--atomic] [[--memspec memspec]] [--diskspec] diskspec]", "<disk name='vda' snapshot='external'> <source file='/path/to,new'/> </disk>", "virsh snapshot-current domain {[--name] | [--security-info] | [snapshotname]}", "virsh snapshot-edit domain [snapshotname] [--current] {[--rename] [--clone]}", "virsh snapshot-dumpxml dom name > snapshot.xml vi snapshot.xml [note - this can be any editor] virsh snapshot-create dom snapshot.xml --redefine [--current]", "snapshot-info domain { snapshot | --current}", "virsh snapshot-list domain [{--parent | --roots | --tree}] [{[--from] snapshot | --current} [--descendants]] [--metadata] [--no-metadata] [--leaves] [--no-leaves] [--inactive] [--active] [--disk-only] [--internal] [--external]", "virsh snapshot-dumpxml domain snapshot [--security-info]", "virsh snapshot-parent domain { snapshot | --current}", "virsh snapshot-revert domain { snapshot | --current} [{--running | --paused}] [--force]", "virsh snapshot-delete domain { snapshot | --current} [--metadata] [{--children | --children-only}]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Managing_snapshots
3.9. Creating a Virtual Machine
3.9. Creating a Virtual Machine This Ruby example creates a virtual machine. This example uses a hash with symbols and nested hashes as their values. Another method, more verbose, is to use the constructors of the corresponding objects directly. See Creating a Virtual Machine Instance with Attributes for more information. # Get the reference to the "vms" service: vms_service = connection.system_service.vms_service # Use the "add" method to create a new virtual machine: vms_service.add( OvirtSDK4::Vm.new( name: 'myvm', cluster: { name: 'mycluster' }, template: { name: 'Blank' } ) ) After creating a virtual machine, it is recommended to poll the virtual machine's status , to ensure that all the disks have been created. For more information, see VmsService:add .
[ "Get the reference to the \"vms\" service: vms_service = connection.system_service.vms_service Use the \"add\" method to create a new virtual machine: vms_service.add( OvirtSDK4::Vm.new( name: 'myvm', cluster: { name: 'mycluster' }, template: { name: 'Blank' } ) )" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/creating_a_virtual_machine
1.4. SELinux States and Modes
1.4. SELinux States and Modes SELinux can run in one of three modes: disabled, permissive, or enforcing. Disabled mode is strongly discouraged; not only does the system avoid enforcing the SELinux policy, it also avoids labeling any persistent objects such as files, making it difficult to enable SELinux in the future. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not recommended for production systems, permissive mode can be helpful for SELinux policy development. Enforcing mode is the default, and recommended, mode of operation; in enforcing mode SELinux operates normally, enforcing the loaded security policy on the entire system. Use the setenforce utility to change between enforcing and permissive mode. Changes made with setenforce do not persist across reboots. To change to enforcing mode, enter the setenforce 1 command as the Linux root user. To change to permissive mode, enter the setenforce 0 command. Use the getenforce utility to view the current SELinux mode: In Red Hat Enterprise Linux, you can set individual domains to permissive mode while the system runs in enforcing mode. For example, to make the httpd_t domain permissive: See Section 11.3.4, "Permissive Domains" for more information. Note Persistent states and modes changes are covered in Section 4.4, "Permanent Changes in SELinux States and Modes" .
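As a brief, hedged extension of the commands shown for this section, you can also review and undo per-domain permissive settings; the httpd_t domain is used here only as an example:
# Show the overall SELinux status, including the loaded policy and current mode
sestatus
# List every domain currently marked as permissive
semanage permissive -l
# Return the httpd_t domain to enforcing by removing its permissive setting
semanage permissive -d httpd_t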
[ "~]# getenforce Enforcing", "~]# setenforce 0 ~]# getenforce Permissive", "~]# setenforce 1 ~]# getenforce Enforcing", "~]# semanage permissive -a httpd_t" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Introduction-SELinux_Modes
Chapter 4. Supported configurations for running JBoss EAP in Microsoft Azure
Chapter 4. Supported configurations for running JBoss EAP in Microsoft Azure This section describes the supported configurations for running JBoss EAP in Microsoft Azure. 4.1. Supported virtual machine operating systems for using JBoss EAP The only virtual machine operating systems supported for using JBoss EAP in Microsoft Azure are: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Microsoft Windows Server 2019 Microsoft Windows Server 2022 The Red Hat Cloud Access program allows you to use a JBoss EAP subscription to install JBoss EAP on your own Azure virtual machine or one of the above On-Demand operating systems from the Microsoft Azure Marketplace. Note that virtual machine operating system subscriptions are separate from a JBoss EAP subscription. Other than the above operating system restrictions, see the Customer Portal for further information on supported configurations for JBoss EAP , such as supported Java Development Kit (JDK) vendors and versions.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_red_hat_jboss_enterprise_application_platform_in_microsoft_azure/supported-configurations-for-running-server-in-microsoft-azure_default
Chapter 119. AclRule schema reference
Chapter 119. AclRule schema reference Used in: KafkaUserAuthorizationSimple Full list of AclRule schema properties Configures access control rules for a KafkaUser when brokers are using simple authorization. Example KafkaUser configuration with simple authorization apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: - resource: type: topic name: "*" patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read Use the resource property to specify the resource that the rule applies to. Simple authorization supports four resource types, which are specified in the type property: Topics ( topic ) Consumer Groups ( group ) Clusters ( cluster ) Transactional IDs ( transactionalId ) For Topic, Group, and Transactional ID resources you can specify the name of the resource the rule applies to in the name property. Cluster type resources have no name. A name is specified as a literal or a prefix using the patternType property. Literal names are taken exactly as they are specified in the name field. Prefix names use the name value as a prefix and then apply the rule to all resources with names starting with that value. When patternType is set as literal , you can set the name to * to indicate that the rule applies to all resources. For more details about simple authorization, ACLs, and supported combinations of resources and operations, see Authorization and ACLs . 119.1. AclRule schema properties Property Property type Description type string (one of [allow, deny]) The type of the rule. Currently the only supported type is allow . ACL rules with type allow are used to allow user to execute the specified operations. Default value is allow . resource AclRuleTopicResource , AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource Indicates the resource for which given ACL rule applies. host string The host from which the action described in the ACL rule is allowed or denied. If not set, it defaults to * , allowing or denying the action from any host. operation string (one of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) The operation property has been deprecated, and should now be configured using spec.authorization.acls[*].operations . Operation which will be allowed or denied. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. operations string (one or more of [Read, Write, Delete, Alter, Describe, All, IdempotentWrite, ClusterAction, Create, AlterConfigs, DescribeConfigs]) array List of operations to allow or deny. Supported operations are: Read, Write, Create, Delete, Alter, Describe, ClusterAction, AlterConfigs, DescribeConfigs, IdempotentWrite and All. Only certain operations work with the specified resource.
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: - resource: type: topic name: \"*\" patternType: literal operations: - Read - Describe - resource: type: group name: my-group patternType: prefix operations: - Read" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-AclRule-reference
function::modname
function::modname Name function::modname - Return the kernel module name loaded at the address Synopsis Arguments addr The address to map to a kernel module name Description Returns the module name associated with the given address if known. If not known it will raise an error. If the address was not in a kernel module, but in the kernel itself, then the string " kernel " will be returned.
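A hedged usage sketch follows; it assumes the systemtap package, matching kernel debuginfo, and a loaded ext4 module, none of which are stated in this reference entry, so treat the probe point as an illustrative placeholder:
# Print the name of the module that owns the probed address; the expected output here is "ext4"
stap -e 'probe module("ext4").function("ext4_file_open") { printf("%s\n", modname(addr())) exit() }'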
[ "modname:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-modname
Chapter 36. Disabling Anonymous Binds
Chapter 36. Disabling Anonymous Binds Accessing domain resources and running client tools always require Kerberos authentication. However, the back end LDAP directory used by the IdM server allows anonymous binds by default. This potentially opens up all of the domain configuration to unauthorized users, including information about users, machines, groups, services, netgroups, and DNS configuration. It is possible to disable anonymous binds on the 389 Directory Server instance by using LDAP tools to reset the nsslapd-allow-anonymous-access attribute. Warning Certain clients rely on anonymous binds to discover IdM settings. Additionally, the compat tree can break for legacy clients that are not using authentication. Change the nsslapd-allow-anonymous-access attribute to rootdse . Important Anonymous access can be completely allowed (on) or completely blocked (off). However, completely blocking anonymous access also blocks external clients from checking the server configuration. LDAP and web clients are not necessarily domain clients, so they connect anonymously to read the root DSE file to get connection information. The rootdse allows access to the root DSE and server configuration without any access to the directory data. Restart the 389 Directory Server instance to load the new setting. Additional Resources: The Managing Entries Using the Command Line section in the Red Hat Directory Server Administration Guide .
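To sanity-check the new setting, a hedged verification is to attempt anonymous operations with an LDAP client; the suffix dc=example,dc=com is a placeholder for your IdM base DN:
# Anonymous read of the root DSE should still succeed, returning only connection information
ldapsearch -x -h server.example.com -p 389 -s base -b "" "(objectclass=*)"
# An anonymous search of directory data should now be rejected or return no entries
ldapsearch -x -h server.example.com -p 389 -b "dc=example,dc=com" "(uid=*)"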
[ "ldapmodify -x -D \"cn=Directory Manager\" -W -h server.example.com -p 389 -ZZ Enter LDAP Password: dn: cn=config changetype: modify replace: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse modifying entry \"cn=config\"", "systemctl restart dirsrv.target" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/disabling-anon-binds
12.3.4. Restarting a Service
12.3.4. Restarting a Service To restart the service, type the following at a shell prompt as root : service service_name restart For example, to restart the httpd service, type:
[ "~]# service httpd restart Stopping httpd: [ OK ] Starting httpd: [ OK ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s3-services-running-restarting
8.247. yum-rhn-plugin
8.247. yum-rhn-plugin 8.247.1. RHBA-2013:1086 - yum-rhn-plugin bug fix update Updated yum-rhn-plugin packages that fix one bug are now available. The yum-rhn-plugin package provides support for connecting to Red Hat Network (RHN). Systems registered with RHN are able to update and install packages from Red Hat Network. Bug Fix BZ#949649 The RHN Proxy did not work properly if separated from a parent by a slow enough network. Consequently, users who attempted to download larger repodata files and RPMs experienced timeouts. This update changes both RHN Proxy and Red Hat Enterprise Linux RHN Client to allow all communications to obey a configured timeout value for connections. Users of yum-rhn-plugin are advised to upgrade to these updated packages, which fix this bug. 8.247.2. RHBA-2013:1703 - yum-rhn-plugin bug fix update Updated yum-rhn-plugin packages that fix one bug are now available for Red Hat Enterprise Linux 6. The yum-rhn-plugin packages allow the Yum package manager to access content from Red Hat Network. Bug Fix BZ# 960524 , BZ# 988895 Prior to this update, an attempt to install an already-installed package led to an empty transaction that was incorrectly identified as an error. Consequently, the yum-rhn-plugin reported a failed installation action. With this update, yum-rhn-plugin has been modified to return a success code for empty transactions. As a result, a successful installation action is now reported when the package is already installed. Users of yum-rhn-plugin are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/yum-rhn-plugin
Chapter 1. Introduction
Chapter 1. Introduction 1.1. About Red Hat JBoss Enterprise Application Platform 7 Red Hat JBoss Enterprise Application Platform 7 (JBoss EAP) is a middleware platform built on open standards that is compatible with the Jakarta Enterprise Edition 8 specification. The 7.4 release of JBoss EAP is a certified, Jakarta EE 8 compatible implementation of both the Web Profile and the Full Platform specifications. JBoss EAP provides two operating modes for server instances. Standalone server The standalone server operating mode represents running JBoss EAP as a single server instance. Managed domain The managed domain operating mode allows for the management of multiple JBoss EAP instances from a single control point. JBoss EAP includes APIs and development frameworks for quickly developing secure and scalable Jakarta EE applications. Many of the APIs and capabilities that are exposed to applications deployed to JBoss EAP servers are organized into subsystems that are configured in the server configuration files. For example, you configure database access information in the datasources subsystem so that it can be accessed by applications deployed to JBoss EAP standalone servers or managed domains. The introduction of new features and deprecation of other features can require modification of the server configurations from one release of JBoss EAP to another. For more information about Red Hat JBoss Enterprise Application Platform, see the Product Documentation for JBoss EAP located on the Red Hat Customer Portal. 1.2. About the JBoss Server Migration Tool Migrating an existing application server configuration from one release to another is a complex task. Planning and executing a successful migration requires not only a complete understanding of the current server configuration, but also knowledge of the features and changes in the target server configuration. With a manual migration, you generally copy and edit several configuration files, and then make the updates needed to keep the same behavior in the target release. If this is not done correctly, the target server does not work as expected. This is often because some functionality is not supported by the target server. The JBoss Server Migration Tool is a Java application that automatically migrates JBoss EAP server configurations with minimal or no interaction required. It is the preferred method to update your JBoss EAP server configuration to include the new features and settings in JBoss EAP 7 while keeping your existing configuration. The JBoss Server Migration Tool reads your existing source server configuration files and adds configurations for any new subsystems, updates the existing subsystem configurations with new features, and removes any obsolete subsystem configurations. The JBoss Server Migration Tool supports the migration of standalone servers and managed domains for the following configurations. Migrating to JBoss EAP 7.4 The JBoss Server Migration Tool ships with JBoss EAP 7.4, so there is no separate download or installation required. This tool supports migration to JBoss EAP 7.4 from the previous major release of the product, which is JBoss EAP 6.4, and from the previous minor release of the product, which is JBoss EAP 7.3. You run the tool by executing the jboss-server-migration script located in the EAP_HOME/bin directory. For more information about how to run the tool, see Running the JBoss Server Migration Tool.
It is recommended that you use this version of the JBoss Server Migration Tool to migrate your server configuration to JBoss EAP 7.4 as this version of the tool is supported . Migrating from WildFly to JBoss EAP If you want to migrate from the WildFly server to JBoss EAP, you must download the latest binary distribution of the JBoss Server Migration Tool from the wildfly-server-migration GitHub repository. This open source, standalone version of the tool supports migration from several versions of the WildFly server to JBoss EAP. For information about how to install and run this version of the tool, see the JBoss Server Migration Tool User Guide . Important The binary distribution of the JBoss Server Migration Tool is not supported. If you are migrating from a release of JBoss EAP, it is recommended that you use this supported version of the tool to migrate your server configuration to JBoss EAP 7.4 instead. 1.3. About the Use of EAP_HOME in this Document In this document, the variable EAP_HOME is used to denote the path to the target server installation. Replace this variable with the actual path to your server installation. Note EAP_HOME is a replaceable variable, not an environment variable. JBOSS_HOME is the environment variable used in scripts. JBoss EAP Installation Path If you installed JBoss EAP using the ZIP install method, the install directory is the jboss-eap-7.4 directory where you extracted the ZIP archive. If you installed JBoss EAP using the RPM install method, the install directory is /opt/rh/eap7/root/usr/share/wildfly/ . If you used the installer to install JBoss EAP, the default path for EAP_HOME is USD{user.home}/EAP-7.4.0 : For Red Hat Enterprise Linux, Solaris, and HP-UX: /home/ USER_NAME /EAP-7.4.0/ For Microsoft Windows: C:\Users\ USER_NAME \EAP-7.4.0\ If you used the JBoss Developer Studio installer to install and configure the JBoss EAP server, the default path for EAP_HOME is USD{user.home}/jbdevstudio/runtimes/jboss-eap : For Red Hat Enterprise Linux: /home/ USER_NAME /jbdevstudio/runtimes/jboss-eap/ For Microsoft Windows: C:\Users\ USER_NAME \jbdevstudio\runtimes\jboss-eap or C:\Documents and Settings\ USER_NAME \jbdevstudio\runtimes\jboss-eap\
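As a hedged illustration of running the tool from the EAP_HOME/bin directory on a Linux host, the sketch below uses example source and target installation paths; the paths are placeholders, and the exact arguments should be confirmed against the Running the JBoss Server Migration Tool documentation for your version:
# List the options the tool accepts
EAP_HOME/bin/jboss-server-migration.sh --help
# Migrate a JBoss EAP 6.4 configuration to the JBoss EAP 7.4 installation that ships the tool
EAP_HOME/bin/jboss-server-migration.sh --source /opt/jboss-eap-6.4 --target /opt/jboss-eap-7.4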
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_the_jboss_server_migration_tool/migration_introduction
Chapter 2. Understanding process management for Ceph
Chapter 2. Understanding process management for Ceph As a storage administrator, you can manipulate the various Ceph daemons by type or instance in a Red Hat Ceph Storage cluster. Manipulating these daemons allows you to start, stop, and restart all of the Ceph services as needed. 2.1. Ceph process management In Red Hat Ceph Storage, all process management is done through the systemd service. Each time you want to start, restart, or stop the Ceph daemons, you must specify the daemon type or the daemon instance. Additional Resources For more information on using systemd, see Managing system services with systemctl. 2.2. Starting, stopping, and restarting all Ceph daemons using the systemctl command You can start, stop, and restart all Ceph daemons as the root user from the host where you want to manage the Ceph daemons. Prerequisites A running Red Hat Ceph Storage cluster. Having root access to the node. Procedure On the host where you want to start, stop, and restart the daemons, run the systemctl command to get the SERVICE_ID of the service. Example Starting all Ceph daemons: Syntax Example Stopping all Ceph daemons: Syntax Example Restarting all Ceph daemons: Syntax Example 2.3. Starting, stopping, and restarting all Ceph services Ceph services are logical groups of Ceph daemons of the same type, configured to run in the same Red Hat Ceph Storage cluster. The orchestration layer in Ceph allows the user to manage these services in a centralized way, making it easy to execute operations that affect all the Ceph daemons that belong to the same logical service. The Ceph daemons running in each host are managed through the systemd service. You can start, stop, and restart all Ceph services from the host where you want to manage the Ceph services. Important If you want to start, stop, or restart a specific Ceph daemon in a specific host, you need to use the systemd service. To obtain a list of the systemd services running in a specific host, connect to the host, and run the following command: Example The output will give you a list of the service names that you can use to manage each Ceph daemon. Prerequisites A running Red Hat Ceph Storage cluster. Having root access to the node. Procedure Log into the Cephadm shell: Example Run the ceph orch ls command to get a list of Ceph services configured in the Red Hat Ceph Storage cluster and to get the specific service ID. Example To start a specific service, run the following command: Syntax Example To stop a specific service, run the following command: Important The ceph orch stop SERVICE_ID command results in the Red Hat Ceph Storage cluster being inaccessible, but only for the MON and MGR services. It is recommended to use the systemctl stop SERVICE_ID command to stop a specific daemon in the host. Syntax Example In the example, the ceph orch stop node-exporter command removes all the daemons of the node exporter service. To restart a specific service, run the following command: Syntax Example 2.4. Viewing log files of Ceph daemons that run in containers Use the journald daemon from the container host to view a log file of a Ceph daemon from a container. Prerequisites Installation of the Red Hat Ceph Storage software. Root-level access to the node. Procedure To view the entire Ceph log file, run a journalctl command as root composed in the following format: Syntax Example In the above example, you can view the entire log for the OSD with ID osd.8. To show only the recent journal entries, use the -f option.
Syntax Example Note You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is an sosreport and how to create one in Red Hat Enterprise Linux? solution on the Red Hat Customer Portal. Additional Resources The journalctl manual page. 2.5. Powering down and rebooting Red Hat Ceph Storage cluster You can power down and reboot the Red Hat Ceph Storage cluster using two different approaches: systemctl commands and the Ceph Orchestrator. You can choose either approach to power down and reboot the cluster. Note When powering down or rebooting a Red Hat Ceph Storage cluster with the Ceph Object gateway multi-site, ensure that no IOs are in progress. Also, power off/on the sites one at a time. 2.5.1. Powering down and rebooting the cluster using the systemctl commands You can use the systemctl commands approach to power down and reboot the Red Hat Ceph Storage cluster. This approach follows the Linux way of stopping the services. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access. Procedure Powering down the Red Hat Ceph Storage cluster Stop the clients from using the Block Device images RADOS Gateway - Ceph Object Gateway on this cluster and any other clients. Log into the Cephadm shell: Example The cluster must be in healthy state ( Health_OK and all PGs active+clean ) before proceeding. Run ceph status on the host with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example If you use the Ceph File System ( CephFS ), bring down the CephFS cluster: Syntax Example Set the noout , norecover , norebalance , nobackfill , nodown , and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example Important The above example is only for stopping the service and each OSD in the OSD node and it needs to be repeated on each OSD node. If the MDS and Ceph Object Gateway nodes are on their own dedicated nodes, power them off. Get the systemd target of the daemons: Example Disable the target that includes the cluster FSID: Example Stop the target: Example This stops all the daemons on the host that needs to be stopped. Shutdown the node: Example Repeat the above steps for all the nodes of the cluster. Rebooting the Red Hat Ceph Storage cluster If network equipment was involved, ensure it is powered ON and stable prior to powering ON any Ceph hosts or nodes. Power ON the administration node. Enable the systemd target to get all the daemons running: Example Start the systemd target: Example Wait for all the nodes to come up. Verify all the services are up and there are no connectivity issues between the nodes. Unset the noout , norecover , norebalance , nobackfill , nodown and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example If you use the Ceph File System ( CephFS ), bring the CephFS cluster back up by setting the joinable flag to true : Syntax Example Verification Verify the cluster is in healthy state ( Health_OK and all PGs active+clean ). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example Additional Resources For more information on installing Ceph, see the Red Hat Ceph Storage Installation Guide . 2.5.2. 
Powering down and rebooting the cluster using the Ceph Orchestrator You can also use the capabilities of the Ceph Orchestrator to power down and reboot the Red Hat Ceph Storage cluster. In most cases, it is a single system login that can help in powering off the cluster. The Ceph Orchestrator supports several operations, such as start , stop , and restart . You can use these commands with systemctl , for some cases, in powering down or rebooting the cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Powering down the Red Hat Ceph Storage cluster Stop the clients from using the user Block Device Image and Ceph Object Gateway on this cluster and any other clients. Log into the Cephadm shell: Example The cluster must be in healthy state ( Health_OK and all PGs active+clean ) before proceeding. Run ceph status on the host with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example If you use the Ceph File System ( CephFS ), bring down the CephFS cluster: Syntax Example Set the noout , norecover , norebalance , nobackfill , nodown , and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example Stop the MDS service. Fetch the MDS service name: Example Stop the MDS service using the fetched name in the step: Syntax Stop the Ceph Object Gateway services. Repeat for each deployed service. Fetch the Ceph Object Gateway service names: Example Stop the Ceph Object Gateway service using the fetched name: Syntax Stop the Alertmanager service: Example Stop the node-exporter service which is a part of the monitoring stack: Example Stop the Prometheus service: Example Stop the Grafana dashboard service: Example Stop the crash service: Example Shut down the OSD nodes from the cephadm node, one by one. Repeat this step for all the OSDs in the cluster. Fetch the OSD ID: Example Shut down the OSD node using the OSD ID you fetched: Example Stop the monitors one by one. Identify the hosts hosting the monitors: Example On each host, stop the monitor. Identify the systemctl unit name: Example Stop the service: Syntax Shut down all the hosts. Rebooting the Red Hat Ceph Storage cluster If network equipment was involved, ensure it is powered ON and stable prior to powering ON any Ceph hosts or nodes. Power ON all the Ceph hosts. Log into the administration node from the Cephadm shell: Example Verify all the services are in running state: Example Ensure the cluster health is `Health_OK`status: Example Unset the noout , norecover , norebalance , nobackfill , nodown and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node: Example If you use the Ceph File System ( CephFS ), bring the CephFS cluster back up by setting the joinable flag to true : Syntax Example Verification Verify the cluster is in healthy state ( Health_OK and all PGs active+clean ). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy. Example Additional Resources For more information on installing Ceph see the Red Hat Ceph Storage Installation Guide
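As a hedged example of addressing a single daemon directly with systemd, as recommended above for stopping individual daemons, the unit name combines the cluster FSID with the daemon type and ID; the FSID below is the example value used earlier in this chapter and the OSD ID is a placeholder, so substitute the values reported by ceph fsid and systemctl list-units on your host:
# Find the exact unit name for one OSD on this host
systemctl list-units 'ceph-*@osd.8.service'
# Restart just that daemon without touching the rest of the cluster
systemctl restart ceph-0b007564-ec48-11ee-b736-525400fd02f8@osd.8.service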
[ "systemctl --type=service [email protected]", "systemctl start SERVICE_ID", "systemctl start [email protected]", "systemctl stop SERVICE_ID", "systemctl stop [email protected]", "systemctl restart SERVICE_ID", "systemctl restart [email protected]", "systemctl list-units \"ceph*\"", "cephadm shell", "ceph orch ls NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID alertmanager 1/1 4m ago 4M count:1 registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5 b7bae610cd46 crash 3/3 4m ago 4M * registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 grafana 1/1 4m ago 4M count:1 registry.redhat.io/rhceph-alpha/rhceph-6-dashboard-rhel9:latest bd3d7748747b mgr 2/2 4m ago 4M count:2 registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 mon 2/2 4m ago 10w count:2 registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 nfs.foo 0/1 - - count:1 <unknown> <unknown> node-exporter 1/3 4m ago 4M * registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5 mix osd.all-available-devices 5/5 4m ago 3M * registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510 prometheus 1/1 4m ago 4M count:1 registry.redhat.io/openshift4/ose-prometheus:v4.6 bebb0ddef7f0 rgw.test_realm.test_zone 2/2 4m ago 3M count:2 registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest c88a5d60f510", "ceph orch start SERVICE_ID", "ceph orch start node-exporter", "ceph orch stop SERVICE_ID", "ceph orch stop node-exporter", "ceph orch restart SERVICE_ID", "ceph orch restart node-exporter", "journalctl -u ceph SERVICE_ID", "journalctl -u [email protected]", "journalctl -fu SERVICE_ID", "journalctl -fu [email protected]", "cephadm shell", "ceph -s", "ceph fs set FS_NAME max_mds 1 ceph fs fail FS_NAME ceph status ceph fs set FS_NAME joinable false", "ceph fs set cephfs max_mds 1 ceph fs fail cephfs ceph status ceph fs set cephfs joinable false", "ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause", "systemctl list-units --type target | grep ceph ceph-0b007564-ec48-11ee-b736-525400fd02f8.target loaded active active Ceph cluster 0b007564-ec48-11ee-b736-525400fd02f8 ceph.target loaded active active All Ceph clusters and services", "systemctl disable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target Removed \"/etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target\". Removed \"/etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target\".", "systemctl stop ceph-0b007564-ec48-11ee-b736-525400fd02f8.target", "shutdown Shutdown scheduled for Wed 2024-03-27 11:47:19 EDT, use 'shutdown -c' to cancel.", "systemctl enable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target Created symlink /etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target. 
Created symlink /etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target.", "systemctl start ceph-0b007564-ec48-11ee-b736-525400fd02f8.target", "ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true", "ceph -s", "cephadm shell", "ceph -s", "ceph fs set FS_NAME max_mds 1 ceph fs fail FS_NAME ceph status ceph fs set FS_NAME joinable false ceph mds fail FS_NAME : N", "ceph fs set cephfs max_mds 1 ceph fs fail cephfs ceph status ceph fs set cephfs joinable false ceph mds fail cephfs:1", "ceph osd set noout ceph osd set norecover ceph osd set norebalance ceph osd set nobackfill ceph osd set nodown ceph osd set pause", "ceph orch ls --service-type mds", "ceph orch stop SERVICE-NAME", "ceph orch ls --service-type rgw", "ceph orch stop SERVICE-NAME", "ceph orch stop alertmanager", "ceph orch stop node-exporter", "ceph orch stop prometheus", "ceph orch stop grafana", "ceph orch stop crash", "ceph orch ps --daemon-type=osd", "ceph orch daemon stop osd.1 Scheduled to stop osd.1 on host 'host02'", "ceph orch ps --daemon-type mon", "systemctl list-units ceph-* | grep mon", "systemct stop SERVICE-NAME", "cephadm shell", "ceph orch ls", "ceph -s", "ceph osd unset noout ceph osd unset norecover ceph osd unset norebalance ceph osd unset nobackfill ceph osd unset nodown ceph osd unset pause", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true", "ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/administration_guide/understanding-process-management-for-ceph
Chapter 2. Red Hat Enterprise Linux AI product architecture
Chapter 2. Red Hat Enterprise Linux AI product architecture Red Hat Enterprise Linux AI contains various distinct features and consists of the following components. 2.1. Bootable Red Hat Enterprise Linux with InstructLab You can install RHEL AI and deploy the InstructLab tooling using a bootable RHEL container image provided by Red Hat. The current supported installation methods for this image are on Amazon Web Services (AWS), IBM Cloud, and bare-metal machines with NVIDIA GPUs. This RHEL AI image includes InstructLab, RHEL 9.4, and various inference and training software, including vLLM and DeepSpeed. After you boot this image, you can download various Granite models developed by Red Hat and IBM to serve or train. The image and all the tools are compiled to specific Independent Software Vendor (ISV) hardware. For more information about the architecture of the image, see Installation overview. Important RHEL AI currently only includes bootable images for NVIDIA accelerators. 2.1.1. InstructLab model alignment The Red Hat Enterprise Linux AI bootable image contains InstructLab and its tooling. InstructLab uses a novel approach to LLM fine-tuning called LAB (Large-Scale Alignment for ChatBots). The LAB method uses a taxonomy-based system that implements high-quality synthetic data generation (SDG) and multi-phase training. Using the RHEL AI command line interface (CLI), which is built from the InstructLab CLI, you can create your own custom LLM by tuning a Granite base model on synthetic data generated from your own domain-specific knowledge. For general availability, the RHEL AI LLM customization workflow consists of the following steps: Installing and initializing RHEL AI on your preferred platform. Using a CLI and Git workflow for adding skills and knowledge to your taxonomy tree. Running synthetic data generation (SDG) using the mixtral-8x7B-Instruct teacher model. SDG can generate hundreds or thousands of synthetic question-and-answer pairs for model tuning based on specific user-provided samples. Using InstructLab to train the base model with the new synthetically generated data. The prometheus-8x7B-V2.0 judge model evaluates the performance of the newly trained model. Using InstructLab with vLLM to serve the new custom model for inferencing. A brief command-line sketch of these steps is provided at the end of this chapter. 2.1.2. Open source licensed Granite models With RHEL AI, you can download the open source licensed IBM Granite family of LLMs. Using the granite-7b-starter model as a base, you can create your own model using knowledge data. You can keep these custom LLMs private or you can share them with the AI community. Red Hat Enterprise Linux AI also allows you to serve and chat with Granite models created and fine-tuned by Red Hat and IBM.
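A hedged command-line sketch of the LLM customization workflow described in this chapter follows; the exact ilab subcommands and defaults can vary between RHEL AI releases, so confirm them with ilab --help, and no model or taxonomy paths from this document are implied:
# Initialize the InstructLab configuration and download the starter models
ilab config init
ilab model download
# Review taxonomy changes, then run synthetic data generation
ilab taxonomy diff
ilab data generate
# Train and evaluate the customized model
ilab model train
ilab model evaluate
# Serve the resulting model with vLLM and chat with it
ilab model serve
ilab model chat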
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/getting_started/product_architecture_rhelai
Appendix C. Journaler configuration reference
Appendix C. Journaler configuration reference Reference of the list commands that can be used for journaler configuration. journaler_write_head_interval Description How frequently to update the journal head object. Type Integer Required No Default 15 journaler_prefetch_periods Description How many stripe periods to read ahead on journal replay. Type Integer Required No Default 10 journal_prezero_periods Description How many stripe periods to zero ahead of write position. Type Integer Required No Default 10 journaler_batch_interval Description Maximum additional latency in seconds to incur artificially. Type Double Required No Default .001 journaler_batch_max Description Maximum bytes that will be delayed flushing. Type 64-bit Unsigned Integer Required No Default 0
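As a hedged example of changing one of these options from its default (the target section and value are illustrative, and you should confirm which component consumes a given journaler option in your deployment before changing it):
# Raise the journal replay read-ahead window for the MDS
ceph config set mds journaler_prefetch_periods 12
# Confirm the value stored in the monitor configuration database
ceph config get mds journaler_prefetch_periods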
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/file_system_guide/journaler-configuration-reference_fs
Chapter 1. OpenShift Container Platform installation overview
Chapter 1. OpenShift Container Platform installation overview 1.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 1.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 1.1. OpenShift Container Platform installation targets and dependencies 1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.13 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 1.1.3. Glossary of common terms for OpenShift Container Platform installing The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process. Assisted Installer An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster. Agent-based Installer An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments. Bootstrap node A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane. Control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines. Compute node Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes. Disconnected installation In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment. The OpenShift Container Platform installation program A program that provisions the infrastructure and deploys a cluster. Installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. Ignition config files A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes. Kubernetes manifests Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on. Kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Load balancers A load balancer serves as the single point of contact for clients. 
Load balancers for the API distribute incoming traffic across control plane nodes. Machine Config Operator An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster. Operators The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers. User-provisioned infrastructure You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 1.1.4. Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.13, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. 
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. The installation program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify the Kubernetes manifests and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes manifests and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure.
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: USD oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.13, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) VPC Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud (VMC) on AWS VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. 
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.13, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power IBM Z or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources.
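A minimal sketch of the installer-provisioned workflow described in this overview, assuming that you have already downloaded the openshift-install binary and a pull secret for your account; the directory name is a placeholder and the program prompts you for the remaining platform-specific values:
$ openshift-install create install-config --dir <installation_directory>              # interactively writes install-config.yaml
$ openshift-install create cluster --dir <installation_directory> --log-level=info    # consumes install-config.yaml and provisions the cluster
For user-provisioned infrastructure, you would instead generate the intermediate assets, for example openshift-install create manifests --dir <installation_directory> followed by openshift-install create ignition-configs --dir <installation_directory> , and then provision the machines yourself. Because the program prunes install-config.yaml when it runs, back up the file first if you want to reuse it.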
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installation_overview/ocp-installation-overview
Common object reference
Common object reference OpenShift Container Platform 4.16 Reference guide common API objects Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/common_object_reference/index
Chapter 3. Introduction to Red Hat Virtualization Products and Features
Chapter 3. Introduction to Red Hat Virtualization Products and Features This chapter introduces the main virtualization products and features available in Red Hat Enterprise Linux 7. 3.1. KVM and Virtualization in Red Hat Enterprise Linux KVM (Kernel-based Virtual Machine) is a full virtualization solution for Linux on a variety of architectures. It is built into the standard Red Hat Enterprise Linux 7 kernel and integrated with the Quick Emulator (QEMU), and it can run multiple guest operating systems. The KVM hypervisor in Red Hat Enterprise Linux is managed with the libvirt API, and tools built for libvirt (such as virt-manager and virsh ). Virtual machines run as multi-threaded Linux processes, controlled by these tools. Warning QEMU and libvirt also support a dynamic translation mode using the QEMU Tiny Code Generator (TCG), which does not require hardware virtualization support. This configuration is not supported by Red Hat. For more information about this limitation, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Figure 3.1. KVM architecture Virtualization features supported by KVM on Red Hat Enterprise Linux 7 include the following: Overcommitting The KVM hypervisor supports overcommitting of system resources. Overcommitting means allocating more virtualized CPUs or memory than the available resources on the system, so the resources can be dynamically swapped when required by one guest and not used by another. This can improve how efficiently guests use the resources of the host, and can make it possible for the user to require fewer hosts. Important Overcommitting involves possible risks to system stability. For more information on overcommitting with KVM, and the precautions that should be taken, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . KSM Kernel Same-page Merging (KSM) , used by the KVM hypervisor, enables KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. Note For more information on KSM, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . QEMU guest agent The QEMU guest agent runs on the guest operating system and makes it possible for the host machine to issue commands to the guest operating system. Note For more information on the QEMU guest agent, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Disk I/O throttling When several virtual machines are running simultaneously, they can interfere with the overall system performance by using excessive disk I/O. Disk I/O throttling in KVM provides the ability to set a limit on disk I/O requests sent from individual virtual machines to the host machine. This can prevent a virtual machine from over-utilizing shared resources, and impacting the performance of other virtual machines. Note For instructions on using disk I/O throttling, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . Automatic NUMA balancing Automatic non-uniform memory access (NUMA) balancing moves tasks, which can be threads or processes, closer to the memory they are accessing. This improves the performance of applications running on NUMA hardware systems, without any manual tuning required for Red Hat Enterprise Linux 7 guests.
Note For more information on automatic NUMA balancing, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . Virtual CPU hot add Virtual CPU (vCPU) hot add capability provides the ability to increase processing power on running virtual machines as needed, without shutting down the guests. The vCPUs assigned to a virtual machine can be added to a running guest either to meet the workload's demands or to maintain the Service Level Agreement (SLA) associated with the workload. Note For more information on virtual CPU hot add, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . Nested virtualization As a Technology Preview, Red Hat Enterprise Linux 7.2 and later offers hardware-assisted nested virtualization. This feature enables KVM guests to act as hypervisors and create their own guests. It can be used, for example, to debug hypervisors on a virtual machine or to test larger virtual deployments on a limited number of physical machines. Note For further information on setting up and using nested virtualization, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . KVM guest virtual machine compatibility Red Hat Enterprise Linux 7 servers have certain support limits. The following URLs explain the processor and memory limitations for Red Hat Enterprise Linux: For the host system: https://access.redhat.com/site/articles/rhel-limits For the KVM hypervisor: https://access.redhat.com/site/articles/rhel-kvm-limits For a complete chart of supported operating systems and host and guest combinations, see the Red Hat Customer Portal . Note To verify whether your processor supports virtualization extensions and for information on enabling virtualization extensions if they are disabled, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide .
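A quick sketch of the processor check mentioned in the note above, assuming shell access on a Red Hat Enterprise Linux 7 host; the exact output depends on your hardware, and an empty result from the first command means the extensions are missing or disabled in firmware:
$ grep -E 'vmx|svm' /proc/cpuinfo   # Intel VT-x reports the vmx flag, AMD-V reports svm
$ lsmod | grep kvm                  # confirms that the kvm module and kvm_intel or kvm_amd are loaded
$ virsh version                     # requires the libvirt client tools; verifies that libvirt can reach the hypervisor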
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/chap-virtualization_getting_started-products
Managing content
Managing content Red Hat Satellite 6.15 Import content from Red Hat and custom sources, manage application lifecycles across lifecycle environments, filter content by using Content Views, synchronize content between Satellite Servers, and more Red Hat Satellite Documentation Team [email protected]
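A minimal sketch of the core workflow this guide walks through, using the hammer command forms shown later in the guide; the organization, product, repository, content view names, and the repository URL are placeholders:
hammer product create --name "My_Product" --organization "My_Organization"
hammer repository create --content-type "yum" --name "My_Repository" --product "My_Product" --organization "My_Organization" --url "https://repo.example.com/"
hammer repository synchronize --name "My_Repository" --product "My_Product" --organization "My_Organization"
hammer content-view create --name "My_Content_View" --organization "My_Organization" --repository-ids 1,2
hammer content-view publish --name "My_Content_View" --organization "My_Organization"
hammer content-view version promote --content-view "My_Content_View" --version 1 --to-lifecycle-environment "Development" --organization "My_Organization"
Each of these steps is covered in detail in the chapters that follow.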
[ "scp ~/ manifest_file .zip root@ satellite.example.com :~/.", "hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"", "hammer subscription list --organization-id My_Organization_ID", "hammer host subscription attach --host My_Host_Name --subscription-id My_Subscription_ID", "tree -d -L 11 └── content 1 ├── beta 2 │ └── rhel 3 │ └── server 4 │ └── 7 5 │ └── x86_64 6 │ └── sat-tools 7 └── dist └── rhel └── server └── 7 ├── 7.2 │ └── x86_64 │ └── kickstart └── 7Server └── x86_64 └── os", "hammer alternate-content-source create --alternate-content-source-type custom --base-url \" https://local-repo.example.com:port \" --name \" My_ACS_Name \" --smart-proxy-ids My_Capsule_ID", "hammer alternate-content-source list", "hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID", "hammer alternate-content-source update --id My_Alternate_Content_Source_ID --smart-proxy-ids My_Capsule_ID", "hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID", "hammer alternate-content-source create --alternate-content-source-type simplified --name My_ACS_Name --product-ids My_Product_ID --smart-proxy-ids My_Capsule_ID", "hammer alternate-content-source list", "hammer alternate-content-source refresh --id My_ACS_ID", "rhui-manager repo info --repo_id My_Repo_ID", "hammer alternate-content-source create --alternate-content-source-type rhui --base-url \" https://rhui-cds-node/pulp/content \" --name \" My_ACS_Name \" --smart-proxy-ids My_Capsule_ID --ssl-client-cert-id My_SSL_Client_Certificate_ID --ssl-client-key-id My_SSL_Client_Key_ID --subpaths path/to/repo/1/,path/to/repo/2/ --verify-ssl 1", "hammer alternate-content-source list", "hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID", "hammer alternate-content-source update --id My_Alternate_Content_Source_ID --smart-proxy-ids My_Capsule_ID", "hammer alternate-content-source refresh --id My_Alternate_Content_Source_ID", "scp My_SSL_Certificate [email protected]:~/.", "wget -P ~ http:// upstream-satellite.example.com /pub/katello-server-ca.crt", "hammer content-credential create --content-type cert --name \" My_SSL_Certificate \" --organization \" My_Organization \" --path ~/ My_SSL_Certificate", "hammer product create --name \" My_Product \" --sync-plan \" Example Plan \" --description \" Content from My Repositories \" --organization \" My_Organization \"", "hammer repository create --arch \" My_Architecture \" --content-type \"yum\" --gpg-key-id My_GPG_Key_ID --name \" My_Repository \" --organization \" My_Organization \" --os-version \" My_OS_Version \" --product \" My_Product \" --publish-via-http true --url My_Upstream_URL", "hammer product list --organization \" My_Organization \"", "hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer product synchronize --name \" My_Product \" --organization \" My_Organization \"", "hammer repository synchronize --name \" My_Repository \" --organization \" My_Organization \" --product \" My Product \"", "ORG=\" My_Organization \" for i in USD(hammer --no-headers --csv repository list --organization USDORG --fields Id) do hammer repository synchronize --id USD{i} --organization USDORG --async done", "hammer settings set --name 
default_redhat_download_policy --value immediate", "hammer settings set --name default_download_policy --value immediate", "hammer repository list --organization-label My_Organization_Label", "hammer repository update --download-policy immediate --name \" My_Repository \" --organization-label My_Organization_Label --product \" My_Product \"", "hammer repository list --organization-label My_Organization_Label", "hammer repository update --id 1 --mirroring-policy mirror_complete", "hammer repository upload-content --id My_Repository_ID --path /path/to/example-package.rpm", "hammer repository upload-content --content-type srpm --id My_Repository_ID --path /path/to/example-package.src.rpm", "semanage port -l | grep ^http_port_t http_port_t tcp 80, 81, 443, 488, 8008, 8009, 8443, 9000", "semanage port -a -t http_port_t -p tcp 10011", "hammer repository list --organization \" My_Organization \"", "hammer repository synchronize --id My_ID", "hammer repository synchronize --id My_ID --skip-metadata-check true", "hammer repository synchronize --id My_ID --validate-contents true", "hammer http-proxy create --name proxy-name --url proxy-URL:port-number", "hammer repository update --http-proxy-policy HTTP_Proxy_Policy --id Repository_ID", "hammer sync-plan create --description \" My_Description \" --enabled true --interval daily --name \" My_Products \" --organization \" My_Organization \" --sync-date \"2023-01-01 01:00:00\"", "hammer sync-plan list --organization \" My_Organization \"", "hammer product set-sync-plan --name \" My_Product_Name \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan_Name \"", "ORG=\" My_Organization \" SYNC_PLAN=\"daily_sync_at_3_a.m\" hammer sync-plan create --name USDSYNC_PLAN --interval daily --sync-date \"2023-04-5 03:00:00\" --enabled true --organization USDORG for i in USD(hammer --no-headers --csv --csv-separator=\"|\" product list --organization USDORG --per-page 999 | grep -vi not_synced | awk -F'|' 'USD5 != \"0\" { print USD1}') do hammer product set-sync-plan --sync-plan USDSYNC_PLAN --organization USDORG --id USDi done", "hammer product list --organization USDORG --sync-plan USDSYNC_PLAN", "hammer repository update --download-concurrency 5 --id Repository_ID --organization \" My_Organization \"", "wget http://www.example.com/9.5/example-9.5-2.noarch.rpm", "rpm2cpio example-9.5-2.noarch.rpm | cpio -idmv", "-----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFy/HE4BEADttv2TCPzVrre+aJ9f5QsR6oWZMm7N5Lwxjm5x5zA9BLiPPGFN 4aTUR/g+K1S0aqCU+ZS3Rnxb+6fnBxD+COH9kMqXHi3M5UNzbp5WhCdUpISXjjpU XIFFWBPuBfyr/FKRknFH15P+9kLZLxCpVZZLsweLWCuw+JKCMmnA =F6VG -----END PGP PUBLIC KEY BLOCK----- -----BEGIN PGP PUBLIC KEY BLOCK----- mQINBFw467UBEACmREzDeK/kuScCmfJfHJa0Wgh/2fbJLLt3KSvsgDhORIptf+PP OTFDlKuLkJx99ZYG5xMnBG47C7ByoMec1j94YeXczuBbynOyyPlvduma/zf8oB9e Wl5GnzcLGAnUSRamfqGUWcyMMinHHIKIc1X1P4I= =WPpI -----END PGP PUBLIC KEY BLOCK-----", "scp ~/etc/pki/rpm-gpg/RPM-GPG-KEY- EXAMPLE-95 [email protected]:~/.", "hammer content-credentials create --content-type gpg_key --name \" My_GPG_Key \" --organization \" My_Organization \" --path ~/RPM-GPG-KEY- EXAMPLE-95", "hammer lifecycle-environment create --name \" Environment Path Name \" --description \" Environment Path Description \" --prior \"Library\" --organization \" My_Organization \"", "hammer lifecycle-environment create --name \" Environment Name \" --description \" Environment Description \" --prior \" Prior Environment Name \" --organization \" My_Organization \"", "hammer lifecycle-environment paths --organization \" 
My_Organization \"", "hammer capsule list", "hammer capsule info --id My_capsule_ID", "hammer capsule content available-lifecycle-environments --id My_capsule_ID", "hammer capsule content add-lifecycle-environment --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID --organization \" My_Organization \"", "hammer capsule content synchronize --id My_capsule_ID", "hammer capsule content synchronize --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID", "hammer capsule content synchronize --id My_capsule_ID --skip-metadata-check true", "hammer lifecycle-environment list --organization \" My_Organization \"", "hammer lifecycle-environment delete --name \" My_Environment \" --organization \" My_Organization \"", "hammer capsule list", "hammer capsule info --id My_Capsule_ID", "hammer capsule content lifecycle-environments --id My_Capsule_ID", "hammer capsule content remove-lifecycle-environment --id My_Capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID", "hammer capsule content synchronize --id My_Capsule_ID", "hammer repository list --organization \" My_Organization \"", "hammer content-view create --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \" --repository-ids 1,2", "hammer content-view publish --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \"", "hammer content-view add-repository --name \" My_Content_View \" --organization \" My_Organization \" --repository-id repository_ID", "hammer content-view copy --name My_original_CV_name --new-name My_new_CV_name", "hammer content-view copy --id=5 --new-name=\"mixed_copy\" Content view copied.", "hammer organization list", "hammer module-stream list --organization-id My_Organization_ID", "hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"", "ORG=\" My_Organization \" CVV_ID= My_Content_View_Version_ID for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done", "hammer content-view version info --id My_Content_View_Version_ID", "hammer content-view version list --organization \" My_Organization \"", "hammer content-view create --composite --auto-publish yes --name \" Example_Composite_Content_View \" --description \"Example composite content view\" --organization \" My_Organization \"", "hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --latest --organization \" My_Organization \"", "hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --component-content-view-version-id Content_View_Version_ID --organization \" My_Organization \"", "hammer content-view publish --name \" Example_Composite_Content_View \" --description \"Initial version of composite content view\" --organization \" My_Organization \"", "hammer content-view version 
promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"", "hammer content-view filter create --name \" Errata Filter \" --type erratum --content-view \" Example_Content_View \" --description \" My latest filter \" --inclusion false --organization \" My_Organization \"", "hammer content-view filter rule create --content-view \" Example_Content_View \" --content-view-filter \" Errata Filter \" --start-date \" YYYY-MM-DD \" --types enhancement,bugfix --date-type updated --organization \" My_Organization \"", "hammer content-view publish --name \" Example_Content_View \" --description \"Adding errata filter\" --organization \" My_Organization \"", "hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"", "curl http://satellite.example.com/pub/katello-server-ca.crt", "hammer content-export complete library --organization=\" My_Organization \"", "ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/1.0/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 03:35 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 
1 pulp pulp 443 Mar 2 03:35 metadata.json", "hammer content-export complete library --chunk-size-gb=2 --organization=\" My_Organization \" Generated /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/metadata.json ls -lh /var/lib/pulp/exports/ My_Organization /Export-Library/2.0/2021-03-02T04-01-25-00-00/", "hammer content-export complete library --organization=\" My_Organization \" --format=syncable", "du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00", "hammer content-import library --organization=\" My_Organization \" --path=\" My_Path_To_Syncable_Export \"", "hammer content-export incremental library --organization=\" My_Organization \"", "find /var/lib/pulp/exports/ My_Organization /Export-Library/", "hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \" ---|----------|---------|-------------|----------------------- ID | NAME | VERSION | DESCRIPTION | LIFECYCLE ENVIRONMENTS ---|----------|---------|-------------|----------------------- 5 | view 3.0 | 3.0 | | Library 4 | view 2.0 | 2.0 | | 3 | view 1.0 | 1.0 | | ---|----------|---------|-------------|----------------------", "hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \"", "ls -lh /var/lib/pulp/exports/ My_Organization / Content_View_Name /1.0/2021-02-25T18-59-26-00-00/", "hammer content-export complete version --chunk-size-gb=2 --content-view=\" Content_View_Name \" --organization=\" My_Organization \" --version=1.0 ls -lh /var/lib/pulp/exports/ My_Organization /view/1.0/2021-02-25T21-15-22-00-00/", "hammer content-view version list --content-view=\" My_Content_View \" --organization=\" My_Organization \"", "hammer content-export complete version --content-view=\" Content_View_Name \" --version=1.0 --organization=\" My_Organization \" --format=syncable", "ls -lh /var/lib/pulp/exports/ My_Organization / My_Content_View_Name /1.0/2021-02-25T18-59-26-00-00/", "hammer content-export incremental version --content-view=\" My_Content_View \" --organization=\" My_Organization \" --version=\" My_Content_View_Version \"", "find /var/lib/pulp/exports/ My_Organization / My_Exported_Content_View / My_Content_View_Version /", "hammer content-export complete repository --name=\" My_Repository \" --product=\" My_Product \" --organization=\" My_Organization \"", "ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2022-09-02T03-35-24-00-00/", "hammer content-export complete repository --organization=\" My_Organization \" --product=\" My_Product \" --name=\" My_Repository \" --format=syncable", "du -sh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /1.0/2021-03-02T03-35-24-00-00", "hammer content-export incremental repository --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"", "ls -lh /var/lib/pulp/exports/ My_Organization /Export- My_Repository /3.0/2021-03-02T03-35-24-00-00/ total 172K -rw-r--r--. 1 pulp pulp 20M Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:22 export-436882d8-de5a-48e9-a30a-17169318f908-20210302_0422-toc.json -rw-r--r--. 
1 root root 492 Mar 2 04:22 metadata.json", "hammer content-export incremental repository --format=syncable --name=\" My_Repository \" --organization=\" My_Organization \" --product=\" My_Product \"", "find /var/lib/pulp/exports/Default_Organization/ My_Product /2.0/2023-03-09T10-55-48-05-00/ -name \"*.rpm\"", "hammer content-export complete library --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0", "hammer content-export complete version --content-view=\" Content_View_Name \" --destination-server= My_Downstream_Server_1 --organization=\" My_Organization \" --version=1.0", "hammer content-export list --organization=\" My_Organization \"", "chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00", "ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 1 pulp pulp 443 Mar 2 04:29 metadata.json", "hammer content-import library --organization=\" My_Organization \" --path=/var/lib/pulp/imports/2021-03-02T03-35-24-00-00", "hammer content-import library --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/", "chown -R pulp:pulp /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/", "ls -lh /var/lib/pulp/imports/2021-02-25T21-15-22-00-00/", "hammer content-import version --organization= My_Organization --path=/var/lib/pulp/imports/2021-02-25T21-15-22-00-00/", "hammer content-view version list --organization-id= My_Organization_ID", "hammer content-import version --organization= My_Organization --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/", "chown -R pulp:pulp /var/lib/pulp/imports/2021-03-02T03-35-24-00-00", "ls -lh /var/lib/pulp/imports/2021-03-02T03-35-24-00-00 total 68M -rw-r--r--. 1 pulp pulp 68M Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335.tar.gz -rw-r--r--. 1 pulp pulp 333 Mar 2 04:29 export-1e25417c-6d09-49d4-b9a5-23df4db3d52a-20210302_0335-toc.json -rw-r--r--. 
1 pulp pulp 443 Mar 2 04:29 metadata.json", "hammer content-import repository --organization=\" My_Organization \" --path=/var/lib/pulp/imports/ 2021-03-02T03-35-24-00-00", "hammer content-import repository --organization=\" My_Organization \" --path=http:// server.example.com /pub/exports/2021-02-25T21-15-22-00-00/", "hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name \" My_Activation_Key \" --value \" name_of_first_key \", \" name_of_second_key \",", "hammer activation-key create --name \" My_Activation_Key \" --unlimited-hosts --description \" Example Stack in the Development Environment \" --lifecycle-environment \" Development \" --content-view \" Stack \" --organization \" My_Organization \"", "hammer activation-key update --organization \" My_Organization \" --name \" My_Activation_Key \" --service-level \" Standard \" --purpose-usage \" Development/Test \" --purpose-role \" Red Hat Enterprise Linux Server \" --purpose-addons \" addons \"", "hammer subscription list --organization \" My_Organization \"", "hammer activation-key add-subscription --name \" My_Activation_Key \" --subscription-id My_Subscription_ID --organization \" My_Organization \"", "hammer activation-key product-content --content-access-mode-all true --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key product-content --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key content-override --name \" My_Activation_Key \" --content-label rhel-7-server-satellite-client-6-rpms --value 1 --organization \" My_Organization \"", "hammer activation-key subscriptions --name My_Activation_Key --organization \" My_Organization \"", "hammer activation-key remove-subscription --name \" My_Activation_Key \" --subscription-id ff808181533518d50152354246e901aa --organization \" My_Organization \"", "hammer activation-key add-subscription --name \" My_Activation_Key \" --subscription-id ff808181533518d50152354246e901aa --organization \" My_Organization \"", "hammer activation-key product-content --name \" My_Activation_Key \" --organization \" My_Organization \"", "hammer activation-key content-override --name \" My_Activation_Key \" --content-label content_label --value 1 --organization \" My_Organization \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \"", "hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'", "curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'", "subscription-manager register --activationkey=\"ak-VDC,ak-OpenShift\" --org=\" My_Organization \"", "hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --auto-attach true", "hammer activation-key update --name \" My_Activation_Key \" --organization \" My_Organization \" --service-level premium", "parameter operator value", "type = security and package_name = kernel", "hammer erratum list", "hammer erratum info --id erratum_ID", "hammer erratum list --product-id 7 --search \"bug = 1213000 or bug = 
1207972\" --errata-restrict-applicable 1 --order \"type desc\"", "hammer content-view filter create --content-view \" My_Content_View \" --description \"Exclude errata items from the YYYY-MM-DD \" --name \" My_Filter_Name \" --organization \" My_Organization \" --type \"erratum\"", "hammer content-view filter rule create --content-view \" My_Content_View \" --content-view-filter=\" My_Content_View_Filter \" --organization \" My_Organization \" --start-date \" YYYY-MM-DD \" --types=security,enhancement,bugfix", "hammer content-view publish --name \" My_Content_View \" --organization \" My_Organization \"", "hammer content-view version promote --content-view \" My_Content_View \" --organization \" My_Organization \" --to-lifecycle-environment \" My_Lifecycle_Environment \"", "hammer erratum list", "hammer content-view version list", "hammer content-view version incremental-update --content-view-version-id 319 --errata-ids 34068b", "hammer host errata list --host client.example.com", "hammer erratum info --id ERRATUM_ID", "dnf upgrade Module_Stream_Name", "hammer host errata list --host client.example.com", "hammer erratum info --id ERRATUM_ID", "dnf upgrade Module_Stream_Name", "hammer host errata list --host client.example.com", "hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 --search-query \"name = client.example.com\"", "hammer erratum list --errata-restrict-installable true --organization \" Default Organization \"", "hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID --search-query \"applicable_errata = ERRATUM_ID \"", "for HOST in hammer --csv --csv-separator \"|\" host list --search \"applicable_errata = ERRATUM_ID\" --organization \"Default Organization\" | tail -n+2 | awk -F \"|\" '{ print USD2 }' ; do echo \"== Applying to USDHOST ==\" ; hammer host errata apply --host USDHOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done", "hammer task list", "hammer task progress --id task_ID", "hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 ,... 
--search-query \"host_collection = HOST_COLLECTION_NAME \"", "hammer product create --description \" My_Description \" --name \"Red Hat Container Catalog\" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"", "hammer repository create --content-type \"docker\" --docker-upstream-name \"rhel7\" --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\" --url \"http://registry.access.redhat.com/\"", "hammer repository synchronize --name \"RHEL7\" --organization \" My_Organization \" --product \"Red Hat Container Catalog\"", "<%= repository.docker_upstream_name %>", "mkdir -p /etc/containers/certs.d/hostname.example.com", "mkdir -p /etc/docker/certs.d/hostname.example.com", "cp rootCA.pem /etc/containers/certs.d/hostname.example.com/ca.crt", "cp rootCA.pem /etc/docker/certs.d/hostname.example.com/ca.crt", "podman login hostname.example.com Username: admin Password: Login Succeeded!", "podman login satellite.example.com", "podman search satellite.example.com/", "podman pull satellite.example.com/my-image:<optional_tag>", "hammer repository-set list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \" | grep \"file\"", "hammer repository-set enable --product \"Red Hat Enterprise Linux Server\" --name \"Red Hat Enterprise Linux 7 Server (ISOs)\" --releasever 7.2 --basearch x86_64 --organization \" My_Organization \"", "hammer repository list --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer repository synchronize --name \"Red Hat Enterprise Linux 7 Server ISOs x86_64 7.2\" --product \"Red Hat Enterprise Linux Server\" --organization \" My_Organization \"", "hammer product create --name \" My_ISOs \" --sync-plan \"Example Plan\" --description \" My_Product \" --organization \" My_Organization \"", "hammer repository create --name \" My_ISOs \" --content-type \"file\" --product \" My_Product \" --organization \" My_Organization \"", "hammer repository upload-content --path ~/bootdisk.iso --id repo_ID --organization \" My_Organization \"", "--- collections: - name: my_namespace.my_collection version: 1.2.3", "subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms", "dnf module enable satellite-utils", "satellite-maintain packages install python3.11-pulp_manifest", "satellite-maintain packages unlock satellite-maintain packages install python39-pulp_manifest satellite-maintain packages lock", "mkdir -p /var/lib/pulp/ local_repos / my_file_repo", "satellite-installer --foreman-proxy-content-pulpcore-additional-import-paths /var/lib/pulp/ local_repos", "touch /var/lib/pulp/ local_repos / my_file_repo / test.txt", "pulp-manifest /var/lib/pulp/ local_repos / my_file_repo", "ls /var/lib/pulp/ local_repos / my_file_repo PULP_MANIFEST test.txt", "subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=satellite-utils-6.15-for-rhel-8-x86_64-rpms", "dnf module enable satellite-utils", "dnf install python3.11-pulp_manifest", "mkdir /var/www/html/pub/ my_file_repo", "touch /var/www/html/pub/ my_file_repo / test.txt", "pulp-manifest /var/www/html/pub/ my_file_repo", "ls /var/www/html/pub/ my_file_repo PULP_MANIFEST test.txt", "hammer product create --description \" My_Files \" --name \" My_File_Product \" --organization \" My_Organization \" --sync-plan \" My_Sync_Plan \"", "hammer repository create --content-type \"file\" 
--name \" My_Files \" --organization \" My_Organization \" --product \" My_File_Product \"", "hammer repository upload-content --id repo_ID --organization \" My_Organization \" --path example_file", "curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File", "curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File", "hammer repository list --content-type file ---|------------|-------------------|--------------|---- ID | NAME | PRODUCT | CONTENT TYPE | URL ---|------------|-------------------|--------------|---- 7 | My_Files | My_File_Product | file | ---|------------|-------------------|--------------|----", "hammer repository info --name \" My_Files \" --organization-id My_Organization_ID --product \" My_File_Product \"", "Publish Via HTTP: yes Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /", "Publish Via HTTP: no Published At: https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_File_Product_Label / My_Files_Label /", "curl --cacert ./_katello-server-ca.crt --cert ./_My_Organization_key-cert.pem --remote-name https:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File", "curl --remote-name http:// satellite.example.com /pulp/content/ My_Organization_Label /Library/custom/ My_Product_Label / My_Repository_Label / My_File", "satellite-maintain service stop", "satellite-maintain packages install nfs-utils", "mkdir /mnt/temp mount -o rw nfs.example.com:/Satellite/pulp /mnt/temp", "cp -r /var/lib/pulp/* /mnt/temp/.", "umount /mnt/temp", "rm -rf /var/lib/pulp/*", "nfs.example.com:/Satellite/pulp /var/lib/pulp nfs rw,hard,intr,context=\"system_u:object_r:pulpcore_var_lib_t:s0\"", "mount -a", "df Filesystem 1K-blocks Used Available Use% Mounted on nfs.example.com:/Satellite/pulp 309506048 58632800 235128224 20% /var/lib/pulp", "ls /var/lib/pulp", "satellite-maintain service start" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/managing_content/index
Appendix B. Glossary of terms used in Satellite
Appendix B. Glossary of terms used in Satellite Activation key A token for host registration and subscription attachment. Activation keys define subscriptions, products, content views, and other parameters to be associated with a newly created host. Answer file A configuration file that defines settings for an installation scenario. Answer files are defined in the YAML format and stored in the /etc/foreman-installer/scenarios.d/ directory. ARF report The result of an OpenSCAP audit. Summarizes the security compliance of hosts managed by Red Hat Satellite. Audits Provide a report on changes made by a specific user. Audits can be viewed in the Satellite web UI under Monitor > Audits . Baseboard management controller (BMC) Enables remote power management of bare-metal hosts. In Satellite, you can create a BMC interface to manage selected hosts. Boot disk An ISO image used for PXE-less provisioning. This ISO enables the host to connect to Satellite Server, boot the installation media, and install the operating system. There are several kinds of boot disks: host image , full host image , generic image , and subnet image . Capsule An additional server that can be used in a Red Hat Satellite deployment to facilitate content federation and distribution (act as a Pulp mirror), and to run other localized services (Puppet server, DHCP , DNS , TFTP , and more). Capsules are useful for Satellite deployment across various geographical locations. In upstream Foreman terminology, Capsule is referred to as Smart Proxy. Catalog A document that describes the desired system state for one specific host managed by Puppet. It lists all of the resources that need to be managed, as well as any dependencies between those resources. Catalogs are compiled by a Puppet server from Puppet Manifests and data from Puppet Agents. Candlepin A service within Katello responsible for subscription management. Compliance policy Refers to a scheduled task executed on Satellite Server that checks the specified hosts for compliance against SCAP content. Compute profile Specifies default attributes for new virtual machines on a compute resource. Compute resource A virtual or cloud infrastructure, which Red Hat Satellite uses for deployment of hosts and systems. Examples include Red Hat Virtualization, Red Hat OpenStack Platform, EC2, and VMWare. Container (Docker container) An isolated application sandbox that contains all runtime dependencies required by an application. Satellite supports container provisioning on a dedicated compute resource. Container image A static snapshot of the container's configuration. Satellite supports various methods of importing container images as well as distributing images to hosts through content views. Content A general term for everything Satellite distributes to hosts. Includes software packages (RPM files), or Docker images. Content is synchronized into the Library and then promoted into lifecycle environments using content views so that they can be consumed by hosts. Content delivery network (CDN) The mechanism used to deliver Red Hat content to Satellite Server. Content host The part of a host that manages tasks related to content and subscriptions. Content view A subset of Library content created by intelligent filtering. Once a content view is published, it can be promoted through the lifecycle environment path, or modified using incremental upgrades. Discovered host A bare-metal host detected on the provisioning network by the Discovery plugin. 
Discovery image Refers to the minimal operating system based on Red Hat Enterprise Linux that is PXE-booted on hosts to acquire initial hardware information and to communicate with Satellite Server before starting the provisioning process. Discovery plugin Enables automatic bare-metal discovery of unknown hosts on the provisioning network. The plugin consists of three components: services running on Satellite Server and Capsule Server, and the Discovery image running on host. Discovery rule A set of predefined provisioning rules which assigns a host group to discovered hosts and triggers provisioning automatically. Docker tag A mark used to differentiate container images, typically by the version of the application stored in the image. In the Satellite web UI, you can filter images by tag under Content > Docker Tags . ERB Embedded Ruby (ERB) is a template syntax used in provisioning and job templates. Errata Updated RPM packages containing security fixes, bug fixes, and enhancements. In relationship to a host, erratum is applicable if it updates a package installed on the host and installable if it is present in the host's content view (which means it is accessible for installation on the host). External node classifier A construct that provides additional data for a server to use when configuring hosts. Red Hat Satellite acts as an External Node Classifier to Puppet servers in a Satellite deployment. The External Node Classifier will be removed in a future Satellite version. Facter A program that provides information (facts) about the system on which it is run; for example, Facter can report total memory, operating system version, architecture, and more. Puppet modules enable specific configurations based on host data gathered by Facter. Facts Host parameters such as total memory, operating system version, or architecture. Facts are reported by Facter and used by Puppet. Foreman The component mainly responsible for provisioning and content lifecycle management. Foreman is the main upstream counterpart of Red Hat Satellite. Foreman hook An executable that is automatically triggered when an orchestration event occurs, such as when a host is created or when provisioning of a host has completed. Foreman hook functionality is deprecated and will be removed in a future Satellite version. Full host image A boot disk used for PXE-less provisioning of a specific host. The full host image contains an embedded Linux kernel and init RAM disk of the associated operating system installer. Generic image A boot disk for PXE-less provisioning that is not tied to a specific host. The generic image sends the host's MAC address to Satellite Server, which matches it against the host entry. Hammer A command line tool for managing Red Hat Satellite. You can execute Hammer commands from the command line or utilize them in scripts. Hammer also provides an interactive shell. Host Refers to any system, either physical or virtual, that Red Hat Satellite manages. Host collection A user defined group of one or more Hosts used for bulk actions such as errata installation. Host group A template for building a host. Host groups hold shared parameters, such as subnet or lifecycle environment, that are inherited by host group members. Host groups can be nested to create a hierarchical structure. Host image A boot disk used for PXE-less provisioning of a specific host. The host image only contains the boot files necessary to access the installation media on Satellite Server. 
Incremental upgrade (of a content view) The act of creating a new (minor) content view version in a lifecycle environment. Incremental upgrades provide a way to make in-place modification of an already published content view. Useful for rapid updates, for example when applying security errata. Job A command executed remotely on a host from Satellite Server. Every job is defined in a job template. Katello A Foreman plugin responsible for subscription and repository management. Lazy sync The ability to change the default download policy of a repository from Immediate to On Demand . The On Demand setting saves storage space and synchronization time by only downloading the packages when requested by a host. Location A collection of default settings that represent a physical place. Library A container for content from all synchronized repositories on Satellite Server. Libraries exist by default for each organization as the root of every lifecycle environment path and the source of content for every content view. Lifecycle environment A container for content view versions consumed by the content hosts. A Lifecycle Environment represents a step in the lifecycle environment path. Content moves through lifecycle environments by publishing and promoting content views. Lifecycle environment path A sequence of lifecycle environments through which the content views are promoted. You can promote a content view through a typical promotion path; for example, from development to test to production. Manifest (Red Hat subscription manifest) A mechanism for transferring subscriptions from the Red Hat Customer Portal to Red Hat Satellite. Do not confuse with Puppet manifest . Migrating Satellite The process of moving an existing Satellite installation to a new instance. OpenSCAP A project implementing security compliance auditing according to the Security Content Automation Protocol (SCAP). OpenSCAP is integrated in Satellite to provide compliance auditing for hosts. Organization An isolated collection of systems, content, and other functionality within a Satellite deployment. Parameter Defines the behavior of Red Hat Satellite components during provisioning. Depending on the parameter scope, we distinguish between global, domain, host group, and host parameters. Depending on the parameter complexity, we distinguish between simple parameters (key-value pair) and smart parameters (conditional arguments, validation, overrides). Parametrized class (smart class parameter) A parameter created by importing a class from Puppet server. Permission Defines an action related to a selected part of Satellite infrastructure (resource type). Each resource type is associated with a set of permissions, for example the Architecture resource type has the following permissions: view_architectures , create_architectures , edit_architectures , and destroy_architectures . You can group permissions into roles and associate them with users or user groups. Product A collection of content repositories. Products are either provided by Red Hat CDN or created by the Satellite administrator to group custom repositories. Promote (a content view) The act of moving a content view from one lifecycle environment to another. For more information, see Promoting a content view in Managing content . Provisioning template Defines host provisioning settings. Provisioning templates can be associated with host groups, lifecycle environments, or operating systems. 
Publish (a content view) The act of making a content view version available in a lifecycle environment and usable by hosts. Pulp A service within Katello responsible for repository and content management. Pulp mirror A Capsule Server component that mirrors content. Puppet The configuration management component of Satellite. Puppet agent A service running on a host that applies configuration changes to that host. Puppet environment An isolated set of Puppet Agent nodes that can be associated with a specific set of Puppet Modules. Puppet manifest Refers to Puppet scripts, which are files with the .pp extension. The files contain code to define a set of necessary resources, such as packages, services, files, users and groups, and so on, using a set of key-value pairs for their attributes. Do not confuse with Manifest (Red Hat subscription manifest) . Puppet server A Capsule Server component that provides Puppet Manifests to hosts for execution by the Puppet Agent. Puppet module A self-contained bundle of code (Puppet Manifests) and data (facts) that you can use to manage resources such as users, files, and services. Recurring logic A job executed automatically according to a schedule. In the Satellite web UI, you can view those jobs under Monitor > Recurring logics . Registry An archive of container images. Satellite supports importing images from local and external registries. Satellite itself can act as an image registry for hosts. However, hosts cannot push changes back to the registry. Repository Provides storage for a collection of content. Resource type Refers to a part of Satellite infrastructure, for example host, Capsule, or architecture. Used in permission filtering. Role Specifies a collection of permissions that are applied to a set of resources, such as hosts. Roles can be assigned to users and user groups. Satellite provides a number of predefined roles. SCAP content A file containing the configuration and security baseline against which hosts are checked. Used in compliance policies. Subnet image A type of generic image for PXE-less provisioning that communicates through Capsule Server. Subscription An entitlement for receiving content and service from Red Hat. Synchronization Refers to mirroring content from external resources into the Red Hat Satellite Library. Sync plan Provides scheduled execution of content synchronization. Task A background process executed on the Satellite or Capsule Server, such as repository synchronization or content view publishing. You can monitor the task status in the Satellite web UI under Monitor > Satellite Tasks > Tasks . Updating Satellite The process of advancing your Satellite Server and Capsule Server installations from a z-stream release to the next, for example Satellite 6.15.0 to Satellite 6.15.1. Upgrading Satellite The process of advancing your Satellite Server and Capsule Server installations from a y-stream release to the next, for example Satellite 6.14 to Satellite 6.15. User group A collection of roles which can be assigned to a collection of users. User Anyone registered to use Red Hat Satellite. Authentication and authorization are possible through built-in logic, through external resources (LDAP, Identity Management, or Active Directory), or with Kerberos. virt-who An agent for retrieving IDs of virtual machines from the hypervisor. When used with Satellite, virt-who reports those IDs to Satellite Server so that it can provide subscriptions for hosts provisioned on virtual machines.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/overview_concepts_and_deployment_considerations/glossary-of-terms-used-in-satellite_planning
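To illustrate how several of the glossary terms above fit together (activation key, organization, lifecycle environment, content view, Hammer), the following is a minimal sketch of creating an activation key and registering a host with it. The organization, environment, content view, and key names are hypothetical placeholders, not values from this glossary, and the host is assumed to already be configured to reach Satellite Server; adapt the values to your own deployment.
# On Satellite Server: create an activation key tying a lifecycle environment and content view together (hypothetical names)
hammer activation-key create --name "rhel9-prod-key" --organization "ExampleOrg" --lifecycle-environment "Production" --content-view "RHEL9-Base"
# On the host: register against Satellite using that activation key
subscription-manager register --org="ExampleOrg" --activationkey="rhel9-prod-key"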
Deploying and managing OpenShift Data Foundation using Google Cloud
Deploying and managing OpenShift Data Foundation using Google Cloud Red Hat OpenShift Data Foundation 4.13 Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install and manage Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on Google Cloud. Important Deploying and managing OpenShift Data Foundation on Google Cloud is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/index
Appendix C. Publishing modules reference
Appendix C. Publishing modules reference Several publisher, mapper, and rule modules are configured by default with the Certificate Manager. Section C.1, "Publisher plugin modules" Section C.2, "Mapper plugin modules" Section C.3, "Rule instances" C.1. Publisher plugin modules This section describes the publisher modules provided for the Certificate Manager. The modules are used by the Certificate Manager to enable and configure specific publisher instances. Section C.1.1, "FileBasedPublisher" Section C.1.2, "LdapCaCertPublisher" Section C.1.3, "LdapUserCertPublisher" Section C.1.4, "LdapCrlPublisher" Section C.1.5, "LdapDeltaCrlPublisher" Section C.1.6, "LdapCertificatePairPublisher" Section C.1.7, "OCSPPublisher" C.1.1. FileBasedPublisher The FileBasedPublisher plugin module configures a Certificate Manager to publish certificates and CRLs to file. This plugin can publish base-64 encoded files, DER-encoded files, or both, depending on the checkboxes selected when the publisher is configured. The certificate and CRL content can be viewed by converting the files using the PrettyPrintCert and PrettyPrintCRL tools. For details on viewing the content in base-64 and DER-encoded certificates and CRLs, see Section 7.10, "Viewing certificates and CRLs published to file" . By default, the Certificate Manager does not create an instance of the FileBasedPublisher module. Table C.1. FileBasedPublisher configuration parameters Parameter Description Publisher ID Specifies a name for the publisher, an alphanumeric string with no spaces. For example, PublishCertsToFile . directory Specifies the complete path to the directory to which the Certificate Manager creates the files; the path can be an absolute path or can be relative to the Certificate System instance directory. For example, /export/CS/certificates . C.1.2. LdapCaCertPublisher The LdapCaCertPublisher plugin module configures a Certificate Manager to publish or unpublish a CA certificate to the caCertificate;binary attribute of the CA's directory entry. The module converts the object class of the CA's entry to pkiCA or certificationAuthority , if it is not used already. Similarly, it also removes the pkiCA or certificationAuthority object class when unpublishing if the CA has no other certificates. During installation, the Certificate Manager automatically creates an instance of the LdapCaCertPublisher module for publishing the CA certificate to the directory. Table C.2. LdapCaCertPublisher configuration parameters Parameter Description caCertAttr Specifies the LDAP directory attribute to publish the CA certificate. This must be caCertificate;binary . caObjectClass Specifies the object class for the CA's entry in the directory. This must be pkiCA or certificationAuthority . C.1.3. LdapUserCertPublisher The LdapUserCertPublisher plugin module configures a Certificate Manager to publish or unpublish a user certificate to the userCertificate;binary attribute of the user's directory entry. This module is used to publish any end-entity certificate to an LDAP directory. Types of end-entity certificates include SSL client, S/MIME, SSL server, and OCSP responder. During installation, the Certificate Manager automatically creates an instance of the LdapUserCertPublisher module for publishing end-entity certificates to the directory. Table C.3. LdapUserCertPublisher configuration parameters Parameter Description certAttr Specifies the directory attribute of the mapped entry to which the Certificate Manager should publish the certificate. 
This must be userCertificate;binary . C.1.4. LdapCrlPublisher The LdapCrlPublisher plugin module configures a Certificate Manager to publish or unpublish the CRL to the certificateRevocationList;binary attribute of a directory entry. During installation, the Certificate Manager automatically creates an instance of the LdapCrlPublisher module for publishing CRLs to the directory. Table C.4. LdapCrlPublisher configuration parameters Parameter Description crlAttr Specifies the directory attribute of the mapped entry to which the Certificate Manager should publish the CRL. This must be certificateRevocationList;binary . C.1.5. LdapDeltaCrlPublisher The LdapDeltaCrlPublisher plugin module configures a Certificate Manager to publish or unpublish a delta CRL to the deltaRevocationList attribute of a directory entry. During installation, the Certificate Manager automatically creates an instance of the LdapDeltaCrlPublisher module for publishing CRLs to the directory. Table C.5. LdapDeltaCrlPublisher configuration parameters Parameter Description crlAttr Specifies the directory attribute of the mapped entry to which the Certificate Manager should publish the delta CRL. This must be deltaRevocationList;binary . C.1.6. LdapCertificatePairPublisher The LdapCertificatePairPublisher plugin module configures a Certificate Manager to publish or unpublish a cross-signed certificate to the crossCertPair;binary attribute of the CA's directory entry. The module also converts the object class of the CA's entry to a pkiCA or certificationAuthority , if it is not used already. Similarly, it also removes the pkiCA or certificationAuthority object class when unpublishing if the CA has no other certificates. During installation, the Certificate Manager automatically creates an instance of the LdapCertificatePairPublisher module named LdapCrossCertPairPublisher for publishing the cross-signed certificates to the directory. Table C.6. LdapCertificatePairPublisher Parameters Parameter Description crossCertPairAttr Specifies the LDAP directory attribute to publish the CA certificate. This must be crossCertificatePair;binary . caObjectClass Specifies the object class for the CA's entry in the directory. This must be pkiCA or certificationAuthority . C.1.7. OCSPPublisher The OCSPPublisher plugin module configures a Certificate Manager to publish its CRLs to an Online Certificate Status Manager. The Certificate Manager does not create any instances of the OCSPPublisher module at installation. Table C.7. OCSPPublisher Parameters Parameter Description host Specifies the fully qualified hostname of the Online Certificate Status Manager. port Specifies the port number on which the Online Certificate Status Manager is listening to the Certificate Manager. This is the Online Certificate Status Manager's SSL port number. path Specifies the path for publishing the CRL. This must be the default path, /ocsp/agent/ocsp/addCRL . enableClientAuth Sets whether to use client (certificate-based) authentication to access the OCSP service. nickname Gives the nickname of the certificate in the OCSP service's database to use for client authentication. This is only used if the enableClientAuth option is set to true. C.2. Mapper plugin modules This section describes the mapper plugin modules provided for the Certificate Manager. These modules configure a Certificate Manager to enable and configure specific mapper instances. 
The available mapper plugin modules include the following: Section C.2.1, "LdapCaSimpleMap" Section C.2.2, "LdapDNExactMap" Section C.2.3, "LdapSimpleMap" Section C.2.4, "LdapSubjAttrMap" Section C.2.5, "LdapDNCompsMap" C.2.1. LdapCaSimpleMap The LdapCaSimpleMap plugin module configures a Certificate Manager to create an entry for the CA in an LDAP directory automatically and then map the CA's certificate to the directory entry by formulating the entry's DN from components specified in the certificate request, certificate subject name, certificate extension, and attribute variable assertion (AVA) constants. For more information on AVAs, check the directory documentation. The CA certificate mapper specifies whether to create an entry for the CA, to map the certificate to an existing entry, or to do both. If a CA entry already exists in the publishing directory and the value assigned to the dnPattern parameter of this mapper is changed, but the uid and o attributes are the same, the mapper fails to create the second CA entry. For example, if the directory already has a CA entry for uid=CA,ou=Marketing,o=example.com and a mapper is configured to create another CA entry with uid=CA,ou=Engineering,o=example.com , the operation fails. The operation may fail because the directory has the UID Uniqueness plugin set to a specific base DN. This setting prevents the directory from having two entries with the same UID under that base DN. In this example, it prevents the directory from having two entries under o=example.com with the same UID, CA . If the mapper fails to create a second CA entry, check the base DN to which the UID Uniqueness plugin is set, and check if an entry with the same UID already exists in the directory. If necessary, adjust the mapper setting, remove the old CA entry, comment out the plugin, or create the entry manually. During installation, the Certificate Manager automatically creates two instances of the CA certificate mapper module. The mappers are named as follows: LdapCrlMap for CRLs (see Section C.2.1.2, "LdapCrlMap" ) LdapCaCertMap for CA certificates (see Section C.2.1.1, "LdapCaCertMap" ). Table C.8. LdapCaSimpleMap configuration parameters Parameter Description createCAEntry Creates a CA's entry, if selected (default). If selected, the Certificate Manager first attempts to create an entry for the CA in the directory. If the Certificate Manager succeeds in creating the entry, it then attempts to publish the CA's certificate to the entry. If this is not selected, the entry must already be present in order to publish to it. dnPattern Specifies the DN pattern the Certificate Manager should use to construct the DN to search for the CA's entry in the publishing directory. The value of dnPattern can be a list of AVAs separated by commas. An AVA can be a variable, such as cn=$subj.cn , that the Certificate Manager can derive from the certificate subject name or a constant, such as o=Example Corporation . If the CA certificate does not have the cn component in its subject name, adjust the CA certificate mapping DN pattern to reflect the DN of the entry in the directory where the CA certificate is to be published. For example, if the CA certificate subject DN is o=Example Corporation and the CA's entry in the directory is cn=Certificate Authority, o=Example Corporation , the pattern is cn=Certificate Authority, o=$subj.o .
Example 1: uid=CertMgr, o=Example Corporation Example 2: cn=$subj.cn,ou=$subj.ou,o=$subj.o,c=US Example 3: uid=$req.HTTP_PARAMS.uid, e=$ext.SubjectAlternativeName.RFC822Name,ou=$subj.ou In the above examples, $req takes the attribute from the certificate request, $subj takes the attribute from the certificate subject name, and $ext takes the attribute from the certificate extension. C.2.1.1. LdapCaCertMap The LdapCaCertMap mapper is an instance of the LdapCaSimpleMap module. The Certificate Manager automatically creates this mapper during installation. This mapper creates an entry for the CA in the directory and maps the CA certificate to the CA's entry in the directory. By default, the mapper is configured to create an entry for the CA in the directory. The default DN pattern for locating the CA's entry is as follows: C.2.1.2. LdapCrlMap The LdapCrlMap mapper is an instance of the LdapCaSimpleMap module. The Certificate Manager automatically creates this mapper during installation. This mapper creates an entry for the CA in the directory and maps the CRL to the CA's entry in the directory. By default, the mapper is configured to create an entry for the CA in the directory. The default DN pattern for locating the CA's entry is as follows: C.2.2. LdapDNExactMap The LdapDNExactMap plugin module configures a Certificate Manager to map a certificate to an LDAP directory entry by searching for the LDAP entry DN that matches the certificate subject name. To use this mapper, each certificate subject name must exactly match a DN in a directory entry. For example, if the certificate subject name is uid=jdoe, o=Example Corporation, c=US , when searching the directory for the entry, the Certificate Manager only searches for an entry with the DN uid=jdoe, o=Example Corporation, c=US . If no matching entries are found, the server returns an error and does not publish the certificate. This mapper does not require any values for any parameters because it obtains all values from the certificate. C.2.3. LdapSimpleMap The LdapSimpleMap plugin module configures a Certificate Manager to map a certificate to an LDAP directory entry by deriving the entry's DN from components specified in the certificate request, certificate's subject name, certificate extension, and attribute variable assertion (AVA) constants. For more information on AVAs, see the directory documentation. By default, the Certificate Manager uses mapper rules that are based on the simple mapper. During installation, the Certificate Manager automatically creates an instance of the simple mapper module, named LdapUserCertMap . The default mapper maps various types of end-entity certificates to their corresponding directory entries. The simple mapper requires one parameter, dnPattern . The value of dnPattern can be a list of AVAs separated by commas. An AVA can be a variable, such as uid=$subj.UID , or a constant, such as o=Example Corporation . Example 1: uid=CertMgr, o=Example Corporation Example 2: cn=$subj.cn,ou=$subj.ou,o=$subj.o,c=US Example 3: uid=$req.HTTP_PARAMS.uid, e=$ext.SubjectAlternativeName.RFC822Name,ou=$subj.ou In the examples, $req takes the attribute from the certificate request, $subj takes the attribute from the certificate subject name, and $ext takes the attribute from the certificate extension. C.2.4. LdapSubjAttrMap The LdapSubjAttrMap plugin module configures a Certificate Manager to map a certificate to an LDAP directory entry using a configurable LDAP attribute.
To use this mapper, the directory entries must include the specified LDAP attribute. This mapper requires the exact pattern of the subject DN because the Certificate Manager searches the directory for the attribute with a value that exactly matches the entire subject DN. For example, if the specified LDAP attribute is certSubjectDN and the certificate subject name is uid=jdoe, o=Example Corporation, c=US , the Certificate Manager searches the directory for entries that have the attribute certSubjectDN=uid=jdoe, o=Example Corporation, c=US . If no matching entries are found, the server returns an error and writes it to the log. The following table describes these parameters. Table C.9. LdapSubjAttrMap parameters Parameter Description certSubjNameAttr Specifies the name of the LDAP attribute that contains a certificate subject name as its value. The default is certSubjectName , but this can be configured to any LDAP attribute. searchBase Specifies the base DN for starting the attribute search. The permissible value is a valid DN of an LDAP entry, such as o=example.com, c=US . C.2.5. LdapDNCompsMap The LdapDNCompsMap plugin module implements the DN components mapper. This mapper maps a certificate to an LDAP directory entry by constructing the entry's DN from components, such as cn , ou , o , and c , specified in the certificate subject name, and then uses it as the search DN to locate the entry in the directory. The mapper locates the following entries: The CA's entry in the directory for publishing the CA certificate and the CRL. End-entity entries in the directory for publishing end-entity certificates. The mapper takes DN components to build the search DN. The mapper also takes an optional root search DN. The server uses the DN components to form an LDAP entry to begin a subtree search and the filter components to form a search filter for the subtree. If none of the DN components are configured, the server uses the base DN for the subtree. If the base DN is null and none of the DN components match, an error is returned. If none of the DN components and filter components match, an error is returned. If the filter components are null, a base search is performed. Both the DNComps and filterComps parameters accept valid DN components or attributes separated by commas. The parameters do not accept multiple entries of an attribute; for example, filterComps can be set to cn,ou but not to cn,ou2,ou1 . To create a filter with multiple instances of the same attribute, such as if directory entries contain multiple ou s, modify the source code for the LdapDNCompsMap module. The following components are commonly used in DNs: uid represents the user ID of a user in the directory. cn represents the common name of a user in the directory. ou represents an organizational unit in the directory. o represents an organization in the directory. l represents a locality (city). st represents a state. c represents a country. For example, the following DN represents the user named Jane Doe who works for the Sales department at Example Corporation, which is located in Mountain View, California, United States: The Certificate Manager can use some or all of these components ( cn , ou , o , l , st , and c ) to build a DN for searching the directory. When creating a mapper rule, these components can be specified for the server to use to build a DN; that is, components to match attributes in the directory. This is set through the dnComps parameter. 
For example, the components cn , ou , o , and c are set as values for the dnComps parameter. To locate Jane Doe's entry in the directory, the Certificate Manager constructs the following DN by reading the DN attribute values from the certificate, and uses the DN as the base for searching the directory: A subject name does not need to have all of the components specified in the dnComps parameter. The server ignores any components that are not part of the subject name, such as l and st in this example. Unspecified components are not used to build the DN. In the example, if the ou component is not included, the server uses this DN as the base for searching the directory: For the dnComps parameter, enter those DN components that the Certificate Manager can use to form the LDAP DN exactly. In certain situations, however, the subject name in a certificate may match more than one entry in the directory. Then, the Certificate Manager might not get a single, distinct matching entry from the DN. For example, the subject name cn=Jane Doe, ou=Sales, o=Example Corporation, c=US might match two users with the name Jane Doe in the directory. If that occurs, the Certificate Manager needs additional criteria to determine which entry corresponds to the subject of the certificate. To specify the components the Certificate Manager must use to distinguish between different entries in the directory, use the filterComps parameter; for details, see Table C.10, "LdapDNCompsMap configuration parameters" . For example, if cn , ou , o , and c are values for the dnComps parameter, enter l for the filterComps parameter only if the l attribute can be used to distinguish between entries with identical cn , ou , o , and c values. If the two Jane Doe entries are distinguished by the value of the uid attribute - one entry's uid is janedoe1 , and the other entry's uid is janedoe2 - the subject names of certificates can be set to include the uid component. NOTE The e , l , and st components are not included in the standard set of certificate request forms provided for end entities. These components can be added to the forms, or the issuing agents can be required to insert these components when editing the subject name in the certificate issuance forms. C.2.5.1. Configuration parameters of LdapDNCompsMap With this configuration, a Certificate Manager maps its certificates with the ones in the LDAP directory by using the dnComps values to form a DN and the filterComps values to form a search filter for the subtree. If the formed DN is null, the server uses the baseDN value for the subtree. If both the formed DN and base DN are null, the server logs an error. If the filter is null, the server uses the baseDN value for the search. If both the filter and base DN are null, the server logs an error. The following table describes these parameters. Table C.10. LdapDNCompsMap configuration parameters Parameter Description baseDN Specifies the DN to start searching for an entry in the publishing directory. If the dnComps field is blank, the server uses the base DN value to start its search in the directory. dnComps Specifies where in the publishing directory the Certificate Manager should start searching for an LDAP entry that matches the CA's or the end entity's information. For example, if dnComps uses the o and c attributes of the DN, the server starts the search from the o= org , c= country entry in the directory, where org and country are replaced with values from the DN in the certificate. 
If the dnComps field is empty, the server checks the baseDN field and searches the directory tree specified by that DN for entries matching the filter specified by filterComps parameter values. The permissible values are valid DN components or attributes separated by commas. filterComps Specifies components the Certificate Manager should use to filter entries from the search result. The server uses the filterComps values to form an LDAP search filter for the subtree. The server constructs the filter by gathering values for these attributes from the certificate subject name; it uses the filter to search for and match entries in the LDAP directory. If the server finds more than one entry in the directory that matches the information gathered from the certificate, the search is successful, and the server optionally performs a verification. For example, if filterComps is set to use the email and user ID attributes ( filterComps=e,uid ), the server searches the directory for an entry whose values for email and user ID match the information gathered from the certificate. The permissible values are valid directory attributes in the certificate DN separated by commas. The attribute names for the filters need to be attribute names from the certificate, not from ones in the LDAP directory. For example, most certificates have an e attribute for the user's email address; LDAP calls that attribute mail . C.3. Rule instances This section discusses the rule instances that have been set. C.3.1. LdapCaCertRule The LdapCaCertRule can be used to publish CA certificates to an LDAP directory. Table C.11. LdapCaCert Rule configuration parameters Parameter Value Description type cacert Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapCaCertMap Specifies the mapper used with the rule. See Section C.2.1.1, "LdapCaCertMap" for details on the mapper. publisher LdapCaCertPublisher Specifies the publisher used with the rule. See Section C.1.2, "LdapCaCertPublisher" for details on the publisher. C.3.2. LdapXCertRule The LdapXCertRule is used to publish cross-pair certificates to an LDAP directory. Table C.12. LdapXCert rule configuration parameters Parameter Value Description type xcert Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapCaCertMap Specifies the mapper used with the rule. See Section C.2.1.1, "LdapCaCertMap" for details on the mapper. publisher LdapCrossCertPairPublisher Specifies the publisher used with the rule. See Section C.1.6, "LdapCertificatePairPublisher" for details on this publisher. C.3.3. LdapUserCertRule The LdapUserCertRule is used to publish user certificates to an LDAP directory. Table C.13. LdapUserCert rule configuration parameters Parameter Value Description type certs Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapUserCertMap Specifies the mapper used with the rule. See Section C.2.3, "LdapSimpleMap" for details on the mapper. publisher LdapUserCertPublisher Specifies the publisher used with the rule. See Section C.1.3, "LdapUserCertPublisher" for details on the publisher. C.3.4. LdapCRLRule The LdapCRLRule is used to publish CRLs to an LDAP directory. Table C.14. 
LdapCRL rule configuration parameters Parameter Value Description type crl Specifies the type of certificate that will be published. predicate Specifies a predicate for the publisher. enable yes Enables the rule. mapper LdapCrlMap Specifies the mapper used with the rule. See Section C.2.1.2, "LdapCrlMap" for details on the mapper. publisher LdapCrlPublisher Specifies the publisher used with the rule. See Section C.1.4, "LdapCrlPublisher" for details on the publisher.
[ "uid=$subj.cn,ou=people,o=$subj.o", "uid=$subj.cn,ou=people,o=$subj.o", "cn=Jane Doe, ou=Sales, o=Example Corporation, l=Mountain View, st=California, c=US", "cn=Jane Doe, ou=Sales, o=Example Corporation, c=US", "cn=Jane Doe, o=Example Corporation, c=US" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/publishing-modules-reference
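As a quick way to confirm that the publishers and mappers described above are writing to the expected directory entries, you can query the publishing directory directly. The following ldapsearch sketch assumes a hypothetical directory host, bind DN, and CA entry DN; substitute the values configured for your own LdapCaCertPublisher and LdapCrlPublisher instances and the DN produced by your mapper's dnPattern.
# Look up the CA entry and list the attributes populated by the CA certificate and CRL publishers
# (host, bind DN, and base DN are placeholders; the attributes are stored with the ;binary subtype)
ldapsearch -x -H ldap://ldap.example.com:389 -D "cn=Directory Manager" -W -b "cn=Certificate Authority,o=Example Corporation" "(objectClass=pkiCA)" caCertificate certificateRevocationList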
Chapter 2. Trusted Artifact Signer's implementation of The Update Framework
Chapter 2. Trusted Artifact Signer's implementation of The Update Framework Starting with Red Hat Trusted Artifact Signer (RHTAS) version 1.1, we implemented The Update Framework (TUF) as a trust root to store public keys and certificates used by RHTAS services. The Update Framework is a sophisticated framework for securing software update systems, and this makes it ideal for securing shipped artifacts. The Update Framework refers to the RHTAS services as trusted root targets. There are four trusted targets, one for each RHTAS service: Fulcio, Certificate Transparency (CT) log, Rekor, and Timestamp Authority (TSA). Client software, such as cosign , uses the RHTAS trust root targets to sign and verify artifact signatures. A simple HTTP server distributes the public keys and certificates to the client software. This simple HTTP server has the TUF repository of the individual targets. When deploying the RHTAS operator in OpenShift, by default, we create a TUF repository and prepopulate the individual targets. By default, the expiration date of all metadata files is 52 weeks from the time you deploy a Securesign instance. Red Hat recommends choosing shorter expiration periods and rotating your public keys and certificates often. Doing these maintenance tasks regularly can help prevent attacks on your code base.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/administration_guide/trusted-artifact-signers-implementation-of-the-update-framework_admin
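For client-side use of the trust root described above, cosign is typically initialized against the TUF repository served by that HTTP server before signing or verifying artifacts. The following is a minimal sketch; the TUF URL is a placeholder, since the actual route name depends on your RHTAS deployment.
# Point cosign at the RHTAS TUF repository (placeholder URL) so it trusts the Fulcio, CT log, Rekor, and TSA targets
export TUF_URL=https://tuf.example.com
cosign initialize --mirror=$TUF_URL --root=$TUF_URL/root.json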
Installing an on-premise cluster with the Agent-based Installer
Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.17 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_an_on-premise_cluster_with_the_agent-based_installer/index
Storage
Storage OpenShift Container Platform 4.15 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "apiVersion: v1 kind: Pod metadata: name: frontend spec: containers: - name: app image: images.my-company.example/app:v4 resources: requests: ephemeral-storage: \"2Gi\" 1 limits: ephemeral-storage: \"4Gi\" 2 volumeMounts: - name: ephemeral mountPath: \"/tmp\" - name: log-aggregator image: images.my-company.example/log-aggregator:v6 resources: requests: ephemeral-storage: \"2Gi\" limits: ephemeral-storage: \"4Gi\" volumeMounts: - name: ephemeral mountPath: \"/tmp\" volumes: - name: ephemeral emptyDir: {}", "df -h /var/lib", "Filesystem Size Used Avail Use% Mounted on /dev/disk/by-partuuid/4cd1448a-01 69G 32G 34G 49% /", "oc delete pv <pv-name>", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s", "oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'", "oc get pv", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:", "oc get pv <pv-claim>", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3", "apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi", "apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3", "securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1", "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", "cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: 
accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount <service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP 
restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. 
nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml", "oc adm new-project openshift-local-storage", "oc annotate namespace openshift-local-storage openshift.io/node-selector=''", "oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Filesystem 5 fsType: xfs 6 devicePaths: 7 - /path/to/device 8", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 
storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block 5 devicePaths: 6 - /path/to/device 7", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi 
RWO Delete Available local-sc 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"local-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file_name>", "oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase 4.13.0-202301261535 Succeeded", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.15 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}", "oc create ns <namespace>", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: 
policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low", "oc create -f <file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: \"true\" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 forceWipeDevicesAndDestroyAllData: true thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10", "lsblk --paths --json -o NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE", "pvs <device-name> 1", "cat /proc/1/mountinfo | grep <device-name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: - name: vg1 1 fstype: ext4 2 default: true deviceSelector: 3 forceWipeDevicesAndDestroyAllData: false 4 thinPoolConfig: 5 nodeSelector: 6", "pvs -S vgname=<vg_name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 nodeSelector: 2 deviceSelector: 3 thinPoolConfig: 4", "oc create -f <file_name>", "lvmcluster/lvmcluster created", "oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>", "{\"deviceClassStatuses\": 1 [ { \"name\": \"vg1\", \"nodeStatus\": [ 2 { \"devices\": [ 3 \"/dev/nvme0n1\", \"/dev/nvme1n1\", \"/dev/nvme2n1\" ], \"node\": \"kube-node\", 4 \"status\": \"Ready\" 5 } ] } ] \"state\":\"Ready\"} 6", "status: deviceClassStatuses: - name: vg1 nodeStatus: - node: my-node-1.example.com reason: no 
available devices found for volume group status: Failed state: Failed", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m", "oc get volumesnapshotclass", "NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms namespace: openshift-storage spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 deviceSelector: 2 thinPoolConfig: 3 nodeSelector: 4 remediationAction: enforce severity: low", "oc create -f <file_name> -n <cluster_namespace> 1", "oc delete lvmcluster <lvmclustername> -n openshift-storage", "oc get lvmcluster -n <namespace>", "No resources found in openshift-storage namespace.", "oc delete -f <file_name> -n <cluster_namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- 
apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "oc get policy -n <namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 limits: storage: 20Gi 4 storageClassName: lvms-vg1 5", "oc create -f <file_name> -n <application_namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc edit <lvmcluster_file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "oc edit -f <file_name> -ns <namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1", "oc patch <pvc_name> -n <application_namespace> -p \\ 1 '{ \"spec\": { \"resources\": { \"requests\": { \"storage\": \"<desired_size>\" }}}} --type=merge' 2", "oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}", "oc delete pvc <pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3", "oc get volumesnapshotclass", "oc create -f <file_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block Resources: Requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc delete volumesnapshot 
<volume_snapshot_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc delete pvc <clone_pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{\"spec\":{\"channel\":\"<update_channel>\"}}' 1", "oc get events -n openshift-storage", "8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.15 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.15 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.15 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.15 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.15 installing: waiting for deployment lvms-operator to become ready: deployment \"lvms-operator\" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.15 install strategy completed with no errors", "oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'", "lvms-operator.v4.15", "openshift.io/cluster-monitoring=true", "oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV", "currentCSV: lvms-operator.v4.15.3", "oc delete subscription.operators.coreos.com lvms-operator -n <namespace>", "subscription.operators.coreos.com \"lvms-operator\" deleted", "oc delete clusterserviceversion <currentCSV> -n <namespace> 1", "clusterserviceversion.operators.coreos.com \"lvms-operator.v4.15.3\" deleted", "oc get csv -n <namespace>", "oc delete -f <policy> -n <namespace> 1", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup 
metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high", "oc create -f <policy> -ns <namespace>", "oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.15 --dest-dir=<directory_name>", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s", "oc describe pvc <pvc_name> 1", "Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io \"lvms-vg1\" not found", "oc get lvmcluster -n openshift-storage", "NAME AGE my-lvmcluster 65m", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m", "oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m", "oc describe pvc <pvc_name> 1", "oc project openshift-storage", "oc get logicalvolume", "oc delete logicalvolume <name> 1", "oc patch logicalvolume <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc get lvmvolumegroup", "oc delete lvmvolumegroup <name> 1", "oc patch lvmvolumegroup <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc delete lvmvolumegroupnodestatus --all", "oc delete lvmcluster --all", "oc patch lvmcluster <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF", "oc new-app mysql-persistent", "--> Deploying template \"openshift/mysql-persistent\" to project default", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s", "kind: CSIDriver metadata: name: csi.mydriver.company.org labels: 
security.openshift.io/csi-ephemeral-volume-profile: restricted 1", "kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar", "oc create -f my-csi-app.yaml", "oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedSecret metadata: name: my-share spec: secretRef: name: <name of secret> namespace: <namespace of secret> EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF", "oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedSecret: my-share EOF", "oc apply -f - <<EOF apiVersion: sharedresource.openshift.io/v1alpha1 kind: SharedConfigMap metadata: name: my-share spec: configMapRef: name: <name of configmap> namespace: <namespace of configmap> EOF", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedconfigmaps resourceNames: - my-share verbs: - use EOF", "create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "oc apply -f - <<EOF kind: Pod apiVersion: v1 metadata: name: my-app namespace: my-namespace spec: serviceAccountName: default containers omitted .... 
Follow standard use of 'volumeMounts' for referencing your shared resource volume volumes: - name: my-csi-volume csi: readOnly: true driver: csi.sharedresource.openshift.io volumeAttributes: sharedConfigMap: my-share EOF", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete", "oc create -f volumesnapshotclass.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2", "oc create -f volumesnapshot-dynamic.yaml", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1", "oc create -f volumesnapshot-manual.yaml", "oc describe volumesnapshot mysnap", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi", "oc get volumesnapshotcontent", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1", "oc delete volumesnapshot <volumesnapshot_name>", "volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted", "oc delete volumesnapshotcontent <volumesnapshotcontent_name>", "oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f pvc-restore.yaml", "oc get pvc", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1", "oc create -f pvc-clone.yaml", "oc get pvc pvc-1-clone", "kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1", "spec: driverConfig: driverType: '' logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal storageClassState: Unmanaged 1", "patch clustercsidriver USDDRIVERNAME --type=merge -p \"{\\\"spec\\\":{\\\"storageClassState\\\":\\\"USD{STATE}\\\"}}\" 1", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: openshift-aws-efs-csi-driver namespace: openshift-cloud-credential-operator spec: 
providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - elasticfilesystem:* effect: Allow resource: '*' secretRef: name: aws-efs-cloud-credentials namespace: openshift-cluster-csi-drivers serviceAccountNames: - aws-efs-csi-driver-operator - aws-efs-csi-driver-controller-sa", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com", "2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs -openshift-cluster-csi-drivers-aws-efs-cloud- created 2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml 2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6", "Trust relationships trusted entity trusted account A configuration on my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::301721915996:root\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": {} } ] } my-cross-account-assume-policy policy attached to my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" } } my-efs-acrossaccount-driver-policy attached to my-efs-acrossaccount-role in account B { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"VisualEditor0\", \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeNetworkInterfaces\", \"ec2:DescribeSubnets\" ], \"Resource\": \"*\" }, { \"Sid\": \"VisualEditor1\", \"Effect\": \"Allow\", \"Action\": [ \"elasticfilesystem:DescribeMountTargets\", \"elasticfilesystem:DeleteAccessPoint\", \"elasticfilesystem:ClientMount\", \"elasticfilesystem:DescribeAccessPoints\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:DescribeFileSystems\", \"elasticfilesystem:CreateAccessPoint\" ], \"Resource\": [ \"arn:aws:elasticfilesystem:*:589722580343:access-point/*\", \"arn:aws:elasticfilesystem:*:589722580343:file-system/*\" ] } ] }", "my-cross-account-assume-policy policy attached to Openshift cluster efs csi driver user in account A { \"Version\": \"2012-10-17\", \"Statement\": { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" } }", "oc -n openshift-cluster-csi-drivers create secret generic my-efs-cross-account --from-literal=awsRoleArn='arn:aws:iam::589722580343:role/my-efs-acrossaccount-role'", "oc -n openshift-cluster-csi-drivers create role access-secrets --verb=get,list,watch --resource=secrets oc -n 
openshift-cluster-csi-drivers create rolebinding --role=access-secrets default-to-secrets --serviceaccount=openshift-cluster-csi-drivers:aws-efs-csi-driver-controller-sa", "This step is not mandatory, but can be safer for AWS EFS volume usage.", "EFS volume filesystem policy in account B { \"Version\": \"2012-10-17\", \"Id\": \"efs-policy-wizard-8089bf4a-9787-40f0-958e-bc2363012ace\", \"Statement\": [ { \"Sid\": \"efs-statement-bd285549-cfa2-4f8b-861e-c372399fd238\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"*\" }, \"Action\": [ \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientMount\" ], \"Resource\": \"arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5\", \"Condition\": { \"Bool\": { \"elasticfilesystem:AccessedViaMountTarget\": \"true\" } } }, { \"Sid\": \"efs-statement-03646e39-d80f-4daf-b396-281be1e43bab\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::589722580343:role/my-efs-acrossaccount-role\" }, \"Action\": [ \"elasticfilesystem:ClientRootAccess\", \"elasticfilesystem:ClientWrite\", \"elasticfilesystem:ClientMount\" ], \"Resource\": \"arn:aws:elasticfilesystem:us-east-2:589722580343:file-system/fs-091066a9bf9becbd5\" } ] }", "The cross account efs volume storageClass kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-cross-account-mount-sc provisioner: efs.csi.aws.com mountOptions: - tls parameters: provisioningMode: efs-ap fileSystemId: fs-00f6c3ae6f06388bb directoryPerms: \"700\" gidRangeStart: \"1000\" gidRangeEnd: \"2000\" basePath: \"/account-a-data\" csi.storage.k8s.io/provisioner-secret-name: my-efs-cross-account csi.storage.k8s.io/provisioner-secret-namespace: openshift-cluster-csi-drivers volumeBindingMode: Immediate", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi", "apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3", "oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created", "oc get clustercsidriver efs.csi.aws.com -o yaml", "oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition", "oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 provisioner: disk.csi.azure.com 
parameters: skuName: <storage-class-account-type> 2 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF", "oc get storageclass", "oc get storageclass NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE azurefile-csi file.csi.azure.com Delete Immediate true 68m managed-csi (default) disk.csi.azure.com Delete WaitForFirstConsumer true 68m sc-prem-zrs disk.csi.azure.com Delete WaitForFirstConsumer true 4m25s 1", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: file.csi.azure.com 2 parameters: protocol: nfs 3 skuName: Premium_LRS # available values: Premium_LRS, Premium_ZRS mountOptions: - nconnect=4", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1", "oc describe storageclass csi-gce-pd-cmek", "Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi", "oc apply -f pvc.yaml", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s", "gcloud services enable file.googleapis.com --project <my_gce_project> 1", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: 
name: filestore.csi.storage.gke.io spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: filestore-csi provisioner: filestore.csi.storage.gke.io parameters: connect-mode: DIRECT_PEERING 1 network: network-name 2 allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "oc -n openshift-machine-api get machinesets -o yaml | grep \"network:\" - network: gcp-filestore-network (...)", "oc get pvc -o json -A | jq -r '.items[] | select(.spec.storageClassName == \"filestore-csi\")", "oc delete <pvc-name> 1", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi", "oc create -f cinder-claim.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2", "oc create -f pvc-manila.yaml", "oc get pvc pvc-manila", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: \"USDopenshift-storage-policy-xxxx\" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim spec: resources: requests: storage: 1Gi accessModes: - ReadWriteMany storageClassName: thin-csi", "~ USD oc delete CSIDriver csi.vsphere.vmware.com", "csidriver.storage.k8s.io \"csi.vsphere.vmware.com\" deleted", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: encryption provisioner: csi.vsphere.vmware.com parameters: storagePolicyName: <storage-policy-name> 1 datastoreurl: \"ds:///vmfs/volumes/vsan:522e875627d-b090c96b526bb79c/\"", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: csi-encrypted provisioner: csi.vsphere.vmware.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer parameters: storagePolicyName: <storage-policy-name> 1", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "~ USD oc edit clustercsidriver csi.vsphere.vmware.com -o yaml", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: csi.vsphere.vmware.com spec: logLevel: Normal managementState: Managed observedConfig: null operatorLogLevel: Normal unsupportedConfigOverrides: null driverConfig: driverType: vSphere 1 vSphere: topologyCategories: 2 - openshift-zone - 
openshift-region", "~ USD oc get csinode", "NAME DRIVERS AGE co8-4s88d-infra-2m5vd 1 27m co8-4s88d-master-0 1 70m co8-4s88d-master-1 1 70m co8-4s88d-master-2 1 70m co8-4s88d-worker-j2hmg 1 47m co8-4s88d-worker-mbb46 1 47m co8-4s88d-worker-zlk7d 1 47m", "~ USD oc get csinode co8-4s88d-worker-j2hmg -o yaml", "spec: drivers: - allocatable: count: 59 name: csi-vsphere.vmware.com nodeID: co8-4s88d-worker-j2hmg topologyKeys: 1 - topology.csi.vmware.com/openshift-zone - topology.csi.vmware.com/openshift-region", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: zoned-sc 1 provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: zoned-storage-policy 2 reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer", "~ USD oc get pv <pv-name> -o yaml", "nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: topology.csi.vmware.com/openshift-zone 1 operator: In values: - <openshift-zone> -key: topology.csi.vmware.com/openshift-region 2 operator: In values: - <openshift-region> peristentVolumeclaimPolicy: Delete storageClassName: <zoned-storage-class-name> 3 volumeMode: Filesystem", "kind: Pod apiVersion: v1 metadata: name: my-app spec: containers: - name: my-frontend image: busybox:1.28 volumeMounts: - mountPath: \"/mnt/storage\" name: data command: [ \"sleep\", \"1000000\" ] volumes: - name: data 1 ephemeral: volumeClaimTemplate: metadata: labels: type: my-app-ephvol spec: accessModes: [ \"ReadWriteOnce\" ] storageClassName: \"gp2-csi\" resources: requests: storage: 1Gi", "oc edit storageclass <storage_class_name> 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1", "oc describe pvc <pvc_name>", "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> 
system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs", "get node <node name> 1", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1", "spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/storage/index
Chapter 7. Logging
Chapter 7. Logging 7.1. Enabling protocol logging The client can log AMQP protocol frames to the console. This data is often critical when diagnosing problems. To enable protocol logging, set the PN_TRACE_FRM environment variable to 1 : Example: Enabling protocol logging $ export PN_TRACE_FRM=1 $ <your-client-program> To disable protocol logging, unset the PN_TRACE_FRM environment variable.
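The variable can also be scoped to a single run instead of being exported for the whole shell session; this is a minimal sketch that reuses the <your-client-program> placeholder from the example above: $ PN_TRACE_FRM=1 <your-client-program> If you previously exported the variable, unset it to stop tracing: $ unset PN_TRACE_FRM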
[ "export PN_TRACE_FRM=1 <your-client-program>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_ruby_client/logging
18.2. Remote Management with SSH
18.2. Remote Management with SSH The ssh package provides an encrypted network protocol that can securely send management functions to remote virtualization servers. The method described below uses the libvirt management connection, securely tunneled over an SSH connection, to manage the remote machines. All the authentication is done using SSH public key cryptography and passwords or passphrases gathered by your local SSH agent. In addition, the VNC console for each guest is tunneled over SSH . When using SSH for remotely managing your virtual machines, be aware of the following problems: You require root login access to the remote machine for managing virtual machines. The initial connection setup process may be slow. There is no standard or trivial way to revoke a user's key on all hosts or guests. SSH does not scale well with larger numbers of remote machines. Note Red Hat Virtualization enables remote management of large numbers of virtual machines. For further details, see the Red Hat Virtualization documentation . The following packages are required for SSH access: openssh openssh-askpass openssh-clients openssh-server Configuring Password-less or Password-managed SSH Access for virt-manager The following instructions assume you are starting from scratch and do not already have SSH keys set up. If you have SSH keys set up and copied to the other systems, you can skip this procedure. Important SSH keys are user-dependent and may only be used by their owners. A key's owner is the user who generated it. Keys may not be shared across different users. virt-manager must be run by the user who owns the keys to connect to the remote host. That means, if the remote systems are managed by a non-root user, virt-manager must be run in unprivileged mode. If the remote systems are managed by the local root user, then the SSH keys must be owned and created by root. You cannot manage the local host as an unprivileged user with virt-manager . Optional: Changing user Change user, if required. This example uses the local root user for remotely managing the other hosts and the local host. Generating the SSH key pair Generate a public key pair on the machine where virt-manager is used. This example uses the default key location, in the ~/.ssh/ directory. Copying the keys to the remote hosts Remote login without a password, or with a pass-phrase, requires an SSH key to be distributed to the systems being managed. Use the ssh-copy-id command to copy the key to the root user at the system address provided (in the example, [email protected] ). Afterwards, try logging into the machine and check the .ssh/authorized_keys file to make sure unexpected keys have not been added: Repeat for other systems, as required. Optional: Add the passphrase to the ssh-agent Add the pass-phrase for the SSH key to the ssh-agent , if required. On the local host, use the following command to add the pass-phrase (if there was one) to enable password-less login. This command will fail to run if the ssh-agent is not running. To avoid errors or conflicts, make sure that your SSH parameters are set correctly. See the Red Hat Enterprise System Administration Guide for more information. The libvirt daemon ( libvirtd ) The libvirt daemon provides an interface for managing virtual machines. You must have the libvirtd daemon installed and running on every remote host that you intend to manage this way. After libvirtd and SSH are configured, you should be able to remotely access and manage your virtual machines.
You should also be able to access your guests with VNC at this point. Accessing Remote Hosts with virt-manager Remote hosts can be managed with the virt-manager GUI tool. SSH keys must belong to the user executing virt-manager for password-less login to work. Start virt-manager . Open the File ⇒ Add Connection menu. Figure 18.1. Add connection menu Use the drop down menu to select hypervisor type, and click the Connect to remote host check box to open the Connection Method (in this case Remote tunnel over SSH), enter the User name and Hostname , then click Connect .
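As a quick check before opening the GUI, the same libvirt-over-SSH transport that virt-manager uses can be exercised directly with virsh . This is a sketch only, and host2.example.com is a placeholder for your own remote host: $ virsh -c qemu+ssh://root@host2.example.com/system list --all If the keys, ssh-agent , and libvirtd are set up as described above, the command lists the remote guests without prompting for a password.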
[ "su -", "ssh-keygen -t rsa", "ssh-copy-id -i ~/.ssh/id_rsa.pub [email protected] [email protected]'s password:", "ssh [email protected]", "ssh-add ~/.ssh/id_rsa", "ssh root@ somehost # systemctl enable libvirtd.service # systemctl start libvirtd.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-remote_management_of_guests-remote_management_with_ssh
Chapter 4. Enabling and configuring Data Grid statistics and JMX monitoring
Chapter 4. Enabling and configuring Data Grid statistics and JMX monitoring Data Grid can provide Cache Manager and cache statistics as well as export JMX MBeans. 4.1. Enabling statistics in embedded caches Configure Data Grid to export statistics for the Cache Manager and embedded caches. Procedure Open your Data Grid configuration for editing. Add the statistics="true" attribute or the .statistics(true) method. Save and close your Data Grid configuration. Embedded cache statistics XML <infinispan> <cache-container statistics="true"> <distributed-cache statistics="true"/> <replicated-cache statistics="true"/> </cache-container> </infinispan> GlobalConfigurationBuilder GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder().cacheContainer().statistics(true); DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); Configuration builder = new ConfigurationBuilder(); builder.statistics().enable(); 4.2. Configuring Data Grid metrics Data Grid generates metrics that are compatible with any monitoring system. Gauges provide values such as the average number of nanoseconds for write operations or JVM uptime. Histograms provide details about operation execution times such as read, write, and remove times. By default, Data Grid generates gauges when you enable statistics but you can also configure it to generate histograms. Note Data Grid metrics are provided at the vendor scope. Metrics related to the JVM are provided in the base scope. Prerequisites You must add Micrometer Core and Micrometer Registry Prometheus JARs to your classpath to export Data Grid metrics for embedded caches. Procedure Open your Data Grid configuration for editing. Add the metrics element or object to the cache container. Enable or disable gauges with the gauges attribute or field. Enable or disable histograms with the histograms attribute or field. Save and close your client configuration. Metrics configuration XML <infinispan> <cache-container statistics="true"> <metrics gauges="true" histograms="true" /> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "metrics" : { "gauges" : "true", "histograms" : "true" } } } } YAML infinispan: cacheContainer: statistics: "true" metrics: gauges: "true" histograms: "true" GlobalConfigurationBuilder GlobalConfiguration globalConfig = new GlobalConfigurationBuilder() //Computes and collects statistics for the Cache Manager. .statistics().enable() //Exports collected statistics as gauge and histogram metrics. .metrics().gauges(true).histograms(true) .build(); Additional resources Micrometer Prometheus 4.3. Registering JMX MBeans Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics otherwise Data Grid provides 0 values for all statistic attributes in JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the jmx element or object to the cache container and specify true as the value for the enabled attribute or field. Add the domain attribute or field and specify the domain where JMX MBeans are exposed, if required. Save and close your client configuration. 
JMX configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" GlobalConfigurationBuilder GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain("org.mydomain"); 4.3.1. Enabling JMX remote ports Provide unique remote JMX ports to expose Data Grid MBeans through connections in JMXServiceURL format. You can enable remote JMX ports using one of the following approaches: Enable remote JMX ports that require authentication to one of the Data Grid Server security realms. Enable remote JMX ports manually using the standard Java management configuration options. Prerequisites For remote JMX with authentication, define JMX specific user roles using the default security realm. Users must have controlRole with read/write access or the monitorRole with read-only access to access any JMX resources. Procedure Start Data Grid Server with a remote JMX port enabled using one of the following ways: Enable remote JMX through port 9999 . Warning Using remote JMX with SSL disabled is not intended for production environments. Pass the following system properties to Data Grid Server at startup. Warning Enabling remote JMX with no authentication or SSL is not secure and not recommended in any environment. Disabling authentication and SSL allows unauthorized users to connect to your server and access the data hosted there. Additional resources Creating security realms 4.3.2. Data Grid MBeans Data Grid exposes JMX MBeans that represent manageable resources. org.infinispan:type=Cache Attributes and operations available for cache instances. org.infinispan:type=CacheManager Attributes and operations available for Cache Managers, including Data Grid cache and cluster health statistics. For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation. Additional resources Data Grid JMX Components 4.3.3. Registering MBeans in custom MBean servers Data Grid includes an MBeanServerLookup interface that you can use to register MBeans in custom MBeanServer instances. Prerequisites Create an implementation of MBeanServerLookup so that the getMBeanServer() method returns the custom MBeanServer instance. Configure Data Grid to register JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the mbean-server-lookup attribute or field to the JMX configuration for the Cache Manager. Specify fully qualified name (FQN) of your MBeanServerLookup implementation. Save and close your client configuration. 
JMX MBean server lookup configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com" mbean-server-lookup="com.example.MyMBeanServerLookup"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com", "mbean-server-lookup" : "com.example.MyMBeanServerLookup" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" mbeanServerLookup: "com.example.MyMBeanServerLookup" GlobalConfigurationBuilder GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain("org.mydomain") .mBeanServerLookup(new com.acme.MyMBeanServerLookup()); 4.4. Exporting metrics during a state transfer operation You can export time metrics for clustered caches that Data Grid redistributes across nodes. A state transfer operation occurs when a clustered cache topology changes, such as a node joining or leaving a cluster. During a state transfer operation, Data Grid exports metrics from each cache, so that you can determine a cache's status. A state transfer exposes attributes as properties, so that Data Grid can export metrics from each cache. Note You cannot perform a state transfer operation in invalidation mode. Data Grid generates time metrics that are compatible with the REST API and the JMX API. Prerequisites Configure Data Grid metrics. Enable metrics for your cache type, such as embedded cache or remote cache. Initiate a state transfer operation by changing your clustered cache topology. Procedure Choose one of the following methods: Configure Data Grid to use the REST API to collect metrics. Configure Data Grid to use the JMX API to collect metrics. Additional resources Enabling and configuring Data Grid statistics and JMX monitoring (Data Grid caches) StateTransferManager (Data Grid 14.0 API) 4.5. Monitoring the status of cross-site replication Monitor the site status of your backup locations to detect interruptions in the communication between the sites. When a remote site status changes to offline , Data Grid stops replicating your data to the backup location. Your data become out of sync and you must fix the inconsistencies before bringing the clusters back online. Monitoring cross-site events is necessary for early problem detection. Use one of the following monitoring strategies: Monitoring cross-site replication with the REST API Monitoring cross-site replication with the Prometheus metrics or any other monitoring system Monitoring cross-site replication with the REST API Monitor the status of cross-site replication for all caches using the REST endpoint. You can implement a custom script to poll the REST endpoint or use the following example. Prerequisites Enable cross-site replication. Procedure Implement a script to poll the REST endpoint. The following example demonstrates how you can use a Python script to poll the site status every five seconds. 
#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/cache-managers/{cache_manager}/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None # Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' # Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 # Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC) When a site status changes from online to offline or vice-versa, the function on_event is invoked. If you want to use this script, you must specify the following variables: USERNAME and PASSWORD : The username and password of Data Grid user with permission to access the REST endpoint. POLL_INTERVAL_SEC : The number of seconds between polls. SERVERS : The list of Data Grid Servers at this site. The script only requires a single valid response but the list is provided to allow fail over. REMOTE_SITES : The list of remote sites to monitor on these servers. CACHES : The list of cache names to monitor. Additional resources REST API: Getting status of backup locations Monitoring cross-site replication with the Prometheus metrics Prometheus, and other monitoring systems, let you configure alerts to detect when a site status changes to offline . 
Tip Monitoring cross-site latency metrics can help you to discover potential issues. Prerequisites Enable cross-site replication. Procedure Configure Data Grid metrics. Configure alerting rules using the Prometheus metrics format. For the site status, use 1 for online and 0 for offline . For the expr field, use the following format: vendor_cache_manager_default_cache_<cache name>_x_site_admin_<site name>_status . In the following example, Prometheus alerts you when the NYC site goes offline for the cache named work or sessions . groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0 The following image shows an alert that the NYC site is offline for the cache named work . Figure 4.1. Prometheus Alert Additional resources Configuring Data Grid metrics Prometheus Alerting Overview Grafana Alerting Documentation OpenShift Managing Alerts
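As a quick sanity check before relying on the alerting rules above, you can confirm that the cross-site status gauge is actually exported by the Data Grid Server metrics endpoint. The following is a hedged sketch rather than part of the product procedure: the host, port, credentials, cache names, and site name are assumptions carried over from the earlier examples, so adjust them to your deployment.

# Query the metrics endpoint and filter for the cross-site status gauge
curl -s --digest -u admin:change_me http://127.0.0.1:11222/metrics | grep "x_site_admin_nyc_status"

# Illustrative output: one gauge per cache, 1 = online, 0 = offline
# vendor_cache_manager_default_cache_work_x_site_admin_nyc_status 1.0
# vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status 1.0

If the gauges appear here, the alert expressions shown above should evaluate as expected once Prometheus scrapes the same endpoint.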
[ "<infinispan> <cache-container statistics=\"true\"> <distributed-cache statistics=\"true\"/> <replicated-cache statistics=\"true\"/> </cache-container> </infinispan>", "GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder().cacheContainer().statistics(true); DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); Configuration builder = new ConfigurationBuilder(); builder.statistics().enable();", "<infinispan> <cache-container statistics=\"true\"> <metrics gauges=\"true\" histograms=\"true\" /> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"metrics\" : { \"gauges\" : \"true\", \"histograms\" : \"true\" } } } }", "infinispan: cacheContainer: statistics: \"true\" metrics: gauges: \"true\" histograms: \"true\"", "GlobalConfiguration globalConfig = new GlobalConfigurationBuilder() //Computes and collects statistics for the Cache Manager. .statistics().enable() //Exports collected statistics as gauge and histogram metrics. .metrics().gauges(true).histograms(true) .build();", "<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }", "infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\"", "GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain(\"org.mydomain\");", "bin/server.sh --jmx 9999", "bin/server.sh -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false", "<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\" mbean-server-lookup=\"com.example.MyMBeanServerLookup\"/> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\", \"mbean-server-lookup\" : \"com.example.MyMBeanServerLookup\" } } } }", "infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\" mbeanServerLookup: \"com.example.MyMBeanServerLookup\"", "GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain(\"org.mydomain\") .mBeanServerLookup(new com.acme.MyMBeanServerLookup());", "#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/cache-managers/{cache_manager}/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] 
#Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC)", "groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/statistics-jmx
Chapter 2. Using the Argo CD plugin
Chapter 2. Using the Argo CD plugin You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps. This plugin provides a visual overview of the application's status, deployment details, commit message, author of the commit, container image promoted to the environment, and deployment history. Prerequisites You have enabled the Argo CD plugin in Red Hat Developer Hub (RHDH). Procedure Select the Catalog tab and choose the component that you want to use. Select the CD tab to view insights into deployments managed by Argo CD. Select an appropriate card to view the deployment details (for example, commit message, author name, and deployment history). Click the link icon to open the deployment details in Argo CD. Select the Overview tab and navigate to the Deployment summary section to review the summary of your application's deployment across namespaces. Additionally, select an appropriate Argo CD app to open the deployment details in Argo CD, or select a commit ID from the Revision column to review the changes in GitLab or GitHub. Additional resources For more information on installing dynamic plugins, see Installing and viewing dynamic plugins .
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/using_dynamic_plugins/using-the-argo-cd-plugin
Chapter 1. Observability and Service Mesh
Chapter 1. Observability and Service Mesh Red Hat OpenShift Observability provides real-time visibility, monitoring, and analysis of various system metrics, logs, and events to help you quickly diagnose and troubleshoot issues before they impact systems or applications. Red Hat OpenShift Observability connects open-source observability tools and technologies to create a unified Observability solution. The components of Red Hat OpenShift Observability work together to help you collect, store, deliver, analyze, and visualize data. Red Hat OpenShift Service Mesh integrates with the following Red Hat OpenShift Observability components: OpenShift Monitoring Red Hat OpenShift distributed tracing platform OpenShift Service Mesh also integrates with: Kiali provided by Red Hat, a powerful console for visualizing and managing your service mesh. OpenShift Service Mesh Console (OSSMC) plugin, an OpenShift Container Platform console plugin that seamlessly integrates Kiali console features into your OpenShift console.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.0.0tp1/html/observability/ossm-observability-service-mesh-assembly
5.9.2.3. ISO 9660
5.9.2.3. ISO 9660 In 1987, the International Organization for Standardization (known as ISO) released standard 9660. ISO 9660 defines how files are represented on CD-ROMs. Red Hat Enterprise Linux system administrators will likely see ISO 9660-formatted data in two places: CD-ROMs Files (usually referred to as ISO images ) containing complete ISO 9660 file systems, meant to be written to CD-R or CD-RW media The basic ISO 9660 standard is rather limited in functionality, especially when compared with more modern file systems. File names are limited to a maximum of eight characters, and an extension of no more than three characters is permitted. However, various extensions to the standard have become popular over the years, among them: Rock Ridge -- Uses some fields undefined in ISO 9660 to provide support for features such as long mixed-case file names, symbolic links, and nested directories (in other words, directories that can themselves contain other directories) Joliet -- An extension of the ISO 9660 standard, developed by Microsoft to allow CD-ROMs to contain long file names, using the Unicode character set Red Hat Enterprise Linux is able to correctly interpret ISO 9660 file systems using both the Rock Ridge and Joliet extensions.
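As a brief, hedged illustration of the second case above, the following shell commands show how an ISO image might be inspected and mounted on a Red Hat Enterprise Linux system. The image file name is hypothetical, and the isoinfo utility may need to be installed separately (it ships with the CD recording tools).

# Print the primary volume descriptor; the output normally indicates whether
# Rock Ridge and/or Joliet extensions are present in the image
isoinfo -d -i example.iso

# Loop-mount the image read-only and browse its contents
mkdir -p /mnt/iso
mount -o loop,ro example.iso /mnt/iso
ls -l /mnt/iso

# Unmount when finished
umount /mnt/iso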
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-fs-iso9660
5.4.16.5. Changing the Number of Images in an Existing RAID1 Device
5.4.16.5. Changing the Number of Images in an Existing RAID1 Device You can change the number of images in an existing RAID1 array just as you can change the number of images in the earlier implementation of LVM mirroring, by using the lvconvert command to specify the number of additional metadata/data subvolume pairs to add or remove. For information on changing the volume configuration in the earlier implementation of LVM mirroring, see Section 5.4.3.4, "Changing Mirrored Volume Configuration" . When you add images to a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to add to the device. You can also optionally specify on which physical volumes the new metadata/data image pairs will reside. Metadata subvolumes (named *_rmeta_* ) always exist on the same physical devices as their data subvolume counterparts ( *_rimage_* ). The metadata/data subvolume pairs will not be created on the same physical volumes as those from another metadata/data subvolume pair in the RAID array (unless you specify --alloc anywhere ). The format for the command to add images to a RAID1 volume is as follows: For example, the following display shows the LVM device my_vg/my_lv which is a 2-way RAID1 array: The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device: When you add an image to a RAID1 array, you can specify which physical volumes to use for the image. The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device, specifying that the physical volume /dev/sdd1 be used for the array: To remove images from a RAID1 array, use the following command. When you remove images from a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to remove from the device. You can also optionally specify the physical volumes from which to remove the images. Additionally, when an image and its associated metadata subvolume are removed, any higher-numbered images will be shifted down to fill the slot. If you remove lv_rimage_1 from a 3-way RAID1 array that consists of lv_rimage_0 , lv_rimage_1 , and lv_rimage_2 , this results in a RAID1 array that consists of lv_rimage_0 and lv_rimage_1 . The subvolume lv_rimage_2 will be renamed and take over the empty slot, becoming lv_rimage_1 . The following example shows the layout of a 3-way RAID1 logical volume my_vg/my_lv . The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume. The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume, specifying the physical volume that contains the image to remove as /dev/sde1 .
[ "lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m + num_additional_images vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m 2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 56.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m 2 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m - num_fewer_images vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvconvert -m1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m1 my_vg/my_lv /dev/sde1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdf1(1) [my_lv_rimage_1] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdf1(0) [my_lv_rmeta_1] /dev/sdg1(0)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid-upconvert
A.4. kvm_stat
A.4. kvm_stat The kvm_stat command is a Python script which retrieves runtime statistics from the kvm kernel module. The kvm_stat command can be used to diagnose guest behavior visible to kvm , in particular, performance-related issues with guests. Currently, the reported statistics are for the entire system; the behavior of all running guests is reported. To run this script you need to install the qemu-kvm-tools package. For more information, see Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" . The kvm_stat command requires that the kvm kernel module is loaded and debugfs is mounted. If either of these features is not enabled, the command will output the required steps to enable debugfs or the kvm module. For example: Mount debugfs if required: kvm_stat Output The kvm_stat command outputs statistics for all guests and the host. The output is updated until the command is terminated (using Ctrl + c or the q key). Note that the output you see on your screen may differ. For an explanation of the output elements, click any of the terms to link to the definition. Explanation of variables: kvm_ack_irq - Number of interrupt controller (PIC/IOAPIC) interrupt acknowledgements. kvm_age_page - Number of page age iterations by memory management unit (MMU) notifiers. kvm_apic - Number of APIC register accesses. kvm_apic_accept_irq - Number of interrupts accepted into local APIC. kvm_apic_ipi - Number of inter-processor interrupts. kvm_async_pf_completed - Number of completions of asynchronous page faults. kvm_async_pf_doublefault - Number of asynchronous page fault halts. kvm_async_pf_not_present - Number of initializations of asynchronous page faults. kvm_async_pf_ready - Number of completions of asynchronous page faults. kvm_cpuid - Number of CPUID instructions executed. kvm_cr - Number of trapped and emulated control register (CR) accesses (CR0, CR3, CR4, CR8). kvm_emulate_insn - Number of emulated instructions. kvm_entry - Number of VM entries (entries into the guest). kvm_eoi - Number of Advanced Programmable Interrupt Controller (APIC) end of interrupt (EOI) notifications. kvm_exit - Number of VM-exits . kvm_exit (NAME) - Individual exits that are processor-specific. See your processor's documentation for more information. kvm_fpu - Number of KVM floating-point unit (FPU) reloads. kvm_hv_hypercall - Number of Hyper-V hypercalls. kvm_hypercall - Number of non-Hyper-V hypercalls. kvm_inj_exception - Number of exceptions injected into the guest. kvm_inj_virq - Number of interrupts injected into the guest. kvm_invlpga - Number of INVLPGA instructions intercepted. kvm_ioapic_set_irq - Number of interrupt level changes to the virtual IOAPIC controller. kvm_mmio - Number of emulated memory-mapped I/O (MMIO) operations. kvm_msi_set_irq - Number of message-signaled interrupts (MSI). kvm_msr - Number of model-specific register (MSR) accesses. kvm_nested_intercepts - Number of L1 ⇒ L2 nested SVM switches. kvm_nested_vmrun - Number of L1 ⇒ L2 nested SVM switches. kvm_nested_intr_vmexit - Number of nested VM-exit injections due to interrupt window. kvm_nested_vmexit - Number of exits to the hypervisor while executing a nested (L2) guest. kvm_nested_vmexit_inject - Number of L2 ⇒ L1 nested switches. kvm_page_fault - Number of page faults handled by the hypervisor. kvm_pic_set_irq - Number of interrupt level changes to the virtual programmable interrupt controller (PIC). kvm_pio - Number of emulated programmed I/O (PIO) operations. kvm_pv_eoi - Number of paravirtual end of interrupt (EOI) events.
kvm_set_irq - Number of interrupt level changes at the generic IRQ controller level (counts PIC, IOAPIC and MSI). kvm_skinit - Number of SVM SKINIT exits. kvm_track_tsc - Number of time stamp counter (TSC) writes. kvm_try_async_get_page - Number of asynchronous page fault attempts. kvm_update_master_clock - Number of pvclock masterclock updates. kvm_userspace_exit - Number of exits to user space. kvm_write_tsc_offset - Number of TSC offset writes. vcpu_match_mmio - Number of SPTE cached memory-mapped I/O (MMIO) hits. The output information from the kvm_stat command is exported by the KVM hypervisor as pseudo files which are located in the /sys/kernel/debug/tracing/events/kvm/ directory.
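As a hedged illustration of the pseudo files mentioned above, you can browse the trace event directory directly; the exact set of events and the debugfs mount point can vary by kernel version.

# List the KVM trace events that back the kvm_stat counters
ls /sys/kernel/debug/tracing/events/kvm/

# Inspect the record layout of a single event, for example kvm_exit
cat /sys/kernel/debug/tracing/events/kvm/kvm_exit/format

# Optionally enable the event and watch raw occurrences in the trace buffer
echo 1 > /sys/kernel/debug/tracing/events/kvm/kvm_exit/enable
cat /sys/kernel/debug/tracing/trace_pipe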
[ "kvm_stat Please mount debugfs ('mount -t debugfs debugfs /sys/kernel/debug') and ensure the kvm modules are loaded", "mount -t debugfs debugfs /sys/kernel/debug", "kvm_stat kvm statistics kvm_exit 17724 66 Individual exit reasons follow, see kvm_exit (NAME) for more information. kvm_exit(CLGI) 0 0 kvm_exit(CPUID) 0 0 kvm_exit(CR0_SEL_WRITE) 0 0 kvm_exit(EXCP_BASE) 0 0 kvm_exit(FERR_FREEZE) 0 0 kvm_exit(GDTR_READ) 0 0 kvm_exit(GDTR_WRITE) 0 0 kvm_exit(HLT) 11 11 kvm_exit(ICEBP) 0 0 kvm_exit(IDTR_READ) 0 0 kvm_exit(IDTR_WRITE) 0 0 kvm_exit(INIT) 0 0 kvm_exit(INTR) 0 0 kvm_exit(INVD) 0 0 kvm_exit(INVLPG) 0 0 kvm_exit(INVLPGA) 0 0 kvm_exit(IOIO) 0 0 kvm_exit(IRET) 0 0 kvm_exit(LDTR_READ) 0 0 kvm_exit(LDTR_WRITE) 0 0 kvm_exit(MONITOR) 0 0 kvm_exit(MSR) 40 40 kvm_exit(MWAIT) 0 0 kvm_exit(MWAIT_COND) 0 0 kvm_exit(NMI) 0 0 kvm_exit(NPF) 0 0 kvm_exit(PAUSE) 0 0 kvm_exit(POPF) 0 0 kvm_exit(PUSHF) 0 0 kvm_exit(RDPMC) 0 0 kvm_exit(RDTSC) 0 0 kvm_exit(RDTSCP) 0 0 kvm_exit(READ_CR0) 0 0 kvm_exit(READ_CR3) 0 0 kvm_exit(READ_CR4) 0 0 kvm_exit(READ_CR8) 0 0 kvm_exit(READ_DR0) 0 0 kvm_exit(READ_DR1) 0 0 kvm_exit(READ_DR2) 0 0 kvm_exit(READ_DR3) 0 0 kvm_exit(READ_DR4) 0 0 kvm_exit(READ_DR5) 0 0 kvm_exit(READ_DR6) 0 0 kvm_exit(READ_DR7) 0 0 kvm_exit(RSM) 0 0 kvm_exit(SHUTDOWN) 0 0 kvm_exit(SKINIT) 0 0 kvm_exit(SMI) 0 0 kvm_exit(STGI) 0 0 kvm_exit(SWINT) 0 0 kvm_exit(TASK_SWITCH) 0 0 kvm_exit(TR_READ) 0 0 kvm_exit(TR_WRITE) 0 0 kvm_exit(VINTR) 1 1 kvm_exit(VMLOAD) 0 0 kvm_exit(VMMCALL) 0 0 kvm_exit(VMRUN) 0 0 kvm_exit(VMSAVE) 0 0 kvm_exit(WBINVD) 0 0 kvm_exit(WRITE_CR0) 2 2 kvm_exit(WRITE_CR3) 0 0 kvm_exit(WRITE_CR4) 0 0 kvm_exit(WRITE_CR8) 0 0 kvm_exit(WRITE_DR0) 0 0 kvm_exit(WRITE_DR1) 0 0 kvm_exit(WRITE_DR2) 0 0 kvm_exit(WRITE_DR3) 0 0 kvm_exit(WRITE_DR4) 0 0 kvm_exit(WRITE_DR5) 0 0 kvm_exit(WRITE_DR6) 0 0 kvm_exit(WRITE_DR7) 0 0 kvm_entry 17724 66 kvm_apic 13935 51 kvm_emulate_insn 13924 51 kvm_mmio 13897 50 varl-kvm_eoi 3222 12 kvm_inj_virq 3222 12 kvm_apic_accept_irq 3222 12 kvm_pv_eoi 3184 12 kvm_fpu 376 2 kvm_cr 177 1 kvm_apic_ipi 278 1 kvm_msi_set_irq 295 0 kvm_pio 79 0 kvm_userspace_exit 52 0 kvm_set_irq 50 0 kvm_pic_set_irq 50 0 kvm_ioapic_set_irq 50 0 kvm_ack_irq 25 0 kvm_cpuid 90 0 kvm_msr 12 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-kvm_stat
10.5.39. ServerSignature
10.5.39. ServerSignature The ServerSignature directive adds a line containing the Apache HTTP Server version and the ServerName to any server-generated documents, such as error messages sent back to clients. ServerSignature is set to on by default. It can also be set to off or to EMail . Setting it to EMail adds a mailto:ServerAdmin HTML tag to the signature line of auto-generated responses.
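The following is a minimal sketch of how the directive might be checked and applied; the configuration path and the email address are assumptions, not values from this guide.

# Review the current settings in the main configuration file
grep -nE "ServerSignature|ServerAdmin" /etc/httpd/conf/httpd.conf

# Directives as they might appear in httpd.conf:
#   ServerAdmin webmaster@example.com
#   ServerSignature EMail

# Validate the configuration and reload the server to apply the change
apachectl configtest && service httpd reload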
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-serversignature
Chapter 1. Architecture
Chapter 1. Architecture 1.1. Terminology VM or Process - a JBoss EAP instance running JBoss Data Virtualization. Host - a machine that is "hosting" one or more VMs. Service - a subsystem running in a VM (often in many VMs) and providing a related set of functionality. In addition to these main components, the service platform provides a core set of services available to applications built on top of the service platform. These services are: Session - the Session service manages active session information. Buffer Manager - the Buffer Manager service provides access to data management for intermediate results. See Section 1.2.2, "Buffer Management" . Transaction - the Transaction service manages global, local, and request-scoped transactions. See Section 5.1, "Transaction Support" for more information.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-architecture
Chapter 24. credential
Chapter 24. credential This chapter describes the commands under the credential command. 24.1. credential create Create new credential Usage: Table 24.1. Positional arguments Value Summary <user> User that owns the credential (name or id) <data> New credential data Table 24.2. Command arguments Value Summary -h, --help Show this help message and exit --type <type> New credential type: cert, ec2, totp and so on --project <project> Project which limits the scope of the credential (name or ID) Table 24.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 24.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 24.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 24.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 24.2. credential delete Delete credential(s) Usage: Table 24.7. Positional arguments Value Summary <credential-id> Id of credential(s) to delete Table 24.8. Command arguments Value Summary -h, --help Show this help message and exit 24.3. credential list List credentials Usage: Table 24.9. Command arguments Value Summary -h, --help Show this help message and exit --user <user> Filter credentials by <user> (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --type <type> Filter credentials by type: cert, ec2, totp and so on Table 24.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 24.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 24.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 24.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 24.4. credential set Set credential properties Usage: Table 24.14. Positional arguments Value Summary <credential-id> Id of credential to change Table 24.15. 
Command arguments Value Summary -h, --help Show this help message and exit --user <user> User that owns the credential (name or id) --type <type> New credential type: cert, ec2, totp and so on --data <data> New credential data --project <project> Project which limits the scope of the credential (name or ID) 24.5. credential show Display credential details Usage: Table 24.16. Positional arguments Value Summary <credential-id> Id of credential to display Table 24.17. Command arguments Value Summary -h, --help Show this help message and exit Table 24.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 24.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 24.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 24.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
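The following is a hedged usage sketch that strings the subcommands above together; the user, project, and credential data are hypothetical placeholders.

# Create an EC2-type credential for a user, scoped to a project
openstack credential create --type ec2 --project demo-project demo-user '{"access": "ACCESS_KEY", "secret": "SECRET_KEY"}'

# List credentials for that user and display one of them in detail
openstack credential list --user demo-user
openstack credential show <credential-id>

# Update the stored data, then delete the credential when it is no longer needed
openstack credential set --user demo-user --type ec2 --data '{"access": "NEW_KEY", "secret": "NEW_SECRET"}' <credential-id>
openstack credential delete <credential-id>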
[ "openstack credential create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--type <type>] [--project <project>] <user> <data>", "openstack credential delete [-h] <credential-id> [<credential-id> ...]", "openstack credential list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>] [--type <type>]", "openstack credential set [-h] --user <user> --type <type> --data <data> [--project <project>] <credential-id>", "openstack credential show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <credential-id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/credential
Chapter 4. Important update on odo
Chapter 4. Important update on odo Red Hat does not provide information about odo on the OpenShift Container Platform documentation site. See the documentation maintained by Red Hat and the upstream community for information related to odo . Important For the materials maintained by the upstream community, Red Hat provides support under Cooperative Community Support .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/developer-cli-odo
Chapter 3. Using odf-cli command
Chapter 3. Using odf-cli command The odf-cli command and its subcommands help to reduce repetitive tasks and provide a better experience. You can download the odf-cli tool from the customer portal . Subcommands of odf get command odf get recovery-profile Displays the recovery-profile value set for the OSD. By default, an empty value is displayed if the value is not set using the odf set recovery-profile command. After the value is set, the appropriate value is displayed. Example: odf get health Checks the health of the Ceph cluster and common configuration issues. This command checks for the following: At least three mon pods are running on different nodes Mon quorum and Ceph health details At least three OSD pods are running on different nodes The 'Running' status of all pods Placement group status At least one MGR pod is running Example: odf get dr-health In mirroring-enabled clusters, fetches the connection status of a cluster from another cluster. The cephblockpool is queried with mirroring enabled; if it is not found, the command exits with relevant logs. Example: odf get dr-prereq Checks and fetches the status of all the prerequisites to enable Disaster Recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare the current cluster configuration with the peer cluster configuration. Based on the comparison results, the status of the prerequisites is shown. Example: Subcommands of odf operator command odf operator rook set Sets the provided property value in the rook-ceph-operator-config configmap. Example: where ROOK_LOG_LEVEL can be DEBUG , INFO , or WARNING odf operator rook restart Restarts the Rook-Ceph operator. Example: odf restore mon-quorum Restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum needs to be restored to a remaining good mon in order to bring the Ceph cluster up again. Example: odf restore deleted <crd> Restores the deleted Rook CR when there is still data left for the components CephClusters, CephFilesystems, and CephBlockPools. Generally, when a Rook CR is deleted and there is leftover data, the Rook operator does not delete the CR, to ensure data is not lost, and does not remove the finalizer on the CR. As a result, the CR is stuck in the Deleting state and cluster health is not ensured. Upgrades are blocked too. This command helps to repair the CR without cluster downtime. Note A warning message seeking confirmation to restore appears. After confirming, you need to enter continue to start the operator and expand to the full mon-quorum again. Example: 3.1. Configuring debug verbosity of Ceph components You can configure the verbosity of Ceph components by enabling or increasing the log debugging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values . Procedure Set the log level for Ceph daemons: where ceph-subsystem can be osd , mds , or mon . For example,
[ "odf get recovery-profile high_recovery_ops", "odf get health Info: Checking if at least three mon pods are running on different nodes rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal Info: Checking mon quorum and ceph health details Info: HEALTH_OK [...]", "odf get dr-health Info: fetching the cephblockpools with mirroring enabled Info: found \"ocs-storagecluster-cephblockpool\" cephblockpool with mirroring enabled Info: running ceph status from peer cluster Info: cluster: id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f health: HEALTH_OK [...]", "odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled. odf get mon-endpoints Displays the mon endpoints odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled.", "odf operator rook set ROOK_LOG_LEVEL DEBUG configmap/rook-ceph-operator-config patched", "odf operator rook restart deployment.apps/rook-ceph-operator restarted", "odf restore mon-quorum c", "odf restore deleted cephclusters Info: Detecting which resources to restore for crd \"cephclusters\" Info: Restoring CR my-cluster Warning: The resource my-cluster was found deleted. Do you want to restore it? yes | no [...]", "odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>", "odf set ceph log-level osd crush 20", "odf set ceph log-level mds crush 20", "odf set ceph log-level mon crush 20" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/troubleshooting_openshift_data_foundation/odf-cli-command_rhodf
Chapter 70. Kubernetes Resources Quota
Chapter 70. Kubernetes Resources Quota Since Camel 2.17 Only producer is supported The Kubernetes Resources Quota component is one of the Kubernetes Components which provides a producer to execute Kubernetes Resource Quota operations. 70.1. Dependencies When using kubernetes-resources-quota with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 70.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 70.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 70.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 70.3. Component Options The Kubernetes Resources Quota component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 70.4. 
Endpoint Options The Kubernetes Resources Quota endpoint is configured using URI syntax: with the following path and query parameters: 70.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 70.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 70.5. Message Headers The Kubernetes Resources Quota component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesResourcesQuotaLabels (producer) Constant: KUBERNETES_RESOURCES_QUOTA_LABELS The resource quota labels. Map CamelKubernetesResourcesQuotaName (producer) Constant: KUBERNETES_RESOURCES_QUOTA_NAME The resource quota name. String CamelKubernetesResourceQuotaSpec (producer) Constant: KUBERNETES_RESOURCE_QUOTA_SPEC The spec for a resource quota. ResourceQuotaSpec 70.6. Supported producer operation listResourcesQuota listResourcesQuotaByLabels getResourcesQuota createResourcesQuota updateResourceQuota deleteResourcesQuota 70.7. Kubernetes Resource Quota Producer Examples listResourcesQuota: this operation list the Resource Quotas on a kubernetes cluster. from("direct:list"). toF("kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuota"). to("mock:result"); This operation returns a List of Resource Quotas from your cluster. 
listResourcesQuotaByLabels: this operation list the Resource Quotas by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_RESOURCES_QUOTA_LABELS, labels); } }); toF("kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuotaByLabels"). to("mock:result"); This operation returns a List of Resource Quotas from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 70.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
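These options are typically set in the Spring Boot configuration file of the application. The following application.yaml fragment is a minimal sketch only: the property keys are taken from the tables above, while the values, and the reference to a bean named kubernetesClient, are assumptions made for the example.

# Illustrative Camel Spring Boot configuration for two of the Kubernetes components.
# The "#kubernetesClient" value assumes an io.fabric8.kubernetes.client.KubernetesClient
# bean with that name is registered elsewhere in the application.
camel:
  component:
    kubernetes-pods:
      enabled: true
      lazy-start-producer: true              # defer producer startup until the first message
      kubernetes-client: "#kubernetesClient" # reuse an existing client bean
    kubernetes-resources-quota:
      autowired-enabled: true                # let Camel autowire a single matching client bean

The same keys can equally be written in application.properties form, for example camel.component.kubernetes-pods.lazy-start-producer=true.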
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-resources-quota:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuota\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_RESOURCES_QUOTA_LABELS, labels); } }); toF(\"kubernetes-resources-quota:///?kubernetesClient=#kubernetesClient&operation=listResourcesQuotaByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-resources-quota-component-starter
Chapter 6. Assessing system-upgrade readiness with the pre-upgrade analysis task
Chapter 6. Assessing system-upgrade readiness with the pre-upgrade analysis task This task is a component of the in-place upgrade capability for Red Hat Enterprise Linux using the Leapp tool. For more information about the Leapp tool and using it to check upgrade readiness manually, see Upgrading from RHEL 8 to RHEL 9, Instructions for an in-place upgrade from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 . The pre-upgrade analysis task checks the readiness of systems to upgrade from Red Hat Enterprise Linux (RHEL) 8 to RHEL 9. If Insights detects upgrade-blocking issues, you can see more information about the issues, including steps to resolve them, in Insights for Red Hat Enterprise Linux on the Red Hat Hybrid Cloud Console (Console). The pre-upgrade analysis task can run on any RHEL 8 system that is connected to Red Hat Insights using the remote host configuration (rhc) solution. You can verify that your system is connected to Insights by locating it in the Insights system inventory on the Console. If the system is not in the inventory, see Remote Host Configuration and Management documentation for information about connecting systems to Insights. You can also run the Leapp utility manually on systems. When an Insights-connected system has a Leapp report in its archive, whether the utility was run manually or as an Insights task, you can see results from the report in Insights. 6.1. Requirements and prerequisites The following requirements and prerequisites apply to the pre-upgrade analysis task: This guide assumes that you have read and understood the in-place upgrade documentation before attempting to perform any upgrade-related action using Red Hat Insights. Your systems must be eligible for in-place upgrade. See in-place upgrade documentation for system requirements and limitations. Your RHEL system must be connected to Red Hat Insights using the remote host configuration solution in order to execute Insights tasks and other remediation playbooks from the Insights for Red Hat Enterprise Linux UI. For more information, see Remote Host Configuration and Management documentation. You are logged into the console.redhat.com with Tasks administrator privileges granted in User Access. Note All members of the Default admin access group have Tasks administrator access. If you are not a member of a User Access group with this role, you will not see any tasks on the Tasks page. For more information about User Access, including how to request greater access to Insights features, see User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP . 6.2. Running the pre-upgrade analysis task Use the following procedure to analyze the readiness of RHEL systems for upgrading from RHEL 8 to RHEL 9. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to the Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks . Locate the Pre-upgrade analysis for in-place upgrade from RHEL 8 task. Note If you can not see any tasks on the page, you might not have adequate User Access. See User Access Configuration Guide for Role-based Access Control (RBAC) with FedRAMP for more information. Optional: You can view details of the pre-upgrade analysis utility by clicking Download preview of playbook . Click Run task. On the Pre-upgrade analysis for in-place upgrade from RHEL 8 popup, select systems on which to run the pre-upgrade analysis by checking the box to each system. 
Note By default, the list of systems is filtered to only display systems that are eligible to run the task. You can change or add filters to expand the parameters of included systems from your inventory. Click Execute task to run the task on the selected systems. Verification Use the following procedure to verify that a task has been executed successfully. Go to the Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks page and click the Activity tab. The status of tasks, whether they are in progress or have been completed, can be viewed here. Locate your task based on the run date and time. You can see whether the task completed or failed. 6.3. Reviewing the pre-upgrade analysis task report After executing the pre-upgrade analysis task on systems, you can review specific details and upgrade-inhibiting recommendations for each system. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to the Red Hat Hybrid Cloud Console > Red Hat Insights > RHEL > Automation Toolkit > Tasks and click the Activity tab. Click on the task name to view the results of a task. Note the run date and time so that you select the correct report. Click on the caret next to the system name to view a list of alerts for that system. View information about upgrade-inhibiting alerts by clicking on the caret next to an alert with a white exclamation mark inside a red dot, accompanied by red alert text. Note In addition to the inhibitor alerts, you might also see lower severity and informative alerts that do not require remediation in order for the upgrade to proceed. Review the report thoroughly. While some recommendations may be informational, it is crucial to take action if you encounter any errors or warnings. In the event of such issues, address them on your systems and re-run the pre-upgrade task to assess the impact of your remediation efforts. Note Certain errors are classified as official inhibitors, and proceeding with the upgrade is not possible until these are remediated. 6.4. Viewing upgrade-inhibiting recommendations After running the pre-upgrade analysis task, or manually running the Leapp tool on individual systems, you can view a list of recommendations for upgrade-inhibiting issues in your infrastructure. Using the list of pre-upgrade recommendations, you can view the following information about each recommendation: Recommendation details Affected-system information Total risk and impact insights Risk to system availability during resolution actions Prerequisites Any user with default access (the default for every user) can view the list of in-place upgrade recommendations. Procedure Go to Red Hat Insights > Operations > Advisor > Topics > In-place upgrade to view recommendations affecting the success of in-place upgrades. Note Currently, the in-place upgrade recommendations list only shows recommendations that Insights has identified as upgrade inhibitors. All in-place upgrade recommendations, including non-inhibitors, can be seen in the detailed view of each executed task. 6.5. Remediating upgrade-inhibiting recommendations You can use the in-place upgrade recommendations list as a basis for remediating upgrade-inhibiting issues on systems in your infrastructure. Some recommendations have a playbook available for automating the execution of remediations directly from the Insights for Red Hat Enterprise Linux UI.
However, some recommendations require manual resolutions, the steps of which are customized for the system and recommendation pair, and are provided with the recommendation. You can tell which recommendations have playbooks available by viewing the Remediation column in the list of recommendations. For more information about Insights remediations, see the Red Hat Insights Remediations Guide with FedRAMP . 6.5.1. Using Insights remediation playbooks to resolve RHEL upgrade-inhibiting recommendations You can automate the remediation of upgrade-inhibiting recommendations using Ansible Playbooks that you create in Insights. Use the following procedure to locate your inhibitor issues and select recommendations and systems to remediate. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to Red Hat Insights > Operations > Advisor > Topics > In-place upgrade to view recommendations affecting the success of in-place upgrades. Choose a recommendation with the word "Playbook" in the Remediation column, which indicates issues that have a playbook available. For each recommendation with an available playbook, take the following actions: Click on the recommendation to see more information about the issue, including the systems that are affected. Check the box next to each system you want to add to the playbook and click Remediate . In the popup, select Create a new playbook and enter a name for the playbook, then click Next . Optional: Alternatively, you can add the resolution for the selected systems to an existing playbook. Review the included systems and click Next . Review the included recommendation. You can click the caret next to the recommendation name to see included systems. Important Some resolutions require the system to reboot. Auto reboot is not enabled by default but you can enable it by clicking Turn on autoreboot above the list of recommendations. Click Submit . The final popup view confirms that the playbook was created successfully. You can select to return to the application or open the playbook. Find the playbook in Automation Toolkit > Remediations and click on it to open it. The playbook includes a list of actions. Select the actions to execute by checking the box next to each one. Click Execute playbook to run the playbook on the specified systems. On the popup, click on the Execute playbook on systems button. The playbook runs on those systems. 6.5.2. Remediating RHEL upgrade-inhibiting recommendations manually You can remediate upgrade-inhibiting recommendations by manually applying resolution steps on affected systems. The following procedure shows how to find the resolution steps for a system and recommendation pairing. Prerequisites Prerequisites are listed in the Requirements and prerequisites section of this chapter. Procedure Go to Red Hat Insights > Operations > Advisor > Topics > In-place upgrade to view recommendations affecting the success of in-place upgrades. Choose a recommendation with the word "Manual" in the Remediation column, which indicates that the issue requires manual remediation. For each recommendation requiring a manual remediation, take the following actions: Click on the recommendation to open the recommendation-details page, which shows affected systems. Click on a system name. Pick a recommendation to resolve manually and click on the caret to view the Steps to resolve the recommendation on the system. Perform the resolution steps on the system. Repeat steps b, c, and d for each affected system.
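After applying manual resolutions, you can re-run the pre-upgrade analysis either as an Insights task, as described earlier in this chapter, or by running the Leapp utility directly on the system. The following Ansible playbook is a minimal, hypothetical sketch of the direct approach; the rhel8_hosts inventory group is an assumption for the example, and this is not the playbook that the Insights task itself executes.

# Hypothetical sketch: run the Leapp pre-upgrade analysis directly on RHEL 8 hosts.
- name: Run Leapp pre-upgrade analysis
  hosts: rhel8_hosts          # assumed inventory group of RHEL 8 systems
  become: true
  tasks:
    - name: Install the Leapp upgrade tooling
      ansible.builtin.dnf:
        name: leapp-upgrade
        state: present

    - name: Run the pre-upgrade analysis
      ansible.builtin.command: leapp preupgrade
      register: leapp_result
      failed_when: false      # inhibitors are reported rather than failing the play

    - name: Point to the generated report
      ansible.builtin.debug:
        msg: "Review /var/log/leapp/leapp-report.txt for inhibitors and remediation hints"

When the system is connected through rhc, the results of such a run can also appear in Insights, as noted earlier in this chapter.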
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks_with_fedramp/pre-upgrade-analysis-task_overview-tasks
Chapter 3. PXE Network Installations
Chapter 3. PXE Network Installations Red Hat Enterprise Linux allows for installation over a network using the NFS, FTP, or HTTP protocols. A network installation can be started from a boot CD-ROM, a bootable flash memory drive, or by using the askmethod boot option with the Red Hat Enterprise Linux CD #1. Alternatively, if the system to be installed contains a network interface card (NIC) with Preboot Execution Environment (PXE) support, it can be configured to boot from files on another networked system rather than from local media such as a CD-ROM. For a PXE network installation, the client's NIC with PXE support sends out a broadcast request for DHCP information. The DHCP server provides the client with an IP address, other network information such as the name server, the IP address or hostname of the tftp server (which provides the files necessary to start the installation program), and the location of the files on the tftp server. This is possible because of PXELINUX, which is part of the syslinux package. The following steps must be performed to prepare for a PXE installation: Configure the network (NFS, FTP, HTTP) server to export the installation tree. Configure the files on the tftp server necessary for PXE booting. Configure which hosts are allowed to boot from the PXE configuration. Start the tftp service. Configure DHCP. Boot the client, and start the installation. 3.1. Setting up the Network Server First, configure an NFS, FTP, or HTTP server to export the entire installation tree for the version and variant of Red Hat Enterprise Linux to be installed. Refer to the section Preparing for a Network Installation in the Installation Guide for detailed instructions.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/PXE_Network_Installations
Chapter 11. Provisioning virtual machines on OpenShift Virtualization
Chapter 11. Provisioning virtual machines on OpenShift Virtualization OpenShift Virtualization addresses the needs of development teams that have adopted or want to adopt Red Hat OpenShift Container Platform but possess existing virtual machine (VM) workloads that cannot be easily containerized. This technology provides a unified development platform where developers can build, modify, and deploy applications residing in application containers and VMs in a shared environment. These capabilities support rapid application modernization across the open hybrid cloud. You can create a compute resource for OpenShift Virtualization so that you can provision and manage virtual machines in OpenShift Container Platform by using Satellite. Note that template provisioning is not supported for this release. Important The OpenShift Virtualization compute resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . You must have cluster-admin permissions for the OpenShift Container Platform cluster. A Capsule Server managing a network on the OpenShift Container Platform cluster. Ensure that no other DHCP services run on this network to avoid conflicts with Capsule Server. For more information about network service configuration for Capsule Servers, see Configuring Networking in Provisioning hosts . Additional resources For a list of permissions a non-admin user requires to provision hosts, see Appendix E, Permissions required to provision hosts . You can configure Satellite to remove the associated virtual machine when you delete a host. For more information, see Section 2.22, "Removing a virtual machine upon host deletion" . 11.1. Adding an OpenShift Virtualization connection to Satellite Server Use this procedure to add OpenShift Virtualization as a compute resource in Satellite. Procedure Enter the following satellite-installer command to enable the OpenShift Virtualization plugin for Satellite: Obtain a token to use for HTTP and HTTPs authentication: Log in to the OpenShift Container Platform cluster and list the secrets that contain tokens: Obtain the token for your secret: Record the token to use later in this procedure. In the Satellite web UI, navigate to Infrastructure > Compute Resources , and click Create Compute Resource . In the Name field, enter a name for the new compute resource. From the Provider list, select OpenShift Virtualization . In the Description field, enter a description for the compute resource. In the Hostname field, enter the FQDN, hostname, or IP address of the OpenShift Container Platform cluster. In the API Port field, enter the port number that you want to use for provisioning requests from Satellite to OpenShift Virtualization. 
In the Namespace field, enter the namespace of the OpenShift Container Platform cluster in which Satellite provisions virtual machines. In the Token field, enter the bearer token for HTTP and HTTPS authentication. Optional: In the X509 Certification Authorities field, enter a certificate to enable client certificate authentication for API server calls.
[ "satellite-installer --enable-foreman-plugin-kubevirt", "oc get secrets", "oc get secrets MY_SECRET -o jsonpath='{.data.token}' | base64 -d | xargs" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/provisioning_virtual_machines_kubevirt_kubevirt-provisioning
Chapter 2. Storage classes
Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. Creating storage classes and pools You can create a storage class using an existing pool, or you can create a new pool for the storage class while creating it. Prerequisites Ensure that you are logged in to the OpenShift Container Platform web console and that the OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForFirstConsumer as the default option. If you choose the Immediate option, then the PV is created immediately when the PVC is created. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see the Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when the data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select the Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption-enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 2.2.1.
Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 2.2.1.1. Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), ensure that a policy with a token exists and that the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads Secrets . Click Create Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 2.2.1.2. Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 2.2.1.3.
Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your Openshift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint. Use the information collected in the steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. The sample yaml given below can be used to update or create the csi-kms-connection-detail ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. 2.2.2. 
Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must configure access to KMS using one of the following: Using vaulttokens : Configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com , which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list, or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop-down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Provider options and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local .
Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload the Certificate file in .PEM format. The certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the ConfigMap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage Storage Classes . Click the Storage class name YAML tab. Capture the encryptionKMSID being used by the storage class. Example: On the OpenShift Web Console, navigate to Workloads ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Example: Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 2.2.2.1. Overriding Vault connection details using tenant ConfigMap The Vault connection details can be reconfigured per tenant by creating a ConfigMap in the OpenShift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overridden for the given tenant namespace can be specified under the data section as shown below: After the yaml is edited, click on Create . 2.3. Storage class with single replica You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level. Warning Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication.
If any OSDs are lost, this feature requires very disruptive steps to recover. All applications can lose their data, and must be recreated in case of a failed OSD. Procedure Enable the single replica feature using the following command: Verify storagecluster is in Ready state: Example output: New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state: Example output: Verify new storage classes have been created: Example output: New OSD pods are created; 3 osd-prepare pods and 3 additional pods. Verify new OSD pods are in Running state: Example output: 2.3.1. Recovering after OSD lost from single replica When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss. Procedure Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the domain where the failing OSD is. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the next steps. If you do not know where the failing OSD is, skip to step 2. Example output: Copy the corresponding failure domain name for use in the next steps, then skip to step 4. Find the OSD pod that is in Error state or CrashLoopBackOff state to find the failing OSD: Identify the replica-1 pool that had the failed OSD. Identify the node where the failed OSD was running: Identify the failureDomainLabel for the node where the failed OSD was running: The output shows the replica-1 pool name whose OSD is failing, for example: where USDfailure_domain_value is the failureDomainName. Delete the replica-1 pool. Connect to the toolbox pod: Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example: Replace replica1-pool-name with the failure domain name identified earlier. Purge the failing OSD by following the steps in section "Replacing operational or failed storage devices" based on your platform in the Replacing devices guide. Restart the rook-ceph operator: Recreate any affected applications in that availability zone to start using the new pool with the same name.
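For reference, the token Secret created through the web console in the vaulttokens procedure earlier in this chapter can also be expressed as YAML. In the following sketch, only the Secret name ceph-csi-kms-token and the token key come from the documented procedure; the namespace and token value are placeholders.

# Equivalent of the "Create Key/value secret" console steps for vaulttokens.
apiVersion: v1
kind: Secret
metadata:
  name: ceph-csi-kms-token
  namespace: my-app              # placeholder tenant namespace that owns the encrypted PVCs
stringData:
  token: "s.xxxxxxxxxxxxxxxx"    # placeholder token issued by the Vault server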
[ "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF", "apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>", "apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details", "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] 
\"vaultBackend\": \"kv\" }", "--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephNonResilientPools/enable\", \"value\": true }]'", "oc get storagecluster", "NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 10m Ready 2024-02-05T13:56:15Z 4.17.0", "oc get cephblockpools", "NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-east-1a Ready ocs-storagecluster-cephblockpool-us-east-1b Ready ocs-storagecluster-cephblockpool-us-east-1c Ready", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 104m gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m ocs-storagecluster-ceph-non-resilient-rbd openshift-storage.rbd.csi.ceph.com Delete WaitForFirstConsumer true 46m ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 52m ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 52m openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 50m", "oc get pods | grep osd", "rook-ceph-osd-0-6dc76777bc-snhnm 2/2 Running 0 9m50s rook-ceph-osd-1-768bdfdc4-h5n7k 2/2 Running 0 9m48s rook-ceph-osd-2-69878645c4-bkdlq 2/2 Running 0 9m37s rook-ceph-osd-3-64c44d7d76-zfxq9 2/2 Running 0 5m23s rook-ceph-osd-4-654445b78f-nsgjb 2/2 Running 0 5m23s rook-ceph-osd-5-5775949f57-vz6jp 2/2 Running 0 5m22s rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv 0/1 Completed 0 10m", "oc get cephblockpools", "NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-south-1 Ready ocs-storagecluster-cephblockpool-us-south-2 Ready ocs-storagecluster-cephblockpool-us-south-3 Ready", "oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\\|Error'", "failed_osd_id=0 #replace with the ID of the failed OSD", "failure_domain_label=USD(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print USD2}')", "failure_domain_value=USD\"(oc get pods USDfailed_osd_id -oyaml |grep topology-location-zone |awk '{print USD2}')\"", "replica1-pool-name= \"ocs-storagecluster-cephblockpool-USDfailure_domain_value\"", "toolbox=USD(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}') rsh USDtoolbox -n openshift-storage", "ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it", "oc delete pod -l rook-ceph-operator -n openshift-storage" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/storage-classes_rhodf
Chapter 12. Integrating with the Red Hat build of Debezium for change data capture
Chapter 12. Integrating with the Red Hat build of Debezium for change data capture The Red Hat build of Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate the Red Hat build of Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium has multiple uses, including: Data replication Updating caches and search indexes Simplifying monolithic applications Data integration Enabling streaming queries To capture database changes, deploy Kafka Connect with a Debezium database connector. You configure a KafkaConnector resource to define the connector instance. For more information on deploying the Red Hat build of Debezium with AMQ Streams, refer to the product documentation . The documentation includes a Getting Started with Debezium guide that guides you through the process of setting up the services and connector required to view change event records for database updates.
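As an illustration only, and not a substitute for the Debezium product documentation, a KafkaConnector resource for a Debezium MySQL connector might look like the following sketch. The connector class is the standard Debezium MySQL connector class; the Kafka Connect cluster name, database coordinates, credentials, and topic prefix are placeholder assumptions.

cat <<EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: inventory-connector                  # hypothetical connector name
  labels:
    strimzi.io/cluster: my-connect-cluster   # assumed name of the KafkaConnect resource
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    database.hostname: mysql                 # placeholder database service name
    database.port: 3306
    database.user: debezium                  # placeholder credentials; use a Secret in practice
    database.password: dbz
    database.server.id: 184054
    topic.prefix: inventory                  # prefix for the Kafka topics that receive change events
    table.include.list: inventory.customers
    schema.history.internal.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    schema.history.internal.kafka.topic: schema-changes.inventory
EOF

The Kafka Connect cluster that runs this connector must have the Debezium connector plugin available and the strimzi.io/use-connector-resources annotation enabled so that KafkaConnector resources are processed.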
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/ref-using-debezium-str
Registry
Registry OpenShift Container Platform 4.13 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team
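As a quick illustration of the kind of configuration this guide covers, the following sketch, condensed from steps described later in the guide, exposes the integrated image registry with the default route and logs in to it with podman; it assumes a user with permission to patch the Image Registry Operator configuration.

oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"defaultRoute":true}}'

HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

# Log in with the current session token; --tls-verify=false skips trusting the router certificate
podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $HOST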
[ "podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>", "podman pull registry.redhat.io/<repository_name>", "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule", "topologySpreadConstraints: - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule - labelSelector: matchLabels: docker-registry: default maxSkew: 1 topologyKey: node-role.kubernetes.io/worker whenUnsatisfiable: DoNotSchedule", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local", "oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USERNAME=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: swift: container: <container-id>", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer 
allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", "config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | 
jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p 
'{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p 
'{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | 
jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: rgwbucket namespace: openshift-storage 1 spec: storageClassName: ocs-storagecluster-ceph-rgw generateBucketName: rgwbucket EOF", "bucket_name=USD(oc get obc -n openshift-storage rgwbucket -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage rgwbucket -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route ocs-storagecluster-cephobjectstore -n openshift-storage --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p 
'{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: noobaatest namespace: openshift-storage 1 spec: storageClassName: openshift-storage.noobaa.io generateBucketName: noobaatest EOF", "bucket_name=USD(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')", "AWS_ACCESS_KEY_ID=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_ACCESS_KEY_ID:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "AWS_SECRET_ACCESS_KEY=USD(oc get secret -n openshift-storage noobaatest -o yaml | grep -w \"AWS_SECRET_ACCESS_KEY:\" | head -n1 | awk '{print USD2}' | base64 --decode)", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=USD{AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=USD{AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry", "route_host=USD(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"s3\":{\"bucket\":'\\\"USD{bucket_name}\\\"',\"region\":\"us-east-1\",\"regionEndpoint\":'\\\"https://USD{route_host}\\\"',\"virtualHostedStyle\":false,\"encrypt\":false,\"trustedCA\":{\"name\":\"image-registry-s3-bundle\"}}}}}' --type=merge", "cat <<EOF | oc apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: registry-storage-pvc namespace: openshift-image-registry spec: accessModes: - ReadWriteMany resources: requests: storage: 100Gi storageClassName: ocs-storagecluster-cephfs EOF", "oc patch config.image/cluster -p '{\"spec\":{\"managementState\":\"Managed\",\"replicas\":2,\"storage\":{\"managementState\":\"Unmanaged\",\"pvc\":{\"claim\":\"registry-storage-pvc\"}}}}' --type=merge", "oc policy add-role-to-user registry-viewer <user_name>", "oc policy add-role-to-user registry-editor <user_name>", "oc get nodes", "oc debug nodes/<node_name>", "sh-4.2# chroot /host", "sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443", "sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "sh-4.2# podman pull <name.io>/<image>", "sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>", "sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>", "oc get pods -n openshift-image-registry", "NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 1/1 Running 0 74m", "oc logs deployments/image-registry -n openshift-image-registry", 
"2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF", "oc adm policy add-cluster-role-to-user prometheus-scraper <username>", "openshift: oc whoami -t", "curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20", "HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. 
TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc extract secret/USD(oc get ingresscontroller -n openshift-ingress-operator default -o json | jq '.spec.defaultCertificate.name // \"router-certs-default\"' -r) -n openshift-ingress --confirm", "sudo mv tls.crt /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust enable", "sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1", "oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/registry/index
Chapter 2. Using the Software Development Kit
Chapter 2. Using the Software Development Kit This chapter outlines several examples of how to use the Java Software Development Kit. All examples in this chapter use version 3 of the software development kit unless otherwise noted. 2.1. Connecting to the Red Hat Enterprise Virtualization Manager in Version 3 In V3 of the Java software development kit, the Api class is the main class you use to connect to and manipulate objects in a Red Hat Enterprise Virtualization environment. To declare an instance of this class, you must declare an instance of the ApiBuilder class, pass the required arguments to this instance using builder methods, then call the build method on the instance. The build method returns an instance of the Api class that you can then assign to a variable and use to perform subsequent actions. The following is an example of a simple Java SE program that creates a connection with a Red Hat Enterprise Virtualization environment, then gracefully shuts down and closes the connection: Example 2.1. Connecting to the Red Hat Enterprise Virtualization Manager package rhevm; import org.ovirt.engine.sdk.Api; import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; import org.ovirt.engine.sdk.ApiBuilder; import org.ovirt.engine.sdk.exceptions.ServerException; import org.ovirt.engine.sdk.exceptions.UnsecuredConnectionAttemptError; public class rhevm { public static void main(String[] args) { Api api = null; try { ApiBuilder myBuilder = new ApiBuilder() .url("https://rhevm.example.com/api") .user("admin@internal") .password("p@ssw0rd") .keyStorePath("/home/username/server.truststore") .keyStorePassword("p@ssw0rd"); api = myBuilder.build(); api.shutdown(); } catch (ServerException | UnsecuredConnectionAttemptError | IOException ex) { Logger.getLogger(rhevm.class.getName()).log(Level.SEVERE, null, ex); } finally { if (api != null) { try { api.close(); } catch (Exception ex) { Logger.getLogger(rhevm.class.getName()).log(Level.SEVERE, null, ex); } } } } } This example creates a connection using basic authentication, but other methods are also available. For a list of the key arguments that can be passed to instances of the ApiBuilder class, see Appendix A, ApiBuilder Methods. Note Note that the Api class does not implement the AutoCloseable interface. As such, it is recommended that you shut down instances of the Api class in a finally block as per the above example to ensure the connection with the Red Hat Enterprise Virtualization Manager is closed gracefully. 2.2. Connecting to the Red Hat Virtualization Manager in Version 4 In V4 of the Java software development kit, the Connection class is the main class you use to connect to and manipulate objects in a Red Hat Virtualization environment. To declare an instance of this class, you must declare an instance of the ConnectionBuilder class, pass the required arguments to this instance using builder methods, then call the build method on the instance. The build method returns an instance of the Connection class that you can then assign to a variable and use to perform subsequent actions. The following is an example of a simple Java SE program that creates a connection with a Red Hat Virtualization environment using version 4 of the software development kit: Example 2.2.
Connecting to the Red Hat Virtualization Manager package rhevm; import org.ovirt.engine.sdk4.Connection; import org.ovirt.engine.sdk4.ConnectionBuilder; public class rhevm { public static void main(String[] args) { ConnectionBuilder myBuilder = ConnectionBuilder.connection() .url("https://rhevm.example.com/ovirt-engine/api") .user("admin@internal") .password("p@ssw0rd") .trustStoreFile("/home/username/server.truststore") .trustStorePassword("p@ssw0rd"); try (Connection conn = myBuilder.build()) { // Requests } catch (Exception e) { // Error handling } } } This example creates a connection using basic authentication, but other methods are also available. For a list of the key arguments that can be passed to instances of the ConnectionBuilder class, see Appendix B, ConnectionBuilder Methods. 2.3. Listing Entities The following example outlines how to list entities in the Red Hat Virtualization Manager. In this example, the entities to be listed are virtual machines, which are listed using the getVMs() method of the Api class. Listing Entities Declare a List of the type of entity to be listed and use the corresponding method to get the list of entities: List<VM> vms = api.getVMs().list(); 2.4. Modifying the Attributes of Resources The following example outlines how to modify the attributes of a resource. In this example, the attribute to be modified is the description of the virtual machine with the name 'test', which is changed to 'java_sdk'. Modifying the Attributes of a Resource Declare an instance of the resource whose attributes are to be modified: VM vm = api.getVMs().get("test"); Set the new value of the attribute: vm.setDescription("java_sdk"); Update the virtual machine to apply the change: VM newVM = vm.update(); 2.5. Getting a Resource In the Java Software Development Kit, resources can be referred to via two attributes: name and UUID. Both return an object with the specified attribute if that object exists. To get a resource using the value of the name attribute: VM vm = api.getVMs().get("test"); To get a resource using the value of the UUID attribute: VM vm = api.getVMs().get(UUID.fromString("5a89a1d2-32be-33f7-a0d1-f8b5bc974ff6")); 2.6. Adding Resources The following examples outline two ways to add resources to the Red Hat Virtualization Manager. In these examples, the resource to be added is a virtual machine. Example 1 In this example, an instance of the VM class is declared to represent the new virtual machine to be added. Next, the attributes of that virtual machine are set to the preferred values. Finally, the new virtual machine is added to the Manager. org.ovirt.engine.sdk.entities.VM vmParams = new org.ovirt.engine.sdk.entities.VM(); vmParams.setName("myVm"); vmParams.setCluster(api.getClusters().get("myCluster")); vmParams.setTemplate(api.getTemplates().get("myTemplate")); ... VM vm = api.getVMs().add(vmParams); Example 2 In this example, an instance of the VM class is declared in the same way as Example 1. However, rather than using the get method to reference existing objects in the Manager, each attribute is referenced by declaring an instance of that attribute. Finally, the new virtual machine is added to the Manager.
org.ovirt.engine.sdk.entities.VM vmParams = new org.ovirt.engine.sdk.entities.VM(); vmParams.setName("myVm"); org.ovirt.engine.sdk.entities.Cluster clusterParam = new Cluster(); clusterParam.setName("myCluster"); vmParams.setCluster(clusterParam); org.ovirt.engine.sdk.entities.Template templateParam = new Template(); templateParam.setName("myTemplate"); vmParams.setTemplate(templateParam); ... VM vm = api.getVMs().add(vmParams); 2.7. Performing Actions on Resources The following example outlines how to perform actions on a resource. In this example, a virtual machine with the name 'test' is started. Performing an Action on a Resource Declare an instance of the resource: VM vm = api.getVMs().get("test"); Declare action parameters to send to the resource: Action actionParam = new Action(); org.ovirt.engine.sdk.entities.VM vmParam = new org.ovirt.engine.sdk.entities.VM(); actionParam.setVm(vmParam); Perform the action: Action res = vm.start(actionParam); Alternatively, you can perform the action as an inner method: Action res = vm.start(new Action() { { setVm(new org.ovirt.engine.sdk.entities.VM()); } }); 2.8. Listing Sub-Resources The following example outlines how to list the sub-resources of a resource. In this example, the sub-resources of a virtual machine with the name 'test' are listed. Listing Sub-Resources Declare an instance of the resource whose sub-resources are to be listed: VM vm = api.getVMs().get("test"); List the sub-resources: List<VMDisk> disks = vm.getDisks().list(); Getting Sub-Resources The following example outlines how to reference the sub-resources of a resource. In this example, a disk with the name 'my disk' that belongs to a virtual machine with the name 'test' is referenced. Getting the Sub-Resources of a Resource Declare an instance of the resource whose sub-resources are to be referenced: VM vm = api.getVMs().get("test"); Declare an instance of the sub-resource to be referenced: VMDisk disk = vm.getDisks().get("my disk"); 2.9. Adding Sub-Resources to a Resource The following example outlines how to add sub-resources to a resource. In this example, a new disk with a size of '1073741824L', interface 'virtio', and format 'cow' is added to a virtual machine with the name 'test'. Adding a Sub-Resource to a Resource Declare an instance of the resource to which sub-resources are to be added: VM vm = api.getVMs().get("test"); Create parameters to define the attributes of the resource: Disk diskParam = new Disk(); diskParam.setProvisionedSize(1073741824L); diskParam.setInterface("virtio"); diskParam.setFormat("cow"); Add the sub-resource: Disk disk = vm.getDisks().add(diskParam); 2.10. Modifying Sub-Resources The following example outlines how to modify sub-resources. In this example, the name of a disk with the name 'test_Disk1' belonging to a virtual machine with the name 'test' is changed to 'test_Disk1_updated'. Updating a Sub-Resource Declare an instance of the resource whose sub-resource is to be modified: VM vm = api.getVMs().get("test"); Declare an instance of the sub-resource to be modified: VMDisk disk = vm.getDisks().get("test_Disk1"); Set the new value of the attribute: disk.setAlias("test_Disk1_updated"); Update the sub-resource: VMDisk updateDisk = disk.update(); 2.11. Performing Actions on Sub-Resources The following example outlines how to perform actions on sub-resources. In this example, a disk with the name 'test_Disk1' belonging to a virtual machine with the name 'test' is activated.
Performing an Action on a Sub-Resource Declare an instance of the resource containing the sub-resource on which the action is to be performed: VM vm = api.getVMs().get("test"); Declare an instance of the sub-resource: VMDisk disk = vm.getDisks().get("test_Disk1"); Declare action parameters to send to the sub-resource: Action actionParam = new Action(); Perform the action: Action result = disk.activate(actionParam);
[ "package rhevm; import org.ovirt.engine.sdk.Api; import java.io.IOException; import java.util.logging.Level; import java.util.logging.Logger; import org.ovirt.engine.sdk.ApiBuilder; import org.ovirt.engine.sdk.exceptions.ServerException; import org.ovirt.engine.sdk.exceptions.UnsecuredConnectionAttemptError; public class rhevm { public static void main(String[] args) { Api api = null; try { ApiBuilder myBuilder = new ApiBuilder() .url(\"https://rhevm.example.com/api\") .user(\"admin@internal\") .password(\"p@ssw0rd\") .keyStorePath(\"/home/username/server.truststore\") .keyStorePassword(\"p@ssw0rd\"); api = myBuilder.build(); api.shutdown(); } catch (ServerException | UnsecuredConnectionAttemptError | IOException ex) { Logger.getLogger(Ovirt.class.getName()).log(Level.SEVERE, null, ex); } finally { if (api != null) { try { api.close(); } catch (Exception ex) { Logger.getLogger(Ovirt.class.getName()).log(Level.SEVERE, null, ex); } } } } }", "package rhevm; import org.ovirt.engine.sdk4.Connection; import org.ovirt.engine.sdk4.ConnectionBuilder; public class rhevm { public static void main(String[] args) { ConnectionBuilder myBuilder = ConnectionBuilder.connection() .url(\"https://rhevm.example.com/ovirt-engine/api\") .user(\"admin@internal\") .password(\"p@ssw0rd\") .trustStoreFile(\"/home/username/server.truststore\") .trustStorePassword(\"p@ssw0rd\"); try (Connection conn = myBuilder.build()) { // Requests } catch (Exception e) { // Error handling } } }", "List<VM> vms = api.getVMs().list();", "VM vm = api.getVMs().get(\"test\");", "vm.setDescription(\"java_sdk\");", "VM newVM = vm.update();", "VM vm = api.getVMs().get(\"test\");", "VM vm = api.getVMs().get(UUID.fromString(\"5a89a1d2-32be-33f7-a0d1-f8b5bc974ff6\"));", "org.ovirt.engine.sdk.entities.VM vmParams = new org.ovirt.engine.sdk.entities.VM(); vmParams.setName(\"myVm\"); vmParams.setCluster(api.getClusters().get(\"myCluster\")); vmParams.setTemplate(api.getTemplates().get(\"myTemplate\"));", "VM vm = api.getVMs().add(vmParams);", "org.ovirt.engine.sdk.entities.VM vmParams = new org.ovirt.engine.sdk.entities.VM(); vmParams.setName(\"myVm\"); org.ovirt.engine.sdk.entities.Cluster clusterParam = new Cluster(); clusterParam.setName(\"myCluster\"); vmParams.setCluster(clusterParam); org.ovirt.engine.sdk.entities.Template templateParam = new Template(); templateParam.setName(\"myTemplate\"); vmParams.setTemplate(templateParam);", "VM vm = api.getVMs().add(vmParams);", "VM vm = api.getVMs().get(\"test\");", "Action actionParam = new Action(); org.ovirt.engine.sdk.entities.VM vmParam = new org.ovirt.engine.sdk.entities.VM(); actionParam.setVm(vmParam);", "Action res = vm.start(actionParam);", "Action res = vm.start(new Action() { { setVm(new org.ovirt.engine.sdk.entities.VM()); } });", "VM vm = api.getVMs().get(\"test\");", "List<VMDisk>; disks = vm.getDisks().list();", "VM vm = api.getVMs().get(\"test\");", "VMDisk disk = vm.getDisks().get(\"my disk\");", "VM vm = api.getVMs().get(\"test\");", "Disk diskParam = new Disk(); diskParam.setProvisionedSize(1073741824L); diskParam.setInterface(\"virtio\"); diskParam.setFormat(\"cow\");", "Disk disk = vm.getDisks().add(diskParam);", "VM vm = api.getVMs().get(\"test\");", "VMDisk disk = vm.getDisks().get(\"test_Disk1\");", "disk.setAlias(\"test_Disk1_updated\");", "VMDisk updateDisk = disk.update();", "VM vm = api.getVMs().get(\"test\");", "VMDisk disk = vm.getDisks().get(\"test_Disk1\");", "Action actionParam = new Action();", "Action result = disk.activate(actionParam);" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/java_sdk_guide/chap-using_the_software_development_kit
Preface
Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7.6 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. Capabilities and limits of Red Hat Enterprise Linux 7 as compared to other versions of the system are available in the Red Hat Knowledgebase article available at https://access.redhat.com/articles/rhel-limits . Packages distributed with this release are listed in Red Hat Enterprise Linux 7 Package Manifest . Migration from Red Hat Enterprise Linux 6 is documented in the Migration Planning Guide. For information regarding the Red Hat Enterprise Linux life cycle, refer to https://access.redhat.com/support/policy/updates/errata/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/pref-release_notes-preface
Chapter 155. IEC 60870 Server Component
Chapter 155. IEC 60870 Server Component Available as of Camel version 2.20 The IEC 60870-5-104 Server component provides access to IEC 60870 servers using the Eclipse NeoSCADA (TM) implementation. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-iec60870</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> The IEC 60870 Server component supports 2 options, which are listed below. Name Description Default Type defaultConnectionOptions (common) Default connection options ServerOptions resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean 155.1. URI format The URI syntax of the endpoint is: The information object address is encoded in the path in the syntax shown above. Please note that the full, 5 octet address format is always used. Unused octets have to be filled with zero. 155.2. URI options The IEC 60870 Server endpoint is configured using URI syntax: with the following path and query parameters: 155.2.1. Path Parameters (1 parameters): Name Description Default Type uriPath Required The object information address ObjectAddress 155.2.2. Query Parameters (20 parameters): Name Description Default Type dataModuleOptions (common) Data module options DataModuleOptions filterNonExecute (common) Filter out all requests which don't have the execute bit set true boolean protocolOptions (common) Protocol options ProtocolOptions bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions occurred while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean acknowledgeWindow (connection) Parameter W - Acknowledgment window. 10 short adsuAddressType (connection) The common ASDU address size. May be either SIZE_1 or SIZE_2. ASDUAddressType causeOfTransmissionType (connection) The cause of transmission type. May be either SIZE_1 or SIZE_2. CauseOfTransmissionType informationObjectAddressType (connection) The information address size. May be either SIZE_1, SIZE_2 or SIZE_3. InformationObjectAddressType maxUnacknowledged (connection) Parameter K - Maximum number of un-acknowledged messages. 15 short timeout1 (connection) Timeout T1 in milliseconds. 15000 int timeout2 (connection) Timeout T2 in milliseconds. 10000 int timeout3 (connection) Timeout T3 in milliseconds.
20000 int causeSourceAddress (data) Whether to include the source address true boolean ignoreBackgroundScan (data) Whether background scan transmissions should be ignored. true boolean ignoreDaylightSavingTime (data) Whether to ignore or respect DST false boolean timeZone (data) The timezone to use. May be any Java time zone string UTC TimeZone connectionId (id) An identifier grouping connection instances String 155.3. Spring Boot Auto-Configuration The component supports 7 options, which are listed below. Name Description Default Type camel.component.iec60870-server.default-connection-options.background-scan-period The period in "ms" between background transmission cycles. If this is set to zero or less, background transmissions will be disabled. Integer camel.component.iec60870-server.default-connection-options.booleans-with-timestamp Send booleans with timestamps Boolean camel.component.iec60870-server.default-connection-options.buffering-period A time period in "ms" for which the protocol layer will buffer change events in order to send out aggregated change messages Integer camel.component.iec60870-server.default-connection-options.floats-with-timestamp Send floats with timestamps Boolean camel.component.iec60870-server.default-connection-options.spontaneous-duplicates Number of spontaneous events to keep in the buffer. When there are more than this number of spontaneous events in the buffer, then events will be dropped in order to maintain the buffer size. Integer camel.component.iec60870-server.enabled Whether to enable auto configuration of the iec60870-server component. This is enabled by default. Boolean camel.component.iec60870-server.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-iec60870</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "iec60870-server:host:port/00-01-02-03-04", "iec60870-server:uriPath" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/iec60870-server-component
Chapter 4. API index
Chapter 4. API index API API group Alertmanager monitoring.coreos.com/v1 AlertmanagerConfig monitoring.coreos.com/v1beta1 APIRequestCount apiserver.openshift.io/v1 APIServer config.openshift.io/v1 APIService apiregistration.k8s.io/v1 AppliedClusterResourceQuota quota.openshift.io/v1 Authentication config.openshift.io/v1 Authentication operator.openshift.io/v1 BareMetalHost metal3.io/v1alpha1 Binding v1 BMCEventSubscription metal3.io/v1alpha1 BrokerTemplateInstance template.openshift.io/v1 Build build.openshift.io/v1 Build config.openshift.io/v1 BuildConfig build.openshift.io/v1 BuildLog build.openshift.io/v1 BuildRequest build.openshift.io/v1 CatalogSource operators.coreos.com/v1alpha1 CertificateSigningRequest certificates.k8s.io/v1 CloudCredential operator.openshift.io/v1 CloudPrivateIPConfig cloud.network.openshift.io/v1 ClusterAutoscaler autoscaling.openshift.io/v1 ClusterCSIDriver operator.openshift.io/v1 ClusterOperator config.openshift.io/v1 ClusterResourceQuota quota.openshift.io/v1 ClusterRole authorization.openshift.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding authorization.openshift.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ClusterServiceVersion operators.coreos.com/v1alpha1 ClusterVersion config.openshift.io/v1 ComponentStatus v1 Config imageregistry.operator.openshift.io/v1 Config operator.openshift.io/v1 Config samples.operator.openshift.io/v1 ConfigMap v1 Console config.openshift.io/v1 Console operator.openshift.io/v1 ConsoleCLIDownload console.openshift.io/v1 ConsoleExternalLogLink console.openshift.io/v1 ConsoleLink console.openshift.io/v1 ConsoleNotification console.openshift.io/v1 ConsolePlugin console.openshift.io/v1 ConsoleQuickStart console.openshift.io/v1 ConsoleYAMLSample console.openshift.io/v1 ContainerRuntimeConfig machineconfiguration.openshift.io/v1 ControllerConfig machineconfiguration.openshift.io/v1 ControllerRevision apps/v1 ControlPlaneMachineSet machine.openshift.io/v1 CredentialsRequest cloudcredential.openshift.io/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSISnapshotController operator.openshift.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 Deployment apps/v1 DeploymentConfig apps.openshift.io/v1 DeploymentConfigRollback apps.openshift.io/v1 DeploymentLog apps.openshift.io/v1 DeploymentRequest apps.openshift.io/v1 DNS config.openshift.io/v1 DNS operator.openshift.io/v1 DNSRecord ingress.operator.openshift.io/v1 EgressFirewall k8s.ovn.org/v1 EgressIP k8s.ovn.org/v1 EgressQoS k8s.ovn.org/v1 EgressRouter network.operator.openshift.io/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Etcd operator.openshift.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FeatureGate config.openshift.io/v1 FirmwareSchema metal3.io/v1alpha1 Group user.openshift.io/v1 HardwareData metal3.io/v1alpha1 HelmChartRepository helm.openshift.io/v1beta1 HorizontalPodAutoscaler autoscaling/v2 HostFirmwareSettings metal3.io/v1alpha1 Identity user.openshift.io/v1 Image config.openshift.io/v1 Image image.openshift.io/v1 ImageContentPolicy config.openshift.io/v1 ImageContentSourcePolicy operator.openshift.io/v1alpha1 ImageDigestMirrorSet config.openshift.io/v1 ImagePruner imageregistry.operator.openshift.io/v1 ImageSignature image.openshift.io/v1 ImageStream image.openshift.io/v1 ImageStreamImage image.openshift.io/v1 ImageStreamImport image.openshift.io/v1 ImageStreamLayers image.openshift.io/v1 ImageStreamMapping image.openshift.io/v1 ImageStreamTag 
image.openshift.io/v1 ImageTag image.openshift.io/v1 ImageTagMirrorSet config.openshift.io/v1 Infrastructure config.openshift.io/v1 Ingress config.openshift.io/v1 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 IngressController operator.openshift.io/v1 InsightsOperator operator.openshift.io/v1 InstallPlan operators.coreos.com/v1alpha1 IPPool whereabouts.cni.cncf.io/v1alpha1 Job batch/v1 KubeAPIServer operator.openshift.io/v1 KubeControllerManager operator.openshift.io/v1 KubeletConfig machineconfiguration.openshift.io/v1 KubeScheduler operator.openshift.io/v1 KubeStorageVersionMigrator operator.openshift.io/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalResourceAccessReview authorization.openshift.io/v1 LocalSubjectAccessReview authorization.k8s.io/v1 LocalSubjectAccessReview authorization.openshift.io/v1 Machine machine.openshift.io/v1beta1 MachineAutoscaler autoscaling.openshift.io/v1beta1 MachineConfig machineconfiguration.openshift.io/v1 MachineConfigPool machineconfiguration.openshift.io/v1 MachineHealthCheck machine.openshift.io/v1beta1 MachineSet machine.openshift.io/v1beta1 Metal3Remediation infrastructure.cluster.x-k8s.io/v1beta1 Metal3RemediationTemplate infrastructure.cluster.x-k8s.io/v1beta1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 Network config.openshift.io/v1 Network operator.openshift.io/v1 NetworkAttachmentDefinition k8s.cni.cncf.io/v1 NetworkPolicy networking.k8s.io/v1 Node v1 Node config.openshift.io/v1 OAuth config.openshift.io/v1 OAuthAccessToken oauth.openshift.io/v1 OAuthAuthorizeToken oauth.openshift.io/v1 OAuthClient oauth.openshift.io/v1 OAuthClientAuthorization oauth.openshift.io/v1 OLMConfig operators.coreos.com/v1 OpenShiftAPIServer operator.openshift.io/v1 OpenShiftControllerManager operator.openshift.io/v1 Operator operators.coreos.com/v1 OperatorCondition operators.coreos.com/v2 OperatorGroup operators.coreos.com/v1 OperatorHub config.openshift.io/v1 OperatorPKI network.operator.openshift.io/v1 OverlappingRangeIPReservation whereabouts.cni.cncf.io/v1alpha1 PackageManifest packages.operators.coreos.com/v1 PerformanceProfile performance.openshift.io/v2 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodMonitor monitoring.coreos.com/v1 PodNetworkConnectivityCheck controlplane.operator.openshift.io/v1alpha1 PodSecurityPolicyReview security.openshift.io/v1 PodSecurityPolicySelfSubjectReview security.openshift.io/v1 PodSecurityPolicySubjectReview security.openshift.io/v1 PodTemplate v1 PreprovisioningImage metal3.io/v1alpha1 PriorityClass scheduling.k8s.io/v1 Probe monitoring.coreos.com/v1 Profile tuned.openshift.io/v1 Project config.openshift.io/v1 Project project.openshift.io/v1 ProjectHelmChartRepository helm.openshift.io/v1beta1 ProjectRequest project.openshift.io/v1 Prometheus monitoring.coreos.com/v1 PrometheusRule monitoring.coreos.com/v1 Provisioning metal3.io/v1alpha1 Proxy config.openshift.io/v1 RangeAllocation security.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceAccessReview authorization.openshift.io/v1 ResourceQuota v1 Role authorization.openshift.io/v1 Role rbac.authorization.k8s.io/v1 RoleBinding authorization.openshift.io/v1 RoleBinding rbac.authorization.k8s.io/v1 RoleBindingRestriction authorization.openshift.io/v1 Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Scheduler config.openshift.io/v1 Secret v1 SecretList image.openshift.io/v1 SecurityContextConstraints security.openshift.io/v1 
SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 SelfSubjectRulesReview authorization.openshift.io/v1 Service v1 ServiceAccount v1 ServiceCA operator.openshift.io/v1 ServiceMonitor monitoring.coreos.com/v1 StatefulSet apps/v1 Storage operator.openshift.io/v1 StorageClass storage.k8s.io/v1 StorageState migration.k8s.io/v1alpha1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 SubjectAccessReview authorization.openshift.io/v1 SubjectRulesReview authorization.openshift.io/v1 Subscription operators.coreos.com/v1alpha1 Template template.openshift.io/v1 TemplateInstance template.openshift.io/v1 ThanosRuler monitoring.coreos.com/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 Tuned tuned.openshift.io/v1 User user.openshift.io/v1 UserIdentityMapping user.openshift.io/v1 UserOAuthAccessToken oauth.openshift.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1
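If you want to produce a similar index directly from a running cluster, the oc client can enumerate the available APIs. The commands below are a general sketch rather than part of this reference; the exact output depends on the cluster version and the Operators that are installed.

# List every resource kind and the API group that serves it
oc api-resources

# Restrict the listing to a single API group, for example config.openshift.io
oc api-resources --api-group=config.openshift.io

# List the available API group/version pairs themselves
oc api-versions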
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/api_overview/api-index
Chapter 2. Configuring proxy support for Red Hat Ansible Automation Platform
Chapter 2. Configuring proxy support for Red Hat Ansible Automation Platform You can configure Red Hat Ansible Automation Platform to communicate with traffic using a proxy. Proxy servers act as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service or available resource from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. The following sections describe the supported proxy configurations and how to set them up. 2.1. Enabling proxy support through a load balancer A forward proxy deals with client traffic, regulating and securing it. To provide proxy server support, automation controller handles proxied requests (such as ALB, NLB , HAProxy, Squid, Nginx and tinyproxy in front of automation controller) using the REMOTE_HOST_HEADERS list variable in the automation controller settings. By default, REMOTE_HOST_HEADERS is set to ["REMOTE_ADDR", "REMOTE_HOST"] . To enable proxy server support, edit the REMOTE_HOST_HEADERS field in the settings page for your automation controller: Procedure From the navigation panel, select Settings System . Click Edit . In the Remote Host Headers field, enter the following values: [ "HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST" ] Click Save to save your settings. Automation controller determines the remote host's IP address by searching through the list of headers in Remote Host Headers until the first IP address is located. 2.2. Known proxies When automation controller is configured with REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST'] , it assumes that the value of X-Forwarded-For has originated from the proxy/load balancer sitting in front of automation controller. If automation controller is reachable without use of the proxy/load balancer, or if the proxy does not validate the header, the value of X-Forwarded-For can be falsified to fake the originating IP addresses. Using HTTP_X_FORWARDED_FOR in the REMOTE_HOST_HEADERS setting poses a vulnerability if the header can be spoofed. To avoid this, you can configure a list of known proxies that are allowed. Procedure From the navigation panel, select Settings System . Enter a list of proxy IP addresses from which the service should trust custom remote header values in the Proxy IP Allowed List field. Note Requests from load balancers and hosts that are not on the known proxies list are rejected. 2.2.1. Configuring known proxies To configure a list of known proxies for your automation controller, add the proxy IP addresses to the Proxy IP Allowed List field in the System Settings page. Procedure From the navigation panel, select Settings System . In the Proxy IP Allowed List field, enter IP addresses that are permitted to connect to your automation controller, using the syntax in the following example: Example Proxy IP Allowed List entry Important Proxy IP Allowed List requires that the proxies in the list properly sanitize header input and correctly set an X-Forwarded-For value equal to the real source IP of the client. Automation controller can rely on the IP addresses and hostnames in Proxy IP Allowed List to provide non-spoofed values for X-Forwarded-For .
Do not configure HTTP_X_FORWARDED_FOR as an item in Remote Host Headers unless all of the following conditions are satisfied: You are using a proxied environment with ssl termination; The proxy provides sanitization or validation of the X-Forwarded-For header to prevent client spoofing; /etc/tower/conf.d/remote_host_headers.py defines PROXY_IP_ALLOWED_LIST that contains only the originating IP addresses of trusted proxies or load balancers. Click Save to save the settings. 2.3. Configuring a reverse proxy through a load balancer A reverse proxy manages external requests to servers, offering load balancing and concealing server identities for added security. You can support a reverse proxy server configuration by adding HTTP_X_FORWARDED_FOR to the Remote Host Headers field in the Systems Settings. The X-Forwarded-For (XFF) HTTP header field identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. Procedure From the navigation panel, select Settings System . In the Remote Host Headers field, enter the following values: [ "HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST" ] Add the lines below to /etc/tower/conf.d/custom.py to ensure the application uses the correct headers: USE_X_FORWARDED_PORT = True USE_X_FORWARDED_HOST = True Click Save to save the settings. 2.4. Enable sticky sessions By default, an application load balancer routes each request independently to a registered target based on the chosen load-balancing algorithm. To avoid authentication errors when running multiple instances of automation hub behind a load balancer, you must enable sticky sessions. Enabling sticky sessions sets a custom application cookie that matches the cookie configured on the load balancer to enable stickiness. This custom cookie can include any of the cookie attributes required by the application. Additional resources Refer to Sticky sessions for your Application Load Balancer for more information about enabling sticky sessions.
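After you save the settings in the web console, you can also inspect them over the automation controller API. The following sketch assumes a controller reachable at controller.example.com and an admin credential, both of which are placeholders, and uses jq only for readability; verify the exact setting names against your controller's /api/v2/settings/ listing.

# Read the current remote host header and known proxy settings (illustrative host and credentials)
curl -s -u admin:password https://controller.example.com/api/v2/settings/system/ \
  | jq '{REMOTE_HOST_HEADERS, PROXY_IP_ALLOWED_LIST}'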
[ "[ \"HTTP_X_FORWARDED_FOR\", \"REMOTE_ADDR\", \"REMOTE_HOST\" ]", "[ \"example1.proxy.com:8080\", \"example2.proxy.com:8080\" ]", "[ \"HTTP_X_FORWARDED_FOR\", \"REMOTE_ADDR\", \"REMOTE_HOST\" ]", "USE_X_FORWARDED_PORT = True USE_X_FORWARDED_HOST = True" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/operating_ansible_automation_platform/assembly-configuring-proxy-support
3.8. Changing Default User Configuration
3.8. Changing Default User Configuration The realmd system supports modifying the default user home directory and shell POSIX attributes. For example, this might be required when some POSIX attributes are not set in the Windows user accounts or when these attributes are different from POSIX attributes of other users on the local system. Important Changing the configuration as described in this section only works if the realm join command has not been run yet. If a system is already joined, change the default home directory and shell in the /etc/sssd/sssd.conf file, as described in the section called "Optional: Configure User Home Directories and Shells" . To override the default home directory and shell POSIX attributes, specify the following options in the [users] section in the /etc/realmd.conf file: default-home The default-home option sets a template for creating a home directory for accounts that have no home directory explicitly set. A common format is /home/%d/%u , where %d is the domain name and %u is the user name. default-shell The default-shell option defines the default user shell. It accepts any supported system shell. For example: For more information about the options, see the realmd.conf (5) man page.
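A minimal sketch of the workflow follows, assuming a hypothetical domain ad.example.com and a system that has not yet been joined; the [users] values repeat the example above, and the user name format returned by getent depends on your SSSD configuration.

# Set the defaults before joining the domain
cat >> /etc/realmd.conf <<'EOF'
[users]
default-home = /home/%u
default-shell = /bin/bash
EOF

# Join the domain (hypothetical domain name), then confirm the resulting POSIX attributes
realm join ad.example.com
getent passwd 'AD.EXAMPLE.COM\administrator'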
[ "[users] default-home = /home/%u default-shell = /bin/bash" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/config-realmd-users
Chapter 2. Selecting a cluster installation method and preparing it for users
Chapter 2. Selecting a cluster installation method and preparing it for users Before you install OpenShift Container Platform, decide what kind of installation process to follow and verify that you have all of the required resources to prepare the cluster for users. 2.1. Selecting a cluster installation type Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. 2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself? If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms: Alibaba Cloud Amazon Web Services (AWS) on 64-bit x86 instances Amazon Web Services (AWS) on 64-bit ARM instances Microsoft Azure on 64-bit x86 instances Microsoft Azure on 64-bit ARM instances Microsoft Azure Stack Hub Google Cloud Platform (GCP) Red Hat OpenStack Platform (RHOSP) Red Hat Virtualization (RHV) IBM Cloud VPC IBM Z or IBM(R) LinuxONE IBM Z or IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM IBM Power IBM Power Virtual Server Nutanix VMware vSphere VMware Cloud (VMC) on AWS Bare metal or other platform agnostic infrastructure You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same data center or cloud hosting service. If you want to use OpenShift Container Platform but do not want to manage the cluster yourself, you have several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated or OpenShift Online . You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud VPC, or Google Cloud. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster with a cloud virtual machine as a virtual bare metal, the corresponding cloud-based storage is not supported. 2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4? If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Differences between OpenShift Container Platform 3 and 4 . Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to them. For more information about migrating, see Migrating from OpenShift Container Platform 3 to 4 overview . 
Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster. 2.1.3. Do you want to use existing components in your cluster? Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs. You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to Alibaba Cloud , AWS , Azure , Azure Stack Hub , GCP , Nutanix , or VMC on AWS . These installation methods are the fastest way to deploy a production-capable OpenShift Container Platform cluster. If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for Alibaba Cloud , AWS , Azure , GCP , Nutanix , or VMC on AWS . For installer-provisioned infrastructure installations, you can use an existing VPC in AWS , vNet in Azure , or VPC in GCP . You can also reuse part of your networking infrastructure so that your cluster in AWS , Azure , GCP , or VMC on AWS can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for RHOSP , RHOSP with Kuryr , RHV , vSphere , and bare metal . Additionally, for vSphere , VMC on AWS , you can also customize additional network parameters during installation. For some installer-provisioned infrastructure installations, for example on the VMware vSphere and bare metal platforms, the external traffic that reaches the ingress virtual IP (VIP) is not balanced between the default IngressController replicas. For vSphere and bare metal installer-provisioned infrastructure installations where exceeding the baseline IngressController router performance is expected, you must configure an external load balancer. Configuring an external load balancer achieves the performance of multiple IngressController replicas. For more information about the baseline IngressController performance, see Baseline Ingress Controller (router) performance . For more information about configuring an external load balancer, see Configuring an external load balancer . If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS , Azure , Azure Stack Hub , GCP , or VMC on AWS , you can use the provided templates to help you stand up all of the required components. You can also reuse a shared VPC on GCP . Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds. 
You can also complete a user-provisioned infrastructure installation on your existing hardware. If you use RHOSP , RHV , IBM Z or IBM(R) LinuxONE , IBM Z and IBM(R) LinuxONE with RHEL KVM , IBM Power , or vSphere , use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. For some of these platforms, such as RHOSP , vSphere , VMC on AWS , and bare metal , you can also customize additional network parameters during installation. 2.1.4. Do you need extra security for your cluster? If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure. If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS , Azure , or GCP . If you need to install a cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user provisioned infrastructure installations into restricted networks for AWS , GCP , IBM Z or IBM(R) LinuxONE , IBM Z or IBM(R) LinuxONE with RHEL KVM , IBM Power , vSphere , VMC on AWS , or bare metal . You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS , GCP , Nutanix , VMC on AWS , RHOSP , RHV , and vSphere . If you need to deploy your cluster to an AWS GovCloud region , AWS China region , or Azure government region , you can configure those custom regions during an installer-provisioned infrastructure installation. 2.2. Preparing your cluster for users after installation Some configuration is not required to install the cluster but is recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster and integrate your cluster with other required systems, such as an identity provider. For a production cluster, you must configure the following integrations: Persistent storage An identity provider Monitoring core OpenShift Container Platform components 2.3. Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy , you might need to make provisions for low-latency workloads or to protect sensitive workloads . You can also configure monitoring for application workloads. If you plan to run Windows workloads , you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed. 2.4. Supported installation methods for different platforms You can perform different types of installations on different platforms. Note Not all installation options are supported for all platforms, as shown in the following tables. A checkmark indicates that the option is supported and links to the relevant section. Table 2.1.
Installer-provisioned infrastructure options Alibaba AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP Nutanix RHOSP RHV Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere VMC IBM Cloud VPC IBM Z IBM Power IBM Power Virtual Server Default [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Private clusters [✓] [✓] [✓] [✓] [✓] [✓] [✓] Existing virtual private networks [✓] [✓] [✓] [✓] [✓] [✓] [✓] Government regions [✓] [✓] Secret regions [✓] China regions [✓] Table 2.2. User-provisioned infrastructure options Alibaba AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP Nutanix RHOSP RHV Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere VMC IBM Cloud VPC IBM Z IBM Z with RHEL KVM IBM Power Platform agnostic Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Shared VPC hosted outside of cluster project [✓]
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installation_overview/installing-preparing
function::user_int16
function::user_int16 Name function::user_int16 - Retrieves a 16-bit integer value stored in user space Synopsis Arguments addr the user space address to retrieve the 16-bit integer from Description Returns the 16-bit integer value from a given user space address. Returns zero when user space data is not accessible.
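As an illustration, user_int16 can be called from any probe that has a user-space address available. The one-liner below is a sketch that assumes the syscall tapset exposes the write buffer address as buf_uaddr, which is the case in common SystemTap releases; adjust the probe point for your target.

# Print the first 16 bits of each buffer written by the target command
stap -e 'probe syscall.write {
  if (pid() == target())
    printf("write fd=%d first16=%d\n", fd, user_int16(buf_uaddr))
}' -c 'echo hello'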
[ "user_int16:long(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-int16
1.2.7. Updating a Working Copy
1.2.7. Updating a Working Copy To update a working copy and get the latest changes from a Subversion repository, change to the directory with the working copy and run the following command: svn update Example 1.13. Updating a working copy Imagine that the directory with your working copy of a Subversion repository has the following contents: Also imagine that somebody recently added ChangeLog to the repository, removed the TODO file from it, changed the name of LICENSE to COPYING , and made some changes to Makefile . To update this working copy, type:
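Before updating, you can preview what an update would change. The following commands are standard Subversion usage rather than part of the example above.

# Show which items an update would change, without modifying the working copy
svn status --show-updates

# Update the working copy to a specific revision instead of the latest one
svn update -r 1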
[ "project]USD ls AUTHORS doc INSTALL LICENSE Makefile README src TODO", "myproject]USD svn update D LICENSE D TODO A COPYING A Changelog M Makefile Updated to revision 2." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-revision_control_systems-svn-update
Chapter 25. Apache HTTP Secure Server Configuration
Chapter 25. Apache HTTP Secure Server Configuration 25.1. Introduction This chapter provides basic information on the Apache HTTP Server with the mod_ssl security module enabled to use the OpenSSL library and toolkit. The combination of these three components is referred to in this chapter as the secure Web server or just as the secure server. The mod_ssl module is a security module for the Apache HTTP Server. The mod_ssl module uses the tools provided by the OpenSSL Project to add a very important feature to the Apache HTTP Server - the ability to encrypt communications. In contrast, regular HTTP communications between a browser and a Web server are sent in plain text, which could be intercepted and read by someone along the route between the browser and the server. This chapter is not meant to be complete and exclusive documentation for any of these programs. When possible, this guide points to appropriate places where you can find more in-depth documentation on particular subjects. This chapter shows you how to install these programs. You can also learn the steps necessary to generate a private key and a certificate request, how to generate your own self-signed certificate, and how to install a certificate to use with your secure server. The mod_ssl configuration file is located at /etc/httpd/conf.d/ssl.conf . For this file to be loaded, and hence for mod_ssl to work, you must have the statement Include conf.d/*.conf in the /etc/httpd/conf/httpd.conf file. This statement is included in the default Apache HTTP Server configuration file.
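As a quick sketch of the steps described above, the following commands install the packages and confirm that the Include statement is present. The package and service commands vary between Red Hat Enterprise Linux releases, so treat them as illustrative rather than release-specific instructions.

# Install the secure server components (yum shown here; older releases use up2date)
yum install mod_ssl openssl

# Confirm that httpd.conf loads the files in conf.d, including ssl.conf
grep "Include conf.d" /etc/httpd/conf/httpd.conf

# Check the configuration syntax and restart the web server
httpd -t
service httpd restart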
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Apache_HTTP_Secure_Server_Configuration
Chapter 70. JmxTransTemplate schema reference
Chapter 70. JmxTransTemplate schema reference Used in: JmxTransSpec Property Property type Description deployment DeploymentTemplate Template for JmxTrans Deployment . pod PodTemplate Template for JmxTrans Pods . container ContainerTemplate Template for JmxTrans container. serviceAccount ResourceTemplate Template for the JmxTrans service account.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-JmxTransTemplate-reference
Chapter 11. Configuring IP failover
Chapter 11. Configuring IP failover This topic describes configuring IP failover for pods and services on your OpenShift Container Platform cluster. IP failover manages a pool of Virtual IP (VIP) addresses on a set of nodes. Every VIP in the set is serviced by a node selected from the set. As long as a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it. Note The VIPs must be routable from outside the cluster. IP failover monitors a port on each VIP to determine whether the port is reachable on the node. If the port is not reachable, the VIP is not assigned to the node. If the port is set to 0 , this check is suppressed. The check script does the needed testing. IP failover uses Keepalived to host a set of externally accessible VIP addresses on a set of hosts. Each VIP is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available. When a node running Keepalived passes the check script, the VIP on that node can enter the master state based on its priority, the priority of the current master, and the preemption strategy. A cluster administrator can provide a script through the OPENSHIFT_HA_NOTIFY_SCRIPT variable, and this script is called whenever the state of the VIP on the node changes. Keepalived uses the master state when it is servicing the VIP, the backup state when another node is servicing the VIP, or the fault state when the check script fails. The notify script is called with the new state whenever the state changes. You can create an IP failover deployment configuration on OpenShift Container Platform. The IP failover deployment configuration specifies the set of VIP addresses, and the set of nodes on which to service them. A cluster can have multiple IP failover deployment configurations, with each managing its own set of unique VIP addresses. Each node in the IP failover configuration runs an IP failover pod, and this pod runs Keepalived. When using VIPs to access a pod with host networking, the application pod runs on all nodes that are running the IP failover pods. This enables any of the IP failover nodes to become the master and service the VIPs when needed. If application pods are not running on all nodes with IP failover, either some IP failover nodes never service the VIPs or some application pods never receive any traffic. Use the same selector and replication count for both IP failover and the application pods to avoid this mismatch. When using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a NodePort . When using external IPs in the service definition, the VIPs are set to the external IPs, and the IP failover monitoring port is set to the service port.
When using a node port, the port is open on every node in the cluster, and the service load-balances traffic from whatever node currently services the VIP. In this case, the IP failover monitoring port is set to the NodePort in the service definition. Important Setting up a NodePort is a privileged operation. Important Even though a service VIP is highly available, performance can still be affected. Keepalived makes sure that each of the VIPs is serviced by some node in the configuration, and several VIPs can end up on the same node even when other nodes have none. Strategies that externally load-balance across a set of VIPs can be thwarted when IP failover puts multiple VIPs on the same node. When you use ingressIP , you can set up IP failover to have the same VIP range as the ingressIP range. You can also disable the monitoring port. In this case, all the VIPs appear on same node in the cluster. Any user can set up a service with an ingressIP and have it highly available. Important There are a maximum of 254 VIPs in the cluster. 11.1. IP failover environment variables The following table contains the variables used to configure IP failover. Table 11.1. IP failover environment variables Variable Name Default Description OPENSHIFT_HA_MONITOR_PORT 80 The IP failover pod tries to open a TCP connection to this port on each Virtual IP (VIP). If connection is established, the service is considered to be running. If this port is set to 0 , the test always passes. OPENSHIFT_HA_NETWORK_INTERFACE The interface name that IP failover uses to send Virtual Router Redundancy Protocol (VRRP) traffic. The default value is eth0 . OPENSHIFT_HA_REPLICA_COUNT 2 The number of replicas to create. This must match spec.replicas value in IP failover deployment configuration. OPENSHIFT_HA_VIRTUAL_IPS The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9 . OPENSHIFT_HA_VRRP_ID_OFFSET 0 The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The default offset is 0 , and the allowed range is 0 through 255 . OPENSHIFT_HA_VIP_GROUPS The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIP_GROUPS variable. OPENSHIFT_HA_IPTABLES_CHAIN INPUT The name of the iptables chain, to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created. OPENSHIFT_HA_CHECK_SCRIPT The full path name in the pod file system of a script that is periodically run to verify the application is operating. OPENSHIFT_HA_CHECK_INTERVAL 2 The period, in seconds, that the check script is run. OPENSHIFT_HA_NOTIFY_SCRIPT The full path name in the pod file system of a script that is run whenever the state changes. OPENSHIFT_HA_PREEMPTION preempt_nodelay 300 The strategy for handling a new higher priority host. The nopreempt strategy does not move master from the lower priority host to the higher priority host. 11.2. Configuring IP failover As a cluster administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the label selector. You can also configure multiple IP failover deployment configurations in your cluster, where each one is independent of the others. 
The IP failover deployment configuration ensures that a failover pod runs on each of the nodes matching the constraints or the label used. This pod runs Keepalived, which can monitor an endpoint and use Virtual Router Redundancy Protocol (VRRP) to fail over the virtual IP (VIP) from one node to another if the first node cannot reach the service or endpoint. For production use, set a selector that selects at least two nodes, and set replicas equal to the number of selected nodes. Prerequisites You are logged in to the cluster with a user with cluster-admin privileges. You created a pull secret. Procedure Create an IP failover service account: USD oc create sa ipfailover Update security context constraints (SCC) for hostNetwork : USD oc adm policy add-scc-to-user privileged -z ipfailover USD oc adm policy add-scc-to-user hostnetwork -z ipfailover Create a deployment YAML file to configure IP failover: Example deployment YAML for IP failover configuration apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: "" containers: - name: openshift-ipfailover image: quay.io/openshift/origin-keepalived-ipfailover ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: "ipfailover" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: "1.1.1.1-2" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: "10" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: "ens3" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: "30060" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: "0" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: "2" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: "false" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: "10.0.148.40,10.0.160.234,10.0.199.110" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: "INPUT" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: "/etc/keepalive/mycheckscript.sh" - name: OPENSHIFT_HA_PREEMPTION 11 value: "preempt_delay 300" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: "2" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13 1 The name of the IP failover deployment. 2 The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9 . 3 The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIP_GROUPS variable. 
4 The interface name that IP failover uses to send VRRP traffic. By default, eth0 is used. 5 The IP failover pod tries to open a TCP connection to this port on each VIP. If connection is established, the service is considered to be running. If this port is set to 0 , the test always passes. The default value is 80 . 6 The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The default offset is 0 , and the allowed range is 0 through 255 . 7 The number of replicas to create. This must match spec.replicas value in IP failover deployment configuration. The default value is 2 . 8 The name of the iptables chain to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created, and Keepalived operates in unicast mode. The default is INPUT . 9 The full path name in the pod file system of a script that is run whenever the state changes. 10 The full path name in the pod file system of a script that is periodically run to verify the application is operating. 11 The strategy for handling a new higher priority host. The default value is preempt_delay 300 , which causes a Keepalived instance to take over a VIP after 5 minutes if a lower-priority master is holding the VIP. 12 The period, in seconds, that the check script is run. The default value is 2 . 13 Create the pull secret before creating the deployment, otherwise you will get an error when creating the deployment. 11.3. About virtual IP addresses Keepalived manages a set of virtual IP addresses (VIP). The administrator must make sure that all of these addresses: Are accessible on the configured hosts from outside the cluster. Are not used for any other purpose within the cluster. Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node serves the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled. Note Each VIP in the set may end up being served by a different node. 11.4. Configuring check and notify scripts Keepalived monitors the health of the application by periodically running an optional user supplied check script. For example, the script can test a web server by issuing a request and verifying the response. When a check script is not provided, a simple default script is run that tests the TCP connection. This default test is suppressed when the monitor port is 0 . Each IP failover pod manages a Keepalived daemon that manages one or more virtual IPs (VIP) on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node may be in master , backup , or fault state. When the check script for that VIP on the node that is in master state fails, the VIP on that node enters the fault state, which triggers a renegotiation. During renegotiation, all VIPs on a node that are not in the fault state participate in deciding which node takes over the VIP. Ultimately, the VIP enters the master state on some node, and the VIP stays in the backup state on the other nodes. When a node with a VIP in backup state fails, the VIP on that node enters the fault state. 
When the check script passes again for a VIP on a node in the fault state, the VIP on that node exits the fault state and negotiates to enter the master state. The VIP on that node may then enter either the master or the backup state. As a cluster administrator, you can provide an optional notify script, which is called whenever the state changes. Keepalived passes the following three parameters to the script: $1 - Group or instance $2 - Name of the group or instance $3 - The new state: master , backup , or fault The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the /hosts mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a config map. The full path names of the check and notify scripts are added to the Keepalived configuration file, /etc/keepalived/keepalived.conf , which is loaded every time Keepalived starts. The scripts can be added to the pod with a config map as follows. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Create the desired script and create a config map to hold it. The script has no input arguments and must return 0 for OK and 1 for fail . The check script, mycheckscript.sh : #!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0 Create the config map: USD oc create configmap mycustomcheck --from-file=mycheckscript.sh Add the script to the pod. The defaultMode for the mounted config map files must allow the script to run; you can set it by using oc commands or by editing the deployment configuration. A value of 0755 , 493 decimal, is typical: USD oc set env deploy/ipfailover-keepalived \ OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh USD oc set volume deploy/ipfailover-keepalived --add --overwrite \ --name=config-volume \ --mount-path=/etc/keepalive \ --source='{"configMap": { "name": "mycustomcheck", "defaultMode": 493}}' Note The oc set env command is whitespace sensitive. There must be no whitespace on either side of the = sign. Tip You can alternatively edit the ipfailover-keepalived deployment configuration: USD oc edit deploy ipfailover-keepalived spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh ... volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst ... volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume ... 1 In the spec.container.env field, add the OPENSHIFT_HA_CHECK_SCRIPT environment variable to point to the mounted script file. 2 Add the spec.container.volumeMounts field to create the mount point. 3 Add a new spec.volumes field to mention the config map. 4 This sets run permission on the files. When read back, it is displayed in decimal, 493 . Save the changes and exit the editor. This restarts ipfailover-keepalived . 11.5. Configuring VRRP preemption When a Virtual IP (VIP) on a node leaves the fault state by passing the check script, the VIP on the node enters the backup state if it has lower priority than the VIP on the node that is currently in the master state. However, if the VIP on the node that is leaving fault state has a higher priority, the preemption strategy determines its role in the cluster.
The nopreempt strategy does not move master from the lower priority VIP on the host to the higher priority VIP on the host. With preempt_delay 300 , the default, Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. Prerequisites You installed the OpenShift CLI ( oc ). Procedure To specify preemption enter oc edit deploy ipfailover-keepalived to edit the router deployment configuration: USD oc edit deploy ipfailover-keepalived ... spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300 ... 1 Set the OPENSHIFT_HA_PREEMPTION value: preempt_delay 300 : Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. This is the default value. nopreempt : does not move master from the lower priority VIP on the host to the higher priority VIP on the host. 11.6. About VRRP ID offset Each IP failover pod managed by the IP failover deployment configuration, 1 pod per node or replica, runs a Keepalived daemon. As more IP failover deployment configurations are configured, more pods are created and more daemons join into the common Virtual Router Redundancy Protocol (VRRP) negotiation. This negotiation is done by all the Keepalived daemons and it determines which nodes service which virtual IPs (VIP). Internally, Keepalived assigns a unique vrrp-id to each VIP. The negotiation uses this set of vrrp-ids , when a decision is made, the VIP corresponding to the winning vrrp-id is serviced on the winning node. Therefore, for every VIP defined in the IP failover deployment configuration, the IP failover pod must assign a corresponding vrrp-id . This is done by starting at OPENSHIFT_HA_VRRP_ID_OFFSET and sequentially assigning the vrrp-ids to the list of VIPs. The vrrp-ids can have values in the range 1..255 . When there are multiple IP failover deployment configurations, you must specify OPENSHIFT_HA_VRRP_ID_OFFSET so that there is room to increase the number of VIPs in the deployment configuration and none of the vrrp-id ranges overlap. 11.7. Configuring IP failover for more than 254 addresses IP failover management is limited to 254 groups of Virtual IP (VIP) addresses. By default OpenShift Container Platform assigns one IP address to each group. You can use the OPENSHIFT_HA_VIP_GROUPS variable to change this so multiple IP addresses are in each group and define the number of VIP groups available for each Virtual Router Redundancy Protocol (VRRP) instance when configuring IP failover. Grouping VIPs creates a wider range of allocation of VIPs per VRRP in the case of VRRP failover events, and is useful when all hosts in the cluster have access to a service locally. For example, when a service is being exposed with an ExternalIP . Note As a rule for failover, do not limit services, such as the router, to one specific host. Instead, services should be replicated to each host so that in the case of IP failover, the services do not have to be recreated on the new host. Note If you are using OpenShift Container Platform health checks, the nature of IP failover and groups means that all instances in the group are not checked. For that reason, the Kubernetes health checks must be used to ensure that services are live. Prerequisites You are logged in to the cluster with a user with cluster-admin privileges. Procedure To change the number of IP addresses assigned to each group, change the value for the OPENSHIFT_HA_VIP_GROUPS variable, for example: Example Deployment YAML for IP failover configuration ... 
spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: "3" ... 1 If OPENSHIFT_HA_VIP_GROUPS is set to 3 in an environment with seven VIPs, it creates three groups, assigning three VIPs to the first group, and two VIPs to the two remaining groups. Note If the number of groups set by OPENSHIFT_HA_VIP_GROUPS is fewer than the number of IP addresses set to fail over, the group contains more than one IP address, and all of the addresses move as a single unit. 11.8. High availability For ingressIP In non-cloud clusters, IP failover and ingressIP to a service can be combined. The result is high availability services for users that create services using ingressIP . The approach is to specify an ingressIPNetworkCIDR range and then use the same range in creating the ipfailover configuration. Because IP failover can support up to a maximum of 255 VIPs for the entire cluster, the ingressIPNetworkCIDR needs to be /24 or smaller. 11.9. Removing IP failover When IP failover is initially configured, the worker nodes in the cluster are modified with an iptables rule that explicitly allows multicast packets on 224.0.0.18 for Keepalived. Because of the change to the nodes, removing IP failover requires running a job to remove the iptables rule and removing the virtual IP addresses used by Keepalived. Procedure Optional: Identify and delete any check and notify scripts that are stored as config maps: Identify whether any pods for IP failover use a config map as a volume: USD oc get pod -l ipfailover \ -o jsonpath="\ {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\n'}{end} {end}" Example output If the preceding step provided the names of config maps that are used as volumes, delete the config maps: USD oc delete configmap <configmap_name> Identify an existing deployment for IP failover: USD oc get deployment -l ipfailover Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d Delete the deployment: USD oc delete deployment <ipfailover_deployment_name> Remove the ipfailover service account: USD oc delete sa ipfailover Run a job that removes the IP tables rule that was added when IP failover was initially configured: Create a file such as remove-ipfailover-job.yaml with contents that are similar to the following example: apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: quay.io/openshift/origin-keepalived-ipfailover:4.10 command: ["/var/lib/ipfailover/keepalived/remove-failover.sh"] nodeSelector: kubernetes.io/hostname: <host_name> <.> restartPolicy: Never <.> Run the job for each node in your cluster that was configured for IP failover and replace the hostname each time. Run the job: USD oc create -f remove-ipfailover-job.yaml Example output Verification Confirm that the job removed the initial configuration for IP failover. USD oc logs job/remove-ipfailover-2h8dm Example output remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module ... - Cleaning up ... - Releasing VIPs (interface eth0) ...
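The following is a minimal notify script sketch that follows the three positional parameters described in the check and notify scripts section. The script name and the use of logger are illustrative; you can deliver the script to the pod with a config map and point OPENSHIFT_HA_NOTIFY_SCRIPT at the mounted path, in the same way as the check script shown earlier.

#!/bin/bash
# mynotifyscript.sh - Keepalived calls this with:
#   $1 = group or instance, $2 = name of the group or instance, $3 = new state (master, backup, or fault)
TYPE="$1"
NAME="$2"
STATE="$3"
case "$STATE" in
  master) logger -t ipfailover "$TYPE $NAME transitioned to master" ;;
  backup) logger -t ipfailover "$TYPE $NAME transitioned to backup" ;;
  fault)  logger -t ipfailover "$TYPE $NAME entered the fault state" ;;
  *)      logger -t ipfailover "$TYPE $NAME reported unexpected state $STATE" ;;
esac
exit 0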
[ "oc create sa ipfailover", "oc adm policy add-scc-to-user privileged -z ipfailover oc adm policy add-scc-to-user hostnetwork -z ipfailover", "apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: openshift-ipfailover image: quay.io/openshift/origin-keepalived-ipfailover ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: \"ipfailover\" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: \"1.1.1.1-2\" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: \"10\" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: \"ens3\" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: \"30060\" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: \"0\" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: \"2\" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: \"false\" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: \"10.0.148.40,10.0.160.234,10.0.199.110\" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: \"INPUT\" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: \"/etc/keepalive/mycheckscript.sh\" - name: OPENSHIFT_HA_PREEMPTION 11 value: \"preempt_delay 300\" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: \"2\" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13", "#!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0", "oc create configmap mycustomcheck --from-file=mycheckscript.sh", "oc set env deploy/ipfailover-keepalived OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh", "oc set volume deploy/ipfailover-keepalived --add --overwrite --name=config-volume --mount-path=/etc/keepalive --source='{\"configMap\": { \"name\": \"mycustomcheck\", \"defaultMode\": 493}}'", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300", "spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: \"3\"", "oc get pod -l ipfailover -o jsonpath=\" {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use 
config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\\n'}{end} {end}\"", "Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck", "oc delete configmap <configmap_name>", "oc get deployment -l ipfailover", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d", "oc delete deployment <ipfailover_deployment_name>", "oc delete sa ipfailover", "apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: quay.io/openshift/origin-keepalived-ipfailover:4.10 command: [\"/var/lib/ipfailover/keepalived/remove-failover.sh\"] nodeSelector: kubernetes.io/hostname: <host_name> <.> restartPolicy: Never", "oc create -f remove-ipfailover-job.yaml", "job.batch/remove-ipfailover-2h8dm created", "oc logs job/remove-ipfailover-2h8dm", "remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module - Cleaning up - Releasing VIPs (interface eth0)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/configuring-ipfailover
Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations
Chapter 1. Configuring an OpenShift cluster by deploying an application with cluster configurations With Red Hat OpenShift GitOps, you can configure Argo CD to recursively sync the content of a Git directory with an application that contains custom configurations for your cluster. 1.1. Prerequisites You have logged in to the OpenShift Container Platform cluster as an administrator. You have installed the Red Hat OpenShift GitOps Operator in your cluster. You have logged in to the Argo CD instance. 1.2. Using an Argo CD instance to manage cluster-scoped resources To manage cluster-scoped resources, update the existing Subscription object for the Red Hat OpenShift GitOps Operator and add the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section. Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators Red Hat OpenShift GitOps Subscription . Click the Actions drop-down menu then click Edit Subscription . On the openshift-gitops-operator Subscription details page, under the YAML tab, edit the Subscription YAML file by adding the namespace of the Argo CD instance to the ARGOCD_CLUSTER_CONFIG_NAMESPACES environment variable in the spec section: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-operators # ... spec: config: env: - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances> # ... To verify that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources, perform the following steps: Navigate to User Management Roles and from the Filter drop-down menu select Cluster-wide Roles . Search for the argocd-application-controller by using the Search by name field. The Roles page displays the created cluster role. Tip Alternatively, in the OpenShift CLI, run the following command: oc auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller The output yes verifies that the Argo CD instance is configured with a cluster role to manage cluster-scoped resources. Otherwise, check your configuration and take the necessary corrective steps. 1.3. Default permissions of an Argo CD instance By default, the Argo CD instance has the following permissions: The Argo CD instance has admin privileges to manage resources only in the namespace where it is deployed. For instance, an Argo CD instance deployed in the foo namespace has admin privileges to manage resources only for that namespace. Argo CD has the following cluster-scoped permissions because Argo CD requires cluster-wide read privileges on resources to function appropriately: - verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - get - list nonResourceURLs: - '*' Note You can edit the cluster roles used by the argocd-server and argocd-application-controller components where Argo CD is running such that the write privileges are limited to only the namespaces and resources that you wish Argo CD to manage. USD oc edit clusterrole argocd-server USD oc edit clusterrole argocd-application-controller 1.4. Running the Argo CD instance at the cluster-level The default Argo CD instance and the accompanying controllers, installed by the Red Hat OpenShift GitOps Operator, can now run on the infrastructure nodes of the cluster by setting a simple configuration toggle.
Procedure Label the existing nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Optional: If required, you can also apply taints and isolate the workloads on infrastructure nodes and prevent other workloads from scheduling on these nodes: USD oc adm taint nodes -l node-role.kubernetes.io/infra \ infra=reserved:NoSchedule infra=reserved:NoExecute Add the runOnInfra toggle in the GitOpsService custom resource: apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true Optional: If taints have been added to the nodes, then add tolerations to the GitOpsService custom resource, for example: spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved Verify that the workloads in the openshift-gitops namespace are now scheduled on the infrastructure nodes by viewing Pods Pod details for any pod in the console UI. Note Any nodeSelectors and tolerations manually added to the default Argo CD custom resource are overwritten by the toggle and tolerations in the GitOpsService custom resource. Additional resources To learn more about taints and tolerations, see Controlling pod placement using node taints . For more information on infrastructure machine sets, see Creating infrastructure machine sets . 1.5. Creating an application by using the Argo CD dashboard Argo CD provides a dashboard that allows you to create applications. This sample workflow walks you through the process of configuring Argo CD to recursively sync the content of the cluster directory to the cluster-configs application. The directory defines the OpenShift Container Platform web console cluster configurations that add a link to the Red Hat Developer Blog - Kubernetes under the menu in the web console, and defines a namespace spring-petclinic on the cluster. Procedure In the Argo CD dashboard, click NEW APP to add a new Argo CD application. For this workflow, create a cluster-configs application with the following configurations: Application Name cluster-configs Project default Sync Policy Manual Repository URL https://github.com/redhat-developer/openshift-gitops-getting-started Revision HEAD Path cluster Destination https://kubernetes.default.svc Namespace spring-petclinic Directory Recurse checked Click CREATE to create your application. Open the Administrator perspective of the web console and navigate to Administration Namespaces in the menu on the left. Search for and select the namespace, then enter argocd.argoproj.io/managed-by=openshift-gitops in the Label field so that the Argo CD instance in the openshift-gitops namespace can manage your namespace. 1.6. Creating an application by using the oc tool You can create Argo CD applications in your terminal by using the oc tool. Procedure Download the sample application : USD git clone git@github.com:redhat-developer/openshift-gitops-getting-started.git Create the application: USD oc create -f openshift-gitops-getting-started/argo/app.yaml Run the oc get command to review the created application: USD oc get application -n openshift-gitops Add a label to the namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it: USD oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops 1.7. Synchronizing your application with your Git repository You can synchronize your application with your Git repository by modifying the synchronization policy for Argo CD.
The policy modification automatically applies the changes in your cluster configurations from your Git repository to the cluster. Procedure In the Argo CD dashboard, notice that the cluster-configs Argo CD application has the statuses Missing and OutOfSync . Because the application was configured with a manual sync policy, Argo CD does not sync it automatically. Click SYNC on the cluster-configs tile, review the changes, and then click SYNCHRONIZE . Argo CD will detect any changes in the Git repository automatically. If the configurations are changed, Argo CD will change the status of the cluster-configs to OutOfSync . You can modify the synchronization policy for Argo CD to automatically apply changes from your Git repository to the cluster. Notice that the cluster-configs Argo CD application now has the statuses Healthy and Synced . Click the cluster-configs tile to check the details of the synchronized resources and their status on the cluster. Navigate to the OpenShift Container Platform web console and click to verify that a link to the Red Hat Developer Blog - Kubernetes is now present there. Navigate to the Project page and search for the spring-petclinic namespace to verify that it has been added to the cluster. Your cluster configurations have been successfully synchronized to the cluster. 1.8. In-built permissions for cluster configuration By default, the Argo CD instance has permissions to manage specific cluster-scoped resources such as cluster Operators, optional OLM Operators and user management. Note Argo CD does not have cluster-admin permissions. Permissions for the Argo CD instance: Resources Descriptions Resource Groups Configure the user or administrator operators.coreos.com Optional Operators managed by OLM user.openshift.io , rbac.authorization.k8s.io Groups, Users and their permissions config.openshift.io Control plane Operators managed by CVO used to configure cluster-wide build configuration, registry configuration and scheduler policies storage.k8s.io Storage console.openshift.io Console customization 1.9. Adding permissions for cluster configuration You can grant permissions for an Argo CD instance to manage cluster configuration. Create a cluster role with additional permissions and then create a new cluster role binding to associate the cluster role with a service account. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin privileges and are logged in to the web console. You have installed the Red Hat OpenShift GitOps Operator on your cluster. Procedure In the web console, select User Management Roles Create Role . Use the following ClusterRole YAML template to add rules to specify the additional permissions. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: secrets-cluster-role rules: - apiGroups: [""] resources: ["secrets"] verbs: ["*"] Click Create to add the cluster role. To create the cluster role binding, select User Management Role Bindings Create Binding . Select All Projects from the Project drop-down. Click Create binding . Select Binding type as Cluster-wide role binding (ClusterRoleBinding) . Enter a unique value for the RoleBinding name . Select the newly created cluster role or an existing cluster role from the drop-down list. Select the Subject as ServiceAccount and then provide the Subject namespace and name . Subject namespace : openshift-gitops Subject name : openshift-gitops-argocd-application-controller Click Create .
The YAML file for the ClusterRoleBinding object is as follows: kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-role-binding subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: secrets-cluster-role 1.10. Installing OLM Operators using Red Hat OpenShift GitOps Red Hat OpenShift GitOps with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators. Consider a case where, as a cluster administrator, you have to install an OLM Operator such as Tekton. You use the OpenShift Container Platform web console to manually install a Tekton Operator or the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster. Red Hat OpenShift GitOps places your Kubernetes resources in your Git repository. As a cluster administrator, use Red Hat OpenShift GitOps to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository by using Red Hat OpenShift GitOps, Red Hat OpenShift GitOps automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster. 1.10.1. Installing cluster-scoped Operators Operator Lifecycle Manager (OLM) uses a default global-operators Operator group in the openshift-operators namespace for cluster-scoped Operators. Hence, you do not have to manage the OperatorGroup resource in your GitOps repository. However, for namespace-scoped Operators, you must manage the OperatorGroup resource in that namespace. To install cluster-scoped Operators, create and place the Subscription resource of the required Operator in your Git repository. Example: Grafana Operator subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: grafana spec: channel: v4 installPlanApproval: Automatic name: grafana-operator source: redhat-operators sourceNamespace: openshift-marketplace 1.10.2. Installing namespace-scoped Operators To install namespace-scoped Operators, create and place the Subscription and OperatorGroup resources of the required Operator in your Git repository. Example: Ansible Automation Platform Resource Operator # ... apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: ansible-automation-platform # ... apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform # ... apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: patch-me installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace # ... Important When deploying multiple Operators using Red Hat OpenShift GitOps, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a failure state with the TooManyOperatorGroups reason. After the number of Operator groups in their corresponding namespaces reaches one, all the failure state CSVs transition to the pending state.
You must manually approve the pending install plan to complete the Operator installation.
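To connect the Operator installation in section 1.10 to the Argo CD application workflow described earlier, the following is a minimal sketch of an Argo CD Application that syncs a Git directory containing Operator manifests, such as the Subscription and OperatorGroup resources shown above. The repository URL, the operators path, and the application name are illustrative assumptions, not values defined in this guide:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: olm-operators                    # hypothetical application name
  namespace: openshift-gitops            # namespace where the Argo CD instance runs
spec:
  project: default
  source:
    repoURL: 'https://github.com/example/cluster-config.git'  # assumed repository
    targetRevision: HEAD
    path: operators                      # assumed directory holding Subscription and OperatorGroup manifests
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: openshift-operators       # used only for manifests without an explicit metadata.namespace
  syncPolicy:
    automated:
      prune: false
      selfHeal: true

With automated sync enabled in this sketch, committing a new Subscription manifest to the assumed operators directory is enough for Argo CD to apply it and trigger the Operator installation, subject to the Operator group and install plan considerations described above.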
[ "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-operators spec: config: env: - name: ARGOCD_CLUSTER_CONFIG_NAMESPACES value: openshift-gitops, <list of namespaces of cluster-scoped Argo CD instances>", "auth can-i create oauth -n openshift-gitops --as system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller", "- verbs: - get - list - watch apiGroups: - '*' resources: - '*' - verbs: - get - list nonResourceURLs: - '*'", "oc edit clusterrole argocd-server oc edit clusterrole argocd-application-controller", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc adm taint nodes -l node-role.kubernetes.io/infra infra=reserved:NoSchedule infra=reserved:NoExecute", "apiVersion: pipelines.openshift.io/v1alpha1 kind: GitopsService metadata: name: cluster spec: runOnInfra: true", "spec: runOnInfra: true tolerations: - effect: NoSchedule key: infra value: reserved - effect: NoExecute key: infra value: reserved", "git clone [email protected]:redhat-developer/openshift-gitops-getting-started.git", "oc create -f openshift-gitops-getting-started/argo/app.yaml", "oc get application -n openshift-gitops", "oc label namespace spring-petclinic argocd.argoproj.io/managed-by=openshift-gitops", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: secrets-cluster-role rules: - apiGroups: [\"\"] resources: [\"secrets\"] verbs: [\"*\"]", "kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-role-binding subjects: - kind: ServiceAccount name: openshift-gitops-argocd-application-controller namespace: openshift-gitops roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: secrets-cluster-role", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: grafana spec: channel: v4 installPlanApproval: Automatic name: grafana-operator source: redhat-operators sourceNamespace: openshift-marketplace", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: ansible-automation-platform apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ansible-automation-platform-operator namespace: ansible-automation-platform spec: targetNamespaces: - ansible-automation-platform apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: channel: patch-me installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/declarative_cluster_configuration/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations
Customizing Anaconda
Customizing Anaconda Red Hat Enterprise Linux 9 Changing the installer appearance and creating custom add-ons on Red Hat Enterprise Linux Red Hat Customer Content Services
[ "mount -t iso9660 -o loop path/to/image.iso /mnt/iso", "mkdir /tmp/ISO", "cp -pRf /mnt/iso /tmp/ISO", "umount /mnt/iso", "label check menu label Test this ^media & install Red Hat Enterprise Linux 9. menu default kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rd.live.check quiet", "menu begin ^Troubleshooting menu title Troubleshooting label rescue menu label ^Rescue a Red Hat Enterprise Linux system kernel vmlinuz append initrd=initrd.img inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rescue quiet menu separator label returntomain menu label Return to ^main menu menu exit menu end", "menu color element ansi foreground background shadow", "menuentry 'Test this media & install Red Hat Enterprise Linux 9' --class fedora --class gnu-linux --class gnu --class os { linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rd.live.check quiet initrdefi /images/pxeboot/initrd.img }", "submenu 'Submenu title' { menuentry 'Submenu option 1' { linuxefi /images/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 xdriver=vesa nomodeset quiet initrdefi /images/pxeboot/initrd.img } menuentry 'Submenu option 2' { linuxefi /images/vmlinuz inst.stage2=hd:LABEL=RHEL-9-BaseOS-x86_64 rescue quiet initrdefi /images/initrd.img } }", "pixmaps ├─ anaconda-password-show-off.svg ├─ anaconda-password-show-on.svg ├─ right-arrow-icon.png ├─ sidebar-bg.png ├─ sidebar-logo.png └─ topbar-bg.png", "/* theme colors/images */ @define-color product_bg_color @redhat; /* logo and sidebar classes */ .logo-sidebar { background-image: url('/usr/share/anaconda/pixmaps/sidebar-bg.png'); background-color: @product_bg_color; background-repeat: no-repeat; } /* Add a logo to the sidebar */ .logo { background-image: url('/usr/share/anaconda/pixmaps/sidebar-logo.png'); background-position: 50% 20px; background-repeat: no-repeat; background-color: transparent; } /* This is a placeholder to be filled by a product-specific logo. */ .product-logo { background-image: none; background-color: transparent; } AnacondaSpokeWindow #nav-box { background-color: @product_bg_color; background-image: url('/usr/share/anaconda/pixmaps/topbar-bg.png'); background-repeat: no-repeat; color: white; }", "[Main] Product=My Distribution Version=9 BugURL=https://bugzilla.redhat.com/ IsFinal=True UUID=202007011344.x86_64 [Compose] Lorax=28.14.49-1", "Run Anaconda in the debugging mode. debug = False Enable Anaconda addons. This option is deprecated and will be removed in the future. addons_enabled = True List of enabled Anaconda DBus modules. This option is deprecated and will be removed in the future. kickstart_modules = List of Anaconda DBus modules that can be activated. Supported patterns: MODULE.PREFIX. , MODULE.NAME activatable_modules = org.fedoraproject.Anaconda.Modules. org.fedoraproject.Anaconda.Addons.* List of Anaconda DBus modules that are not allowed to run. Supported patterns: MODULE.PREFIX. , MODULE.NAME forbidden_modules = # List of Anaconda DBus modules that can fail to run. # The installation won't be aborted because of them. # Supported patterns: MODULE.PREFIX. , MODULE.NAME optional_modules = org.fedoraproject.Anaconda.Modules.Subscription org.fedoraproject.Anaconda.Addons.* Should the installer show a warning about enabled SMT? can_detect_enabled_smt = False Type of the installation target. type = HARDWARE A path to the physical root of the target. physical_root = /mnt/sysimage A path to the system root of the target. system_root = /mnt/sysroot Should we install the network configuration? 
can_configure_network = True Network device to be activated on boot if none was configured so. Valid values: # NONE No device DEFAULT_ROUTE_DEVICE A default route device FIRST_WIRED_WITH_LINK The first wired device with link # default_on_boot = NONE Default package environment. default_environment = List of ignored packages. ignored_packages = Names of repositories that provide latest updates. updates_repositories = List of .treeinfo variant types to enable. Valid items: # addon optional variant # enabled_repositories_from_treeinfo = addon optional variant Enable installation from the closest mirror. enable_closest_mirror = True Default installation source. Valid values: # CLOSEST_MIRROR Use closest public repository mirror. CDN Use Content Delivery Network (CDN). # default_source = CLOSEST_MIRROR Enable ssl verification for all HTTP connection verify_ssl = True GPG keys to import to RPM database by default. Specify paths on the installed system, each on a line. Substitutions for USDreleasever and USDbasearch happen automatically. default_rpm_gpg_keys = Enable SELinux usage in the installed system. Valid values: # -1 The value is not set. 0 SELinux is disabled. 1 SELinux is enabled. # selinux = -1 Type of the boot loader. Supported values: # DEFAULT Choose the type by platform. EXTLINUX Use extlinux as the boot loader. # type = DEFAULT Name of the EFI directory. efi_dir = default Hide the GRUB menu. menu_auto_hide = False Are non-iBFT iSCSI disks allowed? nonibft_iscsi_boot = False Arguments preserved from the installation system. preserved_arguments = cio_ignore rd.znet rd_ZNET zfcp.allow_lun_scan speakup_synth apic noapic apm ide noht acpi video pci nodmraid nompath nomodeset noiswmd fips selinux biosdevname ipv6.disable net.ifnames net.ifnames.prefix nosmt Enable dmraid usage during the installation. dmraid = True Enable iBFT usage during the installation. ibft = True Do you prefer creation of GPT disk labels? gpt = False Tell multipathd to use user friendly names when naming devices during the installation. multipath_friendly_names = True Do you want to allow imperfect devices (for example, degraded mdraid array devices)? allow_imperfect_devices = False Default file system type. Use whatever Blivet uses by default. file_system_type = Default partitioning. Specify a mount point and its attributes on each line. # Valid attributes: # size <SIZE> The size of the mount point. min <MIN_SIZE> The size will grow from MIN_SIZE to MAX_SIZE. max <MAX_SIZE> The max size is unlimited by default. free <SIZE> The required available space. # default_partitioning = / (min 1 GiB, max 70 GiB) /home (min 500 MiB, free 50 GiB) Default partitioning scheme. Valid values: # PLAIN Create standard partitions. BTRFS Use the Btrfs scheme. LVM Use the LVM scheme. LVM_THINP Use LVM Thin Provisioning. # default_scheme = LVM Default version of LUKS. Valid values: # luks1 Use version 1 by default. luks2 Use version 2 by default. # luks_version = luks2 Minimal size of the total memory. min_ram = 320 MiB Minimal size of the available memory for LUKS2. luks2_min_ram = 128 MiB Should we recommend to specify a swap partition? swap_is_recommended = False Recommended minimal sizes of partitions. Specify a mount point and a size on each line. min_partition_sizes = / 250 MiB /usr 250 MiB /tmp 50 MiB /var 384 MiB /home 100 MiB /boot 200 MiB Required minimal sizes of partitions. Specify a mount point and a size on each line. req_partition_sizes = Allowed device types of the / partition if any. Valid values: # LVM Allow LVM. 
MD Allow RAID. PARTITION Allow standard partitions. BTRFS Allow Btrfs. DISK Allow disks. LVM_THINP Allow LVM Thin Provisioning. # root_device_types = Mount points that must be on a linux file system. Specify a list of mount points. must_be_on_linuxfs = / /var /tmp /usr /home /usr/share /usr/lib Paths that must be directories on the / file system. Specify a list of paths. must_be_on_root = /bin /dev /sbin /etc /lib /root /mnt lost+found /proc Paths that must NOT be directories on the / file system. Specify a list of paths. must_not_be_on_root = Mount points that are recommended to be reformatted. # It will be recommended to create a new file system on a mount point that has an allowed prefix, but does not have a blocked one. Specify lists of mount points. reformat_allowlist = /boot /var /tmp /usr reformat_blocklist = /home /usr/local /opt /var/www The path to a custom stylesheet. custom_stylesheet = The path to a directory with help files. help_directory = /usr/share/anaconda/help A list of spokes to hide in UI. FIXME: Use other identification then names of the spokes. hidden_spokes = Should the UI allow to change the configured root account? can_change_root = False Should the UI allow to change the configured user accounts? can_change_users = False Define the default password policies. Specify a policy name and its attributes on each line. # Valid attributes: # quality <NUMBER> The minimum quality score (see libpwquality). length <NUMBER> The minimum length of the password. empty Allow an empty password. strict Require the minimum quality. # password_policies = root (quality 1, length 6) user (quality 1, length 6, empty) luks (quality 1, length 6) A path to EULA (if any) # If the given distribution has an EULA & feels the need to tell the user about it fill in this variable by a path pointing to a file with the EULA on the installed system. # This is currently used just to show the path to the file to the user at the end of the installation. eula =", "Anaconda configuration file for Red Hat Enterprise Linux. [Product] product_name = Red Hat Enterprise Linux Show a warning if SMT is enabled. can_detect_enabled_smt = True [Network] default_on_boot = DEFAULT_ROUTE_DEVICE [Payload] ignored_packages = ntfsprogs btrfs-progs dmraid enable_closest_mirror = False default_source = CDN [Boot loader] efi_dir = redhat [Storage] file_system_type = xfs default_partitioning = / (min 1 GiB, max 70 GiB) /home (min 500 MiB, free 50 GiB) swap [Storage Constraints] swap_is_recommended = True [User Interface] help_directory = /usr/share/anaconda/help/rhel [License] eula = /usr/share/redhat-release/EULA", "Anaconda configuration file for Red Hat Virtualization. 
[Product] product_name = Red Hat Virtualization (RHVH) [Base Product] product_name = Red Hat Enterprise Linux [Storage] default_scheme = LVM_THINP default_partitioning = / (min 6 GiB) /home (size 1 GiB) /tmp (size 1 GiB) /var (size 15 GiB) /var/crash (size 10 GiB) /var/log (size 8 GiB) /var/log/audit (size 2 GiB) swap [Storage Constraints] root_device_types = LVM_THINP must_not_be_on_root = /var req_partition_sizes = /var 10 GiB /boot 1 GiB", "com_example_hello_world ├─ gui │ ├─ init .py │ └─ spokes │ └─ init .py └─ tui ├─ init .py └─ spokes └─ init .py", "<!DOCTYPE busconfig PUBLIC \"-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN\" \"http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd\"> <busconfig> <policy user=\"root\"> <allow own=\"org.fedoraproject.Anaconda.Addons.HelloWorld\"/> <allow send_destination=\"org.fedoraproject.Anaconda.Addons.HelloWorld\"/> </policy> <policy context=\"default\"> <deny own=\"org.fedoraproject.Anaconda.Addons.HelloWorld\"/> <allow send_destination=\"org.fedoraproject.Anaconda.Addons.HelloWorld\"/> </policy> </busconfig>", "Start the org.fedoraproject.Anaconda.Addons.HelloWorld service. Runs org_fedora_hello_world/service/ main .py Name=org.fedoraproject.Anaconda.Addons.HelloWorld Exec=/usr/libexec/anaconda/start-module org_fedora_hello_world.service User=root", "will never be translated _ = lambda x: x N_ = lambda x: x the path to addons is in sys.path so we can import things from org_fedora_hello_world from org_fedora_hello_world.gui.categories.hello_world import HelloWorldCategory from pyanaconda.ui.gui.spokes import NormalSpoke export only the spoke, no helper functions, classes or constants all = [\"HelloWorldSpoke\"] class HelloWorldSpoke(FirstbootSpokeMixIn, NormalSpoke): \"\"\" Class for the Hello world spoke. This spoke will be in the Hello world category and thus on the Summary hub. It is a very simple example of a unit for the Anaconda's graphical user interface. Since it is also inherited form the FirstbootSpokeMixIn, it will also appear in the Initial Setup (successor of the Firstboot tool). :see: pyanaconda.ui.common.UIObject :see: pyanaconda.ui.common.Spoke :see: pyanaconda.ui.gui.GUIObject :see: pyanaconda.ui.common.FirstbootSpokeMixIn :see: pyanaconda.ui.gui.spokes.NormalSpoke \"\"\" # class attributes defined by API # # list all top-level objects from the .glade file that should be exposed # to the spoke or leave empty to extract everything builderObjects = [\"helloWorldSpokeWindow\", \"buttonImage\"] # the name of the main window widget mainWidgetName = \"helloWorldSpokeWindow\" # name of the .glade file in the same directory as this source uiFile = \"hello_world.glade\" # category this spoke belongs to category = HelloWorldCategory # spoke icon (will be displayed on the hub) # preferred are the -symbolic icons as these are used in Anaconda's spokes icon = \"face-cool-symbolic\" # title of the spoke (will be displayed on the hub) title = N_(\"_HELLO WORLD\")", "def __init__ (self, data, storage, payload): \"\"\" :see: pyanaconda.ui.common.Spoke. init :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, boot loader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload \"\"\" NormalSpoke. 
init (self, data, storage, payload) self._hello_world_module = HELLO_WORLD.get_proxy() def initialize(self): \"\"\" The initialize method that is called after the instance is created. The difference between init and this method is that this may take a long time and thus could be called in a separate thread. :see: pyanaconda.ui.common.UIObject.initialize \"\"\" NormalSpoke.initialize(self) self._entry = self.builder.get_object(\"textLines\") self._reverse = self.builder.get_object(\"reverseCheckButton\")", "def refresh(self): \"\"\" The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of internal data structures. :see: pyanaconda.ui.common.UIObject.refresh \"\"\" lines = self._hello_world_module.Lines self._entry.get_buffer().set_text(\"\".join(lines)) reverse = self._hello_world_module.Reverse self._reverse.set_active(reverse) def apply(self): \"\"\" The apply method that is called when user leaves the spoke. It should update the D-Bus service with values set in the GUI elements. \"\"\" buf = self._entry.get_buffer() text = buf.get_text(buf.get_start_iter(), buf.get_end_iter(), True) lines = text.splitlines(True) self._hello_world_module.SetLines(lines) self._hello_world_module.SetReverse(self._reverse.get_active()) def execute(self): \"\"\" The execute method that is called when the spoke is exited. It is supposed to do all changes to the runtime environment according to the values set in the GUI elements. \"\"\" # nothing to do here pass", "@property def ready(self): \"\"\" The ready property reports whether the spoke is ready, that is, can be visited or not. The spoke is made (in)sensitive based on the returned value of the ready property. :rtype: bool \"\"\" # this spoke is always ready return True @property def mandatory(self): \"\"\" The mandatory property that tells whether the spoke is mandatory to be completed to continue in the installation process. :rtype: bool \"\"\" # this is an optional spoke that is not mandatory to be completed return False", "@property def status(self): \"\"\" The status property that is a brief string describing the state of the spoke. It should describe whether all values are set and if possible also the values themselves. The returned value will appear on the hub below the spoke's title. :rtype: str \"\"\" lines = self._hello_world_module.Lines if not lines: return _(\"No text added\") elif self._hello_world_module.Reverse: return _(\"Text set with {} lines to reverse\").format(len(lines)) else: return _(\"Text set with {} lines\").format(len(lines))", "every GUIObject gets ksdata in init dialog = HelloWorldDialog(self.data) # show dialog above the lightbox with self.main_window.enlightbox(dialog.window): dialog.run()", "@classmethod def should_run(cls, environment, data): \"\"\"Run this spoke for Anaconda and Initial Setup\"\"\" return True", "def __init__(self, *args, **kwargs): \"\"\" Create the representation of the spoke. :see: simpleline.render.screen.UIScreen \"\"\" super().__init__(*args, **kwargs) self.title = N_(\"Hello World\") self._hello_world_module = HELLO_WORLD.get_proxy() self._container = None self._reverse = False self._lines = \"\" def initialize(self): \"\"\" The initialize method that is called after the instance is created. The difference between __init__ and this method is that this may take a long time and thus could be called in a separated thread. 
:see: pyanaconda.ui.common.UIObject.initialize \"\"\" # nothing to do here super().initialize() def setup(self, args=None): \"\"\" The setup method that is called right before the spoke is entered. It should update its state according to the contents of DBus modules. :see: simpleline.render.screen.UIScreen.setup \"\"\" super().setup(args) self._reverse = self._hello_world_module.Reverse self._lines = self._hello_world_module.Lines return True def refresh(self, args=None): \"\"\" The refresh method that is called every time the spoke is displayed. It should generate the UI elements according to its state. :see: pyanaconda.ui.common.UIObject.refresh :see: simpleline.render.screen.UIScreen.refresh \"\"\" super().refresh(args) self._container = ListColumnContainer( columns=1 ) self._container.add( CheckboxWidget( title=\"Reverse\", completed=self._reverse ), callback=self._change_reverse ) self._container.add( EntryWidget( title=\"Hello world text\", value=\"\".join(self._lines) ), callback=self._change_lines ) self.window.add_with_separator(self._container) def _change_reverse(self, data): \"\"\" Callback when user wants to switch checkbox. Flip state of the \"reverse\" parameter which is boolean. \"\"\" self._reverse = not self._reverse def _change_lines(self, data): \"\"\" Callback when user wants to input new lines. Show a dialog and save the provided lines. \"\"\" dialog = Dialog(\"Lines\") result = dialog.run() self._lines = result.splitlines(True) def input(self, args, key): \"\"\" The input method that is called by the main loop on user's input. * If the input should not be handled here, return it. * If the input is invalid, return InputState.DISCARDED. * If the input is handled and the current screen should be refreshed, return InputState.PROCESSED_AND_REDRAW. * If the input is handled and the current screen should be closed, return InputState.PROCESSED_AND_CLOSE. :see: simpleline.render.screen.UIScreen.input \"\"\" if self._container.process_user_input(key): return InputState.PROCESSED_AND_REDRAW if key.lower() == Prompt.CONTINUE: self.apply() self.execute() return InputState.PROCESSED_AND_CLOSE return super().input(args, key) def apply(self): \"\"\" The apply method is not called automatically for TUI. It should be called in input() if required. It should update the contents of internal data structures with values set in the spoke. \"\"\" self._hello_world_module.SetReverse(self._reverse) self._hello_world_module.SetLines(self._lines) def execute(self): \"\"\" The execute method is not called automatically for TUI. It should be called in input() if required. It is supposed to do all changes to the runtime environment according to the values set in the spoke. \"\"\" # nothing to do here pass", "class HelloWorldEditSpoke(NormalTUISpoke): \"\"\"Example class demonstrating usage of editing in TUI\"\"\" category = HelloWorldCategory def init (self, data, storage, payload): \"\"\" :see: simpleline.render.screen.UIScreen :param data: data object passed to every spoke to load/store data from/to it :type data: pykickstart.base.BaseHandler :param storage: object storing storage-related information (disks, partitioning, boot loader, etc.) :type storage: blivet.Blivet :param payload: object storing packaging-related information :type payload: pyanaconda.packaging.Payload \"\"\" super(). 
init (self, *args, **Kwargs) self.title = N_(\"Hello World Edit\") self._container = None # values for user to set self._checked = False self._unconditional_input = \"\" self._conditional_input = \"\" def refresh(self, args=None): \"\"\" The refresh method that is called every time the spoke is displayed. It should update the UI elements according to the contents of self.data. :see: pyanaconda.ui.common.UIObject.refresh :see: simpleline.render.screen.UIScreen.refresh :param args: optional argument that may be used when the screen is scheduled :type args: anything \"\"\" super().refresh(args) self._container = ListColumnContainer(columns=1) # add ListColumnContainer to window (main window container) # this will automatically add numbering and will call callbacks when required self.window.add(self._container) self._container.add(CheckboxWidget(title=\"Simple checkbox\", completed=self._checked), callback=self._checkbox_called) self._container.add(EntryWidget(title=\"Unconditional text input\", value=self._unconditional_input), callback=self._get_unconditional_input) # show conditional input only if the checkbox is checked if self._checked: self._container.add(EntryWidget(title=\"Conditional password input\", value=\"Password set\" if self._conditional_input else \"\"), callback=self._get_conditional_input) self.window.add_with_separator(self._container) def _checkbox_called(self, data): # pylint: disable=unused-argument \"\"\"Callback when user wants to switch checkbox. :param data: can be passed when adding callback in container (not used here) :type data: anything \"\"\" self._checked = not self._checked def _get_unconditional_input(self, data): # pylint: disable=unused-argument \"\"\"Callback when the user wants to set unconditional input. :param data: can be passed when adding callback in container (not used here) :type data: anything \"\"\" dialog = Dialog( \"Unconditional input\", conditions=[self._check_user_input] ) self._unconditional_input = dialog.run() def _get_conditional_input(self, data): # pylint: disable=unused-argument \"\"\"Callback when the user wants to set conditional input. :param data: can be passed when adding callback in container (not used here) :type data: anything \"\"\" dialog = PasswordDialog( \"Unconditional password input\", policy_name=PASSWORD_POLICY_ROOT ) self._conditional_input = dialog.run() def _check_user_input(self, user_input, report_func): \"\"\"Check if the user has written a valid value. :param user_input: user input for validation :type user_input: str :param report_func: function for reporting errors on user input :type report_func: func with one param \"\"\" if re.match(r'^\\w+USD', user_input): return True else: report_func(\"You must set at least one word\") return False def input(self, args, key): \"\"\" The input method that is called by the main loop on user's input. 
:param args: optional argument that may be used when the screen is scheduled :type args: anything :param key: user's input :type key: unicode :return: if the input should not be handled here, return it, otherwise return InputState.PROCESSED or InputState.DISCARDED if the input was processed successfully or not respectively :rtype: enum InputState \"\"\" if self._container.process_user_input(key): return InputState.PROCESSED_AND_REDRAW else: return super().input(args, key) @property def completed(self): # completed if user entered something non-empty to the Conditioned input return bool(self._conditional_input) @property def status(self): return \"Hidden input %s\" % (\"entered\" if self._conditional_input else \"not entered\") def apply(self): # nothing needed here, values are set in the self.args tree pass", "cd DIR", "find . | cpio -c -o | pigz -9cv > DIR / updates .img", "inst.updates=http://your-server/whatever/updates.img to boot options.", "cd /tmp", "mkdir product/", "mkdir -p product/usr/share/anaconda/addons", "cp -r ~/path/to/custom/addon/ product/usr/share/anaconda/addons/", "[Main] Product=Red Hat Enterprise Linux Version=8.4 BugURL=https://bugzilla.redhat.com/ IsFinal=True UUID=202007011344.x86_64 [Compose] Lorax=28.14.49-1", "cd product", "find . | cpio -c -o | gzip -9cv > ../product.img", "genisoimage -U -r -v -T -J -joliet-long -V \"RHEL-9 Server.x86_64\" -volset \"RHEL-9 Server.x86_64\" -A \"RHEL-9 Server.x86_64\" -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot -o ../NEWISO.iso .", "implantisomd5 ../NEWISO.iso" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/customizing_anaconda/index
Chapter 31. Getting started with an ext4 file system
Chapter 31. Getting started with an ext4 file system As a system administrator, you can create, mount, resize, back up, and restore an ext4 file system. The ext4 file system is a scalable extension of the ext3 file system. With Red Hat Enterprise Linux 8, it can support a maximum individual file size of 16 terabytes, and a file system up to a maximum of 50 terabytes. 31.1. Features of an ext4 file system Following are the features of an ext4 file system: Using extents: The ext4 file system uses extents, which improves performance when using large files and reduces metadata overhead for large files. Ext4 labels unallocated block groups and inode table sections accordingly, which allows the block groups and table sections to be skipped during a file system check. This leads to a quicker file system check, which becomes more beneficial as the file system grows in size. Metadata checksum: By default, this feature is enabled in Red Hat Enterprise Linux 8. Allocation features of an ext4 file system: Persistent pre-allocation Delayed allocation Multi-block allocation Stripe-aware allocation Extended attributes ( xattr ): This allows the system to associate several additional name and value pairs per file. Quota journaling: This avoids the need for lengthy quota consistency checks after a crash. Note The only supported journaling mode in ext4 is data=ordered (default). For more information, see the Red Hat Knowledgebase solution Is the EXT journaling option "data=writeback" supported in RHEL? . Subsecond timestamps: This provides timestamps with subsecond precision. Additional resources ext4 man page on your system 31.2. Creating an ext4 file system As a system administrator, you can create an ext4 file system on a block device using the mkfs.ext4 command. Prerequisites A partition on your disk. For information about creating MBR or GPT partitions, see Creating a partition table on a disk with parted . Alternatively, use an LVM or MD volume. Procedure To create an ext4 file system: For a regular-partition device, an LVM volume, an MD volume, or a similar device, use the following command: Replace /dev/ block_device with the path to a block device. For example, /dev/sdb1 , /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a , or /dev/my-volgroup/my-lv . In general, the default options are optimal for most usage scenarios. For striped block devices (for example, RAID5 arrays), the stripe geometry can be specified at the time of file system creation. Using proper stripe geometry enhances the performance of an ext4 file system. For example, to create a file system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command: In the given example: stride=value: Specifies the RAID chunk size stripe-width=value: Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe. Note To specify a UUID when creating a file system: Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-96d749c02da7 . Replace /dev/ block_device with the path to an ext4 file system to have the UUID added to it: for example, /dev/sda8 . To specify a label when creating a file system: To view the created ext4 file system: Additional resources ext4 and mkfs.ext4 man pages on your system 31.3. Mounting an ext4 file system As a system administrator, you can mount an ext4 file system using the mount utility. Prerequisites An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file system .
Procedure To create a mount point to mount the file system: Replace /mount/point with the directory name where the mount point of the partition must be created. To mount an ext4 file system: To mount an ext4 file system with no extra options: To mount the file system persistently, see Persistently mounting file systems . To view the mounted file system: Additional resources mount , ext4 , and fstab man pages on your system Mounting file systems 31.4. Resizing an ext4 file system As a system administrator, you can resize an ext4 file system using the resize2fs utility. The resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used. The following suffixes indicate specific units: s (sectors) - 512 byte sectors K (kilobytes) - 1,024 bytes M (megabytes) - 1,048,576 bytes G (gigabytes) - 1,073,741,824 bytes T (terabytes) - 1,099,511,627,776 bytes Prerequisites An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file system . An underlying block device of an appropriate size to hold the file system after resizing. Procedure To resize an ext4 file system, take the following steps: To shrink and grow the size of an unmounted ext4 file system: Replace /dev/block_device with the path to the block device, for example /dev/sdb1 . Replace size with the required resize value using s , K , M , G , and T suffixes. An ext4 file system may be grown while mounted using the resize2fs command: Note The size parameter is optional (and often redundant) when expanding. The resize2fs utility automatically expands to fill the available space of the container, usually a logical volume or partition. To view the resized file system: Additional resources resize2fs , e2fsck , and ext4 man pages on your system 31.5. Comparison of tools used with ext4 and XFS This section compares which tools to use to accomplish common tasks on the ext4 and XFS file systems. Task ext4 XFS Create a file system mkfs.ext4 mkfs.xfs File system check e2fsck xfs_repair Resize a file system resize2fs xfs_growfs Save an image of a file system e2image xfs_metadump and xfs_mdrestore Label or tune a file system tune2fs xfs_admin Back up a file system dump and restore xfsdump and xfsrestore Quota management quota xfs_quota File mapping filefrag xfs_bmap
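As a combined illustration of the creation, mounting, and resizing sections above, the following shell sketch walks through the full sequence on a single device. The /dev/sdb1 partition and the /mnt/data mount point are assumptions for the example, not values required by this chapter:

# Create the ext4 file system on an existing partition (assumed to be /dev/sdb1)
mkfs.ext4 /dev/sdb1

# Create a mount point and mount the file system
mkdir -p /mnt/data
mount /dev/sdb1 /mnt/data

# Grow the mounted file system to fill the underlying block device
resize2fs /dev/sdb1

# Verify the mounted, resized file system
df -h /mnt/data

For persistent mounting, as referenced in Section 31.3, an /etc/fstab entry such as UUID=<uuid_from_blkid> /mnt/data ext4 defaults 0 0 would make the mount survive a reboot; the UUID placeholder is an assumption to be replaced with the value reported by blkid.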
[ "mkfs.ext4 /dev/ block_device", "mkfs.ext4 -E stride=16,stripe-width=64 /dev/ block_device", "mkfs.ext4 -U UUID /dev/ block_device", "mkfs.ext4 -L label-name /dev/ block_device", "blkid", "mkdir /mount/point", "mount /dev/ block_device /mount/point", "df -h", "umount /dev/ block_device e2fsck -f /dev/ block_device resize2fs /dev/ block_device size", "resize2fs /mount/device size", "df -h" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/getting-started-with-an-ext4-file-system_managing-file-systems
Chapter 6. Customizing the web console in OpenShift Container Platform
Chapter 6. Customizing the web console in OpenShift Container Platform You can customize the OpenShift Container Platform web console to set a custom logo, product name, links, notifications, and command line downloads. This is especially helpful if you need to tailor the web console to meet specific corporate or government requirements. 6.1. Adding a custom logo and product name You can create custom branding by adding a custom logo or custom product name. You can set both or one without the other, as these settings are independent of each other. Prerequisites You must have administrator privileges. Create a file of the logo that you want to use. The logo can be a file in any common image format, including GIF, JPG, PNG, or SVG, and is constrained to a max-height of 60px . Image size must not exceed 1 MB due to constraints on the ConfigMap object size. Procedure Import your logo file into a config map in the openshift-config namespace: USD oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config Tip You can alternatively apply the following YAML to create the config map: apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1 1 Provide a valid base64-encoded logo. Edit the web console's Operator configuration to include customLogoFile and customProductName : USD oc edit consoles.operator.openshift.io cluster apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console Once the Operator configuration is updated, it will sync the custom logo config map into the console namespace, mount it to the console pod, and redeploy. Check for success. If there are any issues, the console cluster Operator will report a Degraded status, and the console Operator configuration will also report a CustomLogoDegraded status, but with reasons like KeyOrFilenameInvalid or NoImageProvided . To check the clusteroperator , run: USD oc get clusteroperator console -o yaml To check the console Operator configuration, run: USD oc get consoles.operator.openshift.io -o yaml 6.2. Creating custom links in the web console Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleLink . Select Instances tab Click Create Console Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1 1 Valid location settings are HelpMenu , UserMenu , ApplicationMenu , and NamespaceDashboard . 
To make the custom link appear in all namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces To make the custom link appear in only some namespaces, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called "Launcher" under "namespace" or "project" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace To make the custom link appear in the application menu, follow this example: apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24 Click Save to apply your changes. 6.3. Customizing console routes For console and downloads routes, custom routes functionality uses the ingress config route configuration API. If the console custom route is set up in both the ingress config and console-operator config, then the new ingress config custom route configuration takes precedence. The route configuration with the console-operator config is deprecated. 6.3.1. Customizing the console route You can customize the console route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. Note Add a DNS record for the custom console route that points to the application ingress load balancer. 6.3.2. Customizing the download route You can customize the download route by setting the custom hostname and TLS certificate in the spec.componentRoutes field of the cluster Ingress configuration. Prerequisites You have logged in to the cluster as a user with administrative privileges. You have created a secret in the openshift-config namespace containing the TLS certificate and key. This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches.
Tip You can create a TLS secret by using the oc create secret tls command. Procedure Edit the cluster Ingress configuration: USD oc edit ingress.config.openshift.io cluster Set the custom hostname and optionally the serving certificate and key: apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2 1 The custom hostname. 2 Reference to a secret in the openshift-config namespace that contains a TLS certificate ( tls.crt ) and key ( tls.key ). This is required if the domain for the custom hostname suffix does not match the cluster domain suffix. The secret is optional if the suffix matches. Save the file to apply the changes. Note Add a DNS record for the custom downloads route that points to the application ingress load balancer. 6.4. Customizing the login page Create Terms of Service information with custom login pages. Custom login pages can also be helpful if you use a third-party login provider, such as GitHub or Google, to show users a branded page that they trust and expect before being redirected to the authentication provider. You can also render custom error pages during the authentication process. Note Customizing the error template is limited to identity providers (IDPs) that use redirects, such as request header and OIDC-based IDPs. It does not have an effect on IDPs that use direct password authentication, such as LDAP and htpasswd. Prerequisites You must have administrator privileges. Procedure Run the following commands to create templates you can modify: USD oc adm create-login-template > login.html USD oc adm create-provider-selection-template > providers.html USD oc adm create-error-template > errors.html Create the secrets: USD oc create secret generic login-template --from-file=login.html -n openshift-config USD oc create secret generic providers-template --from-file=providers.html -n openshift-config USD oc create secret generic error-template --from-file=errors.html -n openshift-config Run: USD oc edit oauths cluster Update the specification: apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster # ... spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template Run oc explain oauths.spec.templates to understand the options. 6.5. Defining a template for an external log link If you are connected to a service that helps you browse your logs, but you need to generate URLs in a particular way, then you can define a template for your link. Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleExternalLogLink . Select Instances tab Click Create Console External Log Link and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs 6.6. Creating custom notification banners Prerequisites You must have administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleNotification . 
Select Instances tab Click Create Console Notification and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce' 1 Valid location settings are BannerTop , BannerBottom , and BannerTopBottom . Click Create to apply your changes. 6.7. Customizing CLI downloads You can configure links for downloading the CLI with custom link text and URLs, which can point directly to file packages or to an external page that provides the packages. Prerequisites You must have administrator privileges. Procedure Navigate to Administration Custom Resource Definitions . Select ConsoleCLIDownload from the list of Custom Resource Definitions (CRDs). Click the YAML tab, and then make your edits: apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links spec: description: | This is an example of download links displayName: example links: - href: 'https://www.example.com/public/example.tar' text: example for linux - href: 'https://www.example.com/public/example.mac.zip' text: example for mac - href: 'https://www.example.com/public/example.win.zip' text: example for windows Click the Save button. 6.8. Adding YAML examples to Kubernetes resources You can dynamically add YAML examples to any Kubernetes resources at any time. Prerequisites You must have cluster administrator privileges. Procedure From Administration Custom Resource Definitions , click on ConsoleYAMLSample . Click YAML and edit the file: apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - "bin/bash" - "-c" - "for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done" restartPolicy: Never Use spec.snippet to indicate that the YAML sample is not the full YAML resource definition, but a fragment that can be inserted into the existing YAML document at the user's cursor. Click Save . 6.9. Customizing user perspectives The OpenShift Container Platform web console provides two perspectives by default, Administrator and Developer . You might have more perspectives available depending on installed console plugins. As a cluster administrator, you can show or hide a perspective for all users or for a specific user role. Customizing perspectives ensures that users can view only the perspectives that are applicable to their role and tasks. For example, you can hide the Administrator perspective from unprivileged users so that they cannot manage cluster resources, users, and projects. Similarly, you can show the Developer perspective to users with the developer role so that they can create, deploy, and monitor applications. You can also customize the perspective visibility for users based on role-based access control (RBAC). For example, if you customize a perspective for monitoring purposes, which requires specific permissions, you can define that the perspective is visible only to users with required permissions. 
Each perspective includes the following mandatory parameters, which you can edit in the YAML view: id : Defines the ID of the perspective to show or hide visibility : Defines the state of the perspective along with access review checks, if needed state : Defines whether the perspective is enabled, disabled, or needs an access review check Note By default, all perspectives are enabled. When you customize the user perspective, your changes are applicable to the entire cluster. 6.9.1. Customizing a perspective using YAML view Prerequisites You must have administrator privileges. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . Select the Configuration tab and click the Console (operator.openshift.io) resource. Click the YAML tab and make your customization: To enable or disable a perspective, insert the snippet for Add user perspectives and edit the YAML code as needed: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: Enabled - id: dev visibility: state: Enabled To hide a perspective based on RBAC permissions, insert the snippet for Hide user perspectives and edit the YAML code as needed: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin requiresAccessReview: - group: rbac.authorization.k8s.io resource: clusterroles verb: list - id: dev state: Enabled To customize a perspective based on your needs, create your own YAML snippet: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: AccessReview accessReview: missing: - resource: deployment verb: list required: - resource: namespaces verb: list - id: dev visibility: state: Enabled Click Save . 6.9.2. Customizing a perspective using form view Prerequisites You must have administrator privileges. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . Select the Configuration tab and click the Console (operator.openshift.io) resource. Click Actions Customize on the right side of the page. In the General settings, customize the perspective by selecting one of the following options from the dropdown list: Enabled : Enables the perspective for all users Only visible for privileged users : Enables the perspective for users who can list all namespaces Only visible for unprivileged users : Enables the perspective for users who cannot list all namespaces Disabled : Disables the perspective for all users A notification opens to confirm that your changes are saved. Note When you customize the user perspective, your changes are automatically saved and take effect after a browser refresh. 6.10. Developer catalog and sub-catalog customization As a cluster administrator, you have the ability to organize and manage the Developer catalog or its sub-catalogs. You can enable or disable the sub-catalog types or disable the entire developer catalog. The developerCatalog.types object includes the following parameters that you must define in a snippet to use them in the YAML view: state : Defines if a list of developer catalog types should be enabled or disabled. enabled : Defines a list of developer catalog types (sub-catalogs) that are visible to users. disabled : Defines a list of developer catalog types (sub-catalogs) that are not visible to users. 
You can enable or disable the following developer catalog types (sub-catalogs) using the YAML view or the form view. Builder Images Templates Devfiles Samples Helm Charts Event Sources Event Sinks Operator Backed 6.10.1. Customizing a developer catalog or its sub-catalogs using the YAML view You can customize a developer catalog by editing the YAML content in the YAML view. Prerequisites An OpenShift web console session with cluster administrator privileges. Procedure In the Administrator perspective of the web console, navigate to Administration Cluster Settings . Select the Configuration tab, click the Console (operator.openshift.io) resource and view the Details page. Click the YAML tab to open the editor and edit the YAML content as needed. For example, to disable a developer catalog type, insert the following snippet that defines a list of disabled developer catalog resources: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster ... spec: customization: developerCatalog: categories: types: state: Disabled disabled: - BuilderImage - Devfile - HelmChart ... Click Save . Note By default, the developer catalog types are enabled in the Administrator view of the Web Console. 6.10.2. Customizing a developer catalog or its sub-catalogs using the form view You can customize a developer catalog by using the form view in the Web Console. Prerequisites An OpenShift web console session with cluster administrator privileges. The Developer perspective is enabled. Procedure In the Administrator perspective, navigate to Administration Cluster Settings . Select the Configuration tab and click the Console (operator.openshift.io) resource. Click Actions Customize . Enable or disable items in the Pre-pinned navigation items , Add page , and Developer Catalog sections. Verification After you have customized the developer catalog, your changes are automatically saved in the system and take effect in the browser after a refresh. Note As an administrator, you can define the navigation items that appear by default for all users. You can also reorder the navigation items. Tip You can use a similar procedure to customize Web UI items such as Quick starts, Cluster roles, and Actions. 6.10.2.1. Example YAML file changes You can dynamically add the following snippets in the YAML editor for customizing a developer catalog. Use the following snippet to display all the sub-catalogs by setting the state type to Enabled . apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster ... spec: customization: developerCatalog: categories: types: state: Enabled Use the following snippet to disable all sub-catalogs by setting the state type to Disabled : apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster ... spec: customization: developerCatalog: categories: types: state: Disabled Use the following snippet when a cluster administrator defines a list of sub-catalogs, which are enabled in the Web Console. apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster ... spec: customization: developerCatalog: categories: types: state: Enabled enabled: - BuilderImage - Devfile - HelmChart - ...
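If you prefer to apply one of these snippets from the command line rather than the web console YAML editor, a merge patch along the following lines should work (a sketch; the list of disabled sub-catalogs is illustrative):
oc patch consoles.operator.openshift.io cluster --type merge -p '{"spec":{"customization":{"developerCatalog":{"types":{"state":"Disabled","disabled":["BuilderImage","Devfile","HelmChart"]}}}}}'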
[ "oc create configmap console-custom-logo --from-file /path/to/console-custom-logo.png -n openshift-config", "apiVersion: v1 kind: ConfigMap metadata: name: console-custom-logo namespace: openshift-config binaryData: console-custom-logo.png: <base64-encoded_logo> ... 1", "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: customLogoFile: key: console-custom-logo.png name: console-custom-logo customProductName: My Console", "oc get clusteroperator console -o yaml", "oc get consoles.operator.openshift.io -o yaml", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: example spec: href: 'https://www.example.com' location: HelpMenu 1 text: Link 1", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-link-for-all-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard text: This appears in all namespaces", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: namespaced-dashboard-for-some-namespaces spec: href: 'https://www.example.com' location: NamespaceDashboard # This text will appear in a box called \"Launcher\" under \"namespace\" or \"project\" in the web console text: Custom Link Text namespaceDashboard: namespaces: # for these specific namespaces - my-namespace - your-namespace - other-namespace", "apiVersion: console.openshift.io/v1 kind: ConsoleLink metadata: name: application-menu-link-1 spec: href: 'https://www.example.com' location: ApplicationMenu text: Link 1 applicationMenu: section: My New Section # image that is 24x24 in size imageURL: https://via.placeholder.com/24", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: console namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc edit ingress.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: componentRoutes: - name: downloads namespace: openshift-console hostname: <custom_hostname> 1 servingCertKeyPairSecret: name: <secret_name> 2", "oc adm create-login-template > login.html", "oc adm create-provider-selection-template > providers.html", "oc adm create-error-template > errors.html", "oc create secret generic login-template --from-file=login.html -n openshift-config", "oc create secret generic providers-template --from-file=providers.html -n openshift-config", "oc create secret generic error-template --from-file=errors.html -n openshift-config", "oc edit oauths cluster", "apiVersion: config.openshift.io/v1 kind: OAuth metadata: name: cluster spec: templates: error: name: error-template login: name: login-template providerSelection: name: providers-template", "apiVersion: console.openshift.io/v1 kind: ConsoleExternalLogLink metadata: name: example spec: hrefTemplate: >- https://example.com/logs?resourceName=USD{resourceName}&containerName=USD{containerName}&resourceNamespace=USD{resourceNamespace}&podLabels=USD{podLabels} text: Example Logs", "apiVersion: console.openshift.io/v1 kind: ConsoleNotification metadata: name: example spec: text: This is an example notification message with an optional link. 
location: BannerTop 1 link: href: 'https://www.example.com' text: Optional link text color: '#fff' backgroundColor: '#0088ce'", "apiVersion: console.openshift.io/v1 kind: ConsoleCLIDownload metadata: name: example-cli-download-links spec: description: | This is an example of download links displayName: example links: - href: 'https://www.example.com/public/example.tar' text: example for linux - href: 'https://www.example.com/public/example.mac.zip' text: example for mac - href: 'https://www.example.com/public/example.win.zip' text: example for windows", "apiVersion: console.openshift.io/v1 kind: ConsoleYAMLSample metadata: name: example spec: targetResource: apiVersion: batch/v1 kind: Job title: Example Job description: An example Job YAML sample yaml: | apiVersion: batch/v1 kind: Job metadata: name: countdown spec: template: metadata: name: countdown spec: containers: - name: counter image: centos:7 command: - \"bin/bash\" - \"-c\" - \"for i in 9 8 7 6 5 4 3 2 1 ; do echo USDi ; done\" restartPolicy: Never", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: Enabled - id: dev visibility: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin requiresAccessReview: - group: rbac.authorization.k8s.io resource: clusterroles verb: list - id: dev state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: perspectives: - id: admin visibility: state: AccessReview accessReview: missing: - resource: deployment verb: list required: - resource: namespaces verb: list - id: dev visibility: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Disabled disabled: - BuilderImage - Devfile - HelmChart", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Enabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Disabled", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: customization: developerCatalog: categories: types: state: Enabled enabled: - BuilderImage - Devfile - HelmChart -" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/web_console/customizing-web-console
3.3. Recording Statistical History
3.3. Recording Statistical History The ETL service collects data into the statistical tables every minute. Data is stored for every minute of the past 24 hours, at a minimum, but can be stored for as long as 48 hours depending on the last time a deletion job was run. Minute-by-minute data more than two hours old is aggregated into hourly data and stored for two months. Hourly data more than two days old is aggregated into daily data and stored for five years. Hourly data and daily data can be found in the hourly and daily tables. Each statistical datum is kept in its respective aggregation level table: samples, hourly, and daily history. All history tables also contain a history_id column to uniquely identify rows. Tables reference the configuration version of a host in order to enable reports on statistics of an entity in relation to its past configuration.
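For example, hourly host resource usage could be retrieved from the history database with a query along these lines, run as the postgres user on the machine hosting the Data Warehouse database (a sketch; ovirt_engine_history is the default database name, and the view and column names follow the v4_4 naming convention but are assumptions that should be checked against the views documented for your installed version):
psql -d ovirt_engine_history -c "SELECT history_datetime, cpu_usage_percent, memory_usage_percent FROM v4_4_statistics_hosts_resources_usage_hourly ORDER BY history_datetime DESC LIMIT 24;"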
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/recording_statistical_history
15.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts
15.3. Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts If the Manager virtual machine shuts down or needs to be migrated, there must be enough memory on a self-hosted engine node for the Manager virtual machine to restart on or migrate to it. This memory can be reserved on multiple self-hosted engine nodes by using a scheduling policy. The scheduling policy checks if enough memory to start the Manager virtual machine will remain on the specified number of additional self-hosted engine nodes before starting or migrating any virtual machines. See Creating a Scheduling Policy in the Administration Guide for more information about scheduling policies. To add more self-hosted engine nodes to the Red Hat Virtualization Manager, see Section 15.4, "Adding Self-Hosted Engine Nodes to the Red Hat Virtualization Manager" . Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts Click Compute Clusters and select the cluster containing the self-hosted engine nodes. Click Edit . Click the Scheduling Policy tab. Click + and select HeSparesCount . Enter the number of additional self-hosted engine nodes that will reserve enough free memory to start the Manager virtual machine. Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/configuring_memory_slots_reserved_for_the_she
1.4. LVM Logical Volumes in a Red Hat High Availability Cluster
1.4. LVM Logical Volumes in a Red Hat High Availability Cluster The Red Hat High Availability Add-On provides support for LVM volumes in two distinct cluster configurations: High availability LVM volumes (HA-LVM) in active/passive failover configurations in which only a single node of the cluster accesses the storage at any one time. LVM volumes that use the Clustered Logical Volume (CLVM) extensions in active/active configurations in which more than one node of the cluster requires access to the storage at the same time. CLVM is part of the Resilient Storage Add-On. 1.4.1. Choosing CLVM or HA-LVM When to use CLVM or HA-LVM should be based on the needs of the applications or services being deployed. If multiple nodes of the cluster require simultaneous read/write access to LVM volumes in an active/active system, then you must use CLVMD. CLVMD provides a system for coordinating activation of and changes to LVM volumes across nodes of a cluster concurrently. CLVMD's clustered-locking service provides protection to LVM metadata as various nodes of the cluster interact with volumes and make changes to their layout. This protection is contingent upon appropriately configuring the volume groups in question, including setting locking_type to 3 in the lvm.conf file and setting the clustered flag on any volume group that will be managed by CLVMD and activated simultaneously across multiple cluster nodes. If the high availability cluster is configured to manage shared resources in an active/passive manner with only a single member needing access to a given LVM volume at a time, then you can use HA-LVM without the CLVMD clustered-locking service. Most applications will run better in an active/passive configuration, as they are not designed or optimized to run concurrently with other instances. Choosing to run an application that is not cluster-aware on clustered logical volumes may result in degraded performance if the logical volume is mirrored. This is because there is cluster communication overhead for the logical volumes themselves in these instances. A cluster-aware application must be able to achieve performance gains above the performance losses introduced by cluster file systems and cluster-aware logical volumes. This is achievable for some applications and workloads more easily than others. Determining what the requirements of the cluster are and whether the extra effort toward optimizing for an active/active cluster will pay dividends is the way to choose between the two LVM variants. Most users will achieve the best HA results from using HA-LVM. HA-LVM and CLVM are similar in that they prevent corruption of LVM metadata and its logical volumes, which could otherwise occur if multiple machines are allowed to make overlapping changes. HA-LVM imposes the restriction that a logical volume can only be activated exclusively; that is, active on only one machine at a time. This means that only local (non-clustered) implementations of the storage drivers are used. Avoiding the cluster coordination overhead in this way increases performance. CLVM does not impose these restrictions and a user is free to activate a logical volume on all machines in a cluster; this forces the use of cluster-aware storage drivers, which allow for cluster-aware file systems and applications to be put on top. 1.4.2. Configuring LVM volumes in a cluster In Red Hat Enterprise Linux 7, clusters are managed through Pacemaker.
Both HA-LVM and CLVM logical volumes are supported only in conjunction with Pacemaker clusters, and must be configured as cluster resources. For a procedure for configuring an HA-LVM volume as part of a Pacemaker cluster, see An active/passive Apache HTTP Server in a Red Hat High Availability Cluster in High Availability Add-On Administration . Note that this procedure includes the following steps: Configuring an LVM logical volume Ensuring that only the cluster is capable of activating the volume group Configuring the LVM volume as a cluster resource For a procedure for configuring a CLVM volume in a cluster, see Configuring a GFS2 File System in a Cluster in Global File System 2 .
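As a rough illustration of the two approaches, the Pacemaker resources might be created along the following lines (a sketch only; the volume group, resource, and group names are placeholders, and the procedures referenced above remain the authoritative steps):
# HA-LVM (active/passive): the volume group is activated exclusively by the cluster
pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true --group my_group
# CLVM (active/active): enable cluster locking on every node, then run dlm and clvmd as cloned resources
lvmconf --enable-cluster
pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone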
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_cluster_overview
Chapter 4. External storage services
Chapter 4. External storage services Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology Preview) IBM Power IBM Z The OpenShift Data Foundation operators create and manage services to satisfy Persistent Volume (PV) and Object Bucket Claims (OBCs) against the external services. The external cluster can serve block, file, and object storage classes for applications that run on OpenShift Container Platform. The operators do not deploy or manage the external clusters.
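For example, an application could request block storage from the external cluster with an ordinary Persistent Volume Claim (a sketch; the storage class name shown is typical for external mode deployments, but it is an assumption and should be confirmed with oc get storageclass on your cluster):
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-external-storagecluster-ceph-rbd
EOF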
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/planning_your_deployment/external-storage-services_rhodf
Chapter 7. Installing a cluster on AWS into an existing VPC
Chapter 7. Installing a cluster on AWS into an existing VPC In OpenShift Container Platform version 4.15, you can install a cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS). The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. If the existing VPC is owned by a different account than the cluster, you shared the VPC between accounts. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 7.2. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 7.2.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: Create a public and private subnet for each availability zone that your cluster uses. Each availability zone can contain no more than one public and one private subnet. For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation. Record each subnet ID. 
Completing the installation requires that you enter these values in the platform section of the install-config.yaml file. See Finding a subnet ID in the AWS documentation. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. The VPC must have a public internet gateway attached to it. For each availability zone: The public subnet requires a route to the internet gateway. The public subnet requires a NAT gateway with an EIP address. The private subnet requires a route to the NAT gateway in public subnet. The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. 
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 7.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 7.2.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others.
For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 7.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.2.5. Optional: AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 7.2.6. Modifying trust policy when installing into a shared VPC If you install your cluster using a shared VPC, you can use the Passthrough or Manual credentials mode. You must add the IAM role used to install the cluster as a principal in the trust policy of the account that owns the VPC. If you use Passthrough mode, add the Amazon Resource Name (ARN) of the account that creates the cluster, such as arn:aws:iam::123456789012:user/clustercreator , to the trust policy as a principal. If you use Manual mode, add the ARN of the account that creates the cluster as well as the ARN of the ingress operator role in the cluster owner account, such as arn:aws:iam::123456789012:role/<cluster-name>-openshift-ingress-operator-cloud-credentials , to the trust policy as principals. You must add the following actions to the policy: Example 7.1. Required actions for shared VPC installation route53:ChangeResourceRecordSets route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ChangeTagsForResource route53:GetAccountLimit route53:GetChange route53:GetHostedZone route53:ListTagsForResource route53:UpdateHostedZoneComment tag:GetResources tag:UntagResources 7.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 7.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.2. 
Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation". Example 7.2. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 7.6.3. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 7.3. Machine types based on 64-bit ARM architecture c6g.* m6g.* r8g.* 7.6.4. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths": ...}' 22 1 12 14 22 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 
5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 7.6.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. 
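Once the installation completes, the resulting proxy settings can be inspected with a quick check such as the following (assuming the oc CLI is logged in to the new cluster):
oc get proxy/cluster -o yaml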
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.6.6. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 7.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 7.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 7.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 7.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 7.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 7.4. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 7.5. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 7.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 7.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 7.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 7.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.11. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 7.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.13. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . After installing a cluster on AWS into an existing VPC, you can extend the AWS VPC cluster into an AWS Outpost .
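As an optional sanity check before you proceed to validating the installation, you can list the cluster Operators and confirm that they are available. This check is not part of the documented procedure and assumes that you are logged in to the cluster with the OpenShift CLI ( oc ): USD oc get clusteroperators Every Operator should report True in the AVAILABLE column and False in the DEGRADED column before you continue.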
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\": ...}' 22", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - 
s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 
3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installing-aws-vpc
Chapter 63. Class Component
Chapter 63. Class Component Available as of Camel version 2.4 The class: component binds beans to Camel message exchanges. It works in the same way as the Bean component but instead of looking up beans from a Registry it creates the bean based on the class name. 63.1. URI format Where className is the fully qualified class name to create and use as bean. 63.2. Options The Class component supports 2 options, which are listed below. Name Description Default Type cache (advanced) If enabled, Camel will cache the result of the first Registry look-up. Cache can be enabled if the bean in the Registry is defined as a singleton scope. Boolean resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Class endpoint is configured using URI syntax: with the following path and query parameters: 63.2.1. Path Parameters (1 parameters): Name Description Default Type beanName Required Sets the name of the bean to invoke String 63.2.2. Query Parameters (5 parameters): Name Description Default Type method (producer) Sets the name of the method to invoke on the bean String cache (advanced) If enabled, Camel will cache the result of the first Registry look-up. Cache can be enabled if the bean in the Registry is defined as a singleton scope. Boolean multiParameterArray (advanced) Deprecated How to treat the parameters which are passed from the message body; if it is true, the message body should be an array of parameters. Note: This option is used internally by Camel, and is not intended for end users to use. Deprecation note: This option is used internally by Camel, and is not intended for end users to use. false boolean parameters (advanced) Used for configuring additional properties on the bean Map synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 63.3. Using You simply use the class component just as the Bean component but by specifying the fully qualified classname instead. For example to use the MyFooBean you have to do as follows: from("direct:start").to("class:org.apache.camel.component.bean.MyFooBean").to("mock:result"); You can also specify which method to invoke on the MyFooBean , for example hello : from("direct:start").to("class:org.apache.camel.component.bean.MyFooBean?method=hello").to("mock:result"); 63.4. Setting properties on the created instance In the endpoint uri you can specify properties to set on the created instance, for example if it has a setPrefix method: // Camel 2.17 onwards from("direct:start") .to("class:org.apache.camel.component.bean.MyPrefixBean?bean.prefix=Bye") .to("mock:result"); // Camel 2.16 and older from("direct:start") .to("class:org.apache.camel.component.bean.MyPrefixBean?prefix=Bye") .to("mock:result"); And you can also use the # syntax to refer to properties to be looked up in the Registry. // Camel 2.17 onwards from("direct:start") .to("class:org.apache.camel.component.bean.MyPrefixBean?bean.cool=#foo") .to("mock:result"); // Camel 2.16 and older from("direct:start") .to("class:org.apache.camel.component.bean.MyPrefixBean?cool=#foo") .to("mock:result"); Which will lookup a bean from the Registry with the id foo and invoke the setCool method on the created instance of the MyPrefixBean class. TIP:See more details at the Bean component as the class component works in much the same way. 63.5. 
See Also Configuring Camel Component Endpoint Getting Started Bean Bean Binding Bean Integration
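For reference, the MyFooBean class used in the examples above is assumed to be an ordinary Java class with a public method that Camel invokes through bean binding; the following minimal sketch is illustrative only and does not reproduce the actual test class from the Camel source tree: public class MyFooBean { public String hello(String body) { return "Hello " + body; } } Camel binds the message body to the method parameter, and the method name matches the value given in the method=hello option.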
[ "class:className[?options]", "class:beanName", "from(\"direct:start\").to(\"class:org.apache.camel.component.bean.MyFooBean\").to(\"mock:result\");", "from(\"direct:start\").to(\"class:org.apache.camel.component.bean.MyFooBean?method=hello\").to(\"mock:result\");", "// Camel 2.17 onwards from(\"direct:start\") .to(\"class:org.apache.camel.component.bean.MyPrefixBean?bean.prefix=Bye\") .to(\"mock:result\"); // Camel 2.16 and older from(\"direct:start\") .to(\"class:org.apache.camel.component.bean.MyPrefixBean?prefix=Bye\") .to(\"mock:result\");", "// Camel 2.17 onwards from(\"direct:start\") .to(\"class:org.apache.camel.component.bean.MyPrefixBean?bean.cool=#foo\") .to(\"mock:result\"); // Camel 2.16 and older from(\"direct:start\") .to(\"class:org.apache.camel.component.bean.MyPrefixBean?cool=#foo\") .to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/class-component
Architecture
Architecture OpenShift Platform Plus 4 OpenShift Platform Plus architecture Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_platform_plus/4/html/architecture/index
CI/CD
CI/CD OpenShift Container Platform 4.9 Contains information on builds, pipelines and GitOps for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/cicd/index
Chapter 3. Hot Rod Java Client Configuration
Chapter 3. Hot Rod Java Client Configuration Data Grid provides a Hot Rod Java client configuration API that exposes configuration properties. 3.1. Adding Hot Rod Java Client Dependencies Add Hot Rod Java client dependencies to include it in your project. Prerequisites Java 11 or greater. Procedure Add the infinispan-client-hotrod artifact as a dependency in your pom.xml as follows: <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-client-hotrod</artifactId> </dependency> Reference Data Grid Server Requirements 3.2. Configuring Hot Rod Client Connections Configure Hot Rod Java client connections to Data Grid Server. Procedure Use the ConfigurationBuilder class to generate immutable configuration objects that you can pass to RemoteCacheManager or use a hotrod-client.properties file on the application classpath. ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .addServer() .host("192.0.2.0") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username("username") .password("changeme") .realm("default") .saslMechanism("SCRAM-SHA-512"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties Configuring Hot Rod URIs You can also configure Hot Rod client connections with URIs as follows: ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.uri("hotrod://username:[email protected]:11222,192.0.2.0:11222?auth_realm=default&sasl_mechanism=SCRAM-SHA-512"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties Adding properties outside the classpath If the hotrod-client.properties file is not on the application classpath then you need to specify the location, as in the following example: ConfigurationBuilder builder = new ConfigurationBuilder(); Properties p = new Properties(); try(Reader r = new FileReader("/path/to/hotrod-client.properties")) { p.load(r); builder.withProperties(p); } RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); Additional resources Hot Rod Client Configuration org.infinispan.client.hotrod.configuration.ConfigurationBuilder org.infinispan.client.hotrod.RemoteCacheManager 3.2.1. Defining Data Grid Clusters in Client Configuration Provide the locations of Data Grid clusters in Hot Rod client configuration. Procedure Provide at least one Data Grid cluster name along with a host name and port for at least one node with the ClusterConfigurationBuilder class. If you want to define a cluster as default, so that clients always attempt to connect to it first, then define a server list with the addServers("<host_name>:<port>; <host_name>:<port>") method. Multiple cluster connections ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addCluster("siteA") .addClusterNode("hostA1", 11222) .addClusterNode("hostA2", 11222) .addCluster("siteB") .addClusterNodes("hostB1:11222; hostB2:11222"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); Default server list with a failover cluster ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServers("hostA1:11222; hostA2:11222") .addCluster("siteB") .addClusterNodes("hostB1:11222; hostB2:11223"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); 3.2.2. 
Manually Switching Data Grid Clusters Manually switch Hot Rod Java client connections between Data Grid clusters. Procedure Call one of the following methods in the RemoteCacheManager class: switchToCluster(clusterName) switches to a specific cluster defined in the client configuration. switchToDefaultCluster() switches to the default cluster in the client configuration, which is defined as a list of Data Grid servers. Additional resources RemoteCacheManager 3.2.3. Configuring Connection Pools Hot Rod Java clients keep pools of persistent connections to Data Grid servers to reuse TCP connections instead of creating them on each request. Procedure Configure Hot Rod client connection pool settings as in the following examples: ConfigurationBuilder ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .connectionPool() .maxActive(10) exhaustedAction(ExhaustedAction.valueOf("WAIT")) .maxWait(1) .minIdle(20) .minEvictableIdleTime(300000) .maxPendingRequests(20); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); hotrod-client.properties 3.3. Configuring Authentication Mechanisms for Hot Rod Clients Data Grid Server uses different mechanisms to authenticate Hot Rod client connections. Procedure Specify authentication mechanisms with the saslMechanism() method from the AuthenticationConfigurationBuilder class or with the infinispan.client.hotrod.sasl_mechanism property. SCRAM ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("SCRAM-SHA-512") .username("myuser") .password("qwer1234!"); DIGEST ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("DIGEST-MD5") .username("myuser") .password("qwer1234!"); PLAIN ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("PLAIN") .username("myuser") .password("qwer1234!"); OAUTHBEARER String token = "..."; // Obtain the token from your OAuth2 provider ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("OAUTHBEARER") .token(token); EXTERNAL ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host("127.0.0.1") .port(11222) .security() .ssl() // TrustStore stores trusted CA certificates for the server. .trustStoreFileName("/path/to/truststore") .trustStorePassword("truststorepassword".toCharArray()) .trustStoreType("PCKS12") // KeyStore stores valid client certificates. 
.keyStoreFileName("/path/to/keystore") .keyStorePassword("keystorepassword".toCharArray()) .keyStoreType("PCKS12") .authentication() .saslMechanism("EXTERNAL"); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache("secured"); GSSAPI LoginContext lc = new LoginContext("GssExample", new BasicCallbackHandler("krb_user", "krb_password".toCharArray())); lc.login(); Subject clientSubject = lc.getSubject(); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("GSSAPI") .clientSubject(clientSubject) .callbackHandler(new BasicCallbackHandler()); Basic Callback Handler The BasicCallbackHandler , as shown in the GSSAPI example, invokes the following callbacks: NameCallback and PasswordCallback construct the client subject. AuthorizeCallback is called during SASL authentication. OAUTHBEARER with Token Callback Handler Use a TokenCallbackHandler to refresh OAuth2 tokens before they expire, as in the following example: String token = "..."; // Obtain the token from your OAuth2 provider TokenCallbackHandler tokenHandler = new TokenCallbackHandler(token); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security() .authentication() .saslMechanism("OAUTHBEARER") .callbackHandler(tokenHandler); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache("secured"); // Refresh the token tokenHandler.setToken("newToken"); Custom CallbackHandler Hot Rod clients set up a default CallbackHandler to pass credentials to SASL mechanisms. In some cases you might need to provide a custom CallbackHandler , as in the following example: public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler(String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } } } ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host("127.0.0.1") .port(11222) .security().authentication() .serverName("myhotrodserver") .saslMechanism("DIGEST-MD5") .callbackHandler(new MyCallbackHandler("myuser","default","qwer1234!".toCharArray())); Note A custom CallbackHandler needs to handle callbacks that are specific to the authentication mechanism that you use. 
However, it is beyond the scope of this document to provide examples for each possible callback type. 3.3.1. Creating GSSAPI Login Contexts To use the GSSAPI mechanism, you must create a LoginContext so your Hot Rod client can obtain a Ticket Granting Ticket (TGT). Procedure Define a login module in a login configuration file. gss.conf For the IBM JDK: gss-ibm.conf Set the following system properties: Note krb5.conf provides the location of your KDC. Use the kinit command to authenticate with Kerberos and verify krb5.conf . 3.3.2. SASL authentication mechanisms Data Grid Server supports the following SASL authentications mechanisms with Hot Rod endpoints: Authentication mechanism Description Security realm type Related details PLAIN Uses credentials in plain-text format. You should use PLAIN authentication with encrypted connections only. Property realms and LDAP realms Similar to the BASIC HTTP mechanism. DIGEST-* Uses hashing algorithms and nonce values. Hot Rod connectors support DIGEST-MD5 , DIGEST-SHA , DIGEST-SHA-256 , DIGEST-SHA-384 , and DIGEST-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. SCRAM-* Uses salt values in addition to hashing algorithms and nonce values. Hot Rod connectors support SCRAM-SHA , SCRAM-SHA-256 , SCRAM-SHA-384 , and SCRAM-SHA-512 hashing algorithms, in order of strength. Property realms and LDAP realms Similar to the Digest HTTP mechanism. GSSAPI Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. GS2-KRB5 Uses Kerberos tickets and requires a Kerberos Domain Controller. You must add a corresponding kerberos server identity in the realm configuration. In most cases, you also specify an ldap-realm to provide user membership information. Kerberos realms Similar to the SPNEGO HTTP mechanism. EXTERNAL Uses client certificates. Trust store realms Similar to the CLIENT_CERT HTTP mechanism. OAUTHBEARER Uses OAuth tokens and requires a token-realm configuration. Token realms Similar to the BEARER_TOKEN HTTP mechanism. 3.4. Configuring Hot Rod client encryption Data Grid Server can enforce SSL/TLS encryption and present Hot Rod clients with certificates to establish trust and negotiate secure connections. To verify certificates issued to Data Grid Server, Hot Rod clients require either the full certificate chain or a partial chain that starts with the Root CA. You provide server certificates to Hot Rod clients as trust stores. Tip Alternatively to providing trust stores you can use shared system certificates. Prerequisites Create a trust store that Hot Rod clients can use to verify Data Grid Server identities. If you configure Data Grid Server to validate or authenticate client certificates, create a keystore as appropriate. Procedure Add the trust store to the client configuration with the trustStoreFileName() and trustStorePassword() methods or corresponding properties. If you configure client certificate authentication, do the following: Add the keystore to the client configuration with the keyStoreFileName() and keyStorePassword() methods or corresponding properties. Configure clients to use the EXTERNAL authentication mechanism. 
ConfigurationBuilder ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host("127.0.0.1") .port(11222) .security() .ssl() // Server SNI hostname. .sniHostName("myservername") // Keystore that contains the public keys for Data Grid Server. // Clients use the trust store to verify Data Grid Server identities. .trustStoreFileName("/path/to/server/truststore") .trustStorePassword("truststorepassword".toCharArray()) .trustStoreType("PKCS12") // Keystore that contains client certificates. // Clients present these certificates to Data Grid Server. .keyStoreFileName("/path/to/client/keystore") .keyStorePassword("keystorepassword".toCharArray()) .keyStoreType("PKCS12") .authentication() // Clients must use the EXTERNAL mechanism for certificate authentication. .saslMechanism("EXTERNAL"); hotrod-client.properties Next steps Add a client trust store to the $RHDG_HOME/server/conf directory and configure Data Grid Server to use it, if necessary. Additional resources Encrypting Data Grid Server Connections SslConfigurationBuilder Hot Rod client configuration properties Using Shared System Certificates (Red Hat Enterprise Linux 7 Security Guide) 3.5. Enabling Hot Rod client statistics Hot Rod Java clients can provide statistics that include remote cache and near-cache hits and misses as well as connection pool usage. Procedure Open your Hot Rod Java client configuration for editing. Set true as the value for the statistics property or invoke the statistics().enable() methods. Export JMX MBeans for your Hot Rod client with the jmx and jmx_domain properties or invoke the jmxEnable() and jmxDomain() methods. Save and close your client configuration. Hot Rod Java client statistics ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.statistics().enable() .jmxEnable() .jmxDomain("my.domain.org") .addServer() .host("127.0.0.1") .port(11222); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build()); hotrod-client.properties infinispan.client.hotrod.statistics = true infinispan.client.hotrod.jmx = true infinispan.client.hotrod.jmx_domain = my.domain.org 3.6. Hot Rod client tracing propagation When you configure OpenTelemetry tracing on both the client VM and the Data Grid Server, the Hot Rod client enables automatic correlation of tracing spans between the client application and the Data Grid Server. Disabling tracing propagation from the client to the Data Grid Server Prerequisites Have OpenTelemetry tracing enabled on the Data Grid Server and the client side. Procedure Use the disableTracingPropagation() method to disable OpenTelemetry tracing propagation. import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .disableTracingPropagation(); The Hot Rod client stops sending tracing data to the Data Grid Server. Additional resources Enabling Data Grid tracing 3.7. Near Caches Near caches are local to Hot Rod clients and store recently used data so that every read operation does not need to traverse the network, which significantly increases performance. Near caches: Are populated with read operations, calls to get() or getVersioned() methods.
In the following example the put() call does not populate the near cache and only has the effect of invalidating the entry if it already exists: cache.put("k1", "v1"); cache.get("k1"); Register a client listener to invalidate entries when they are updated or removed in remote caches on Data Grid Server. If entries are requested after they are invalidated, clients must retrieve them from the remote caches again. Are cleared when clients fail over to different servers. Bounded near caches You should always use bounded near caches by specifying the maximum number of entries they can contain. When near caches reach the maximum number of entries, eviction automatically takes place to remove older entries. This means you do not need to manually keep the cache size within the boundaries of the client JVM. Important Do not use maximum idle expiration with near caches because near-cache reads do not propagate the last access time for entries. Bloom filters Bloom filters optimize performance for write operations by reducing the total number of invalidation messages. Bloom filters: Reside on Data Grid Server and keep track of the entries that the client has requested. Require a connection pool configuration that has a maximum of one active connection per server and uses the WAIT exhausted action. Cannot be used with unbounded near caches. 3.7.1. Configuring Near Caches Configure Hot Rod Java clients with near caches to store recently used data locally in the client JVM. Procedure Open your Hot Rod Java client configuration. Configure each cache to perform near caching with the nearCacheMode(NearCacheMode.INVALIDATED) method. Note Data Grid provides global near cache configuration properties. However, those properties are deprecated and you should not use them but configure near caching on a per-cache basis instead. Specify the maximum number of entries that the near cache can hold before eviction occurs with the nearCacheMaxEntries() method. Enable bloom filters for near caches with the nearCacheUseBloomFilter() method. import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.NearCacheMode; import org.infinispan.client.hotrod.configuration.ExhaustedAction; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username("username") .password("password") .realm("default") .saslMechanism("SCRAM-SHA-512") // Configure the connection pool for bloom filters. .connectionPool() .maxActive(1) .exhaustedAction(ExhaustedAction.WAIT); // Configure near caching for specific caches builder.remoteCache("books") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(100) .nearCacheUseBloomFilter(false); builder.remoteCache("authors") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(200) .nearCacheUseBloomFilter(true); Additional resources org.infinispan.client.hotrod.configuration.NearCacheConfiguration org.infinispan.client.hotrod.configuration.ExhaustedAction 3.8. Forcing Return Values To avoid sending data unnecessarily, write operations on remote caches return null instead of values. For example, the following method calls do not return values for keys: V remove(Object key); V put(K key, V value); You can, however, change the default behavior so your invocations return values for keys. 
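The contrast is easier to see in a short sketch. The following is an illustration only, not one of the product examples: it assumes a RemoteCacheManager built as in the earlier configuration examples and a remote cache with the placeholder name "mycache", and it uses the FORCE_RETURN_VALUE flag that the procedure below describes.

import org.infinispan.client.hotrod.Flag;
import org.infinispan.client.hotrod.RemoteCache;

// Assumes remoteCacheManager was created as shown in the earlier examples.
RemoteCache<String, String> cache = remoteCacheManager.getCache("mycache");

// Default behavior: put() returns null, even if "aKey" already had a value.
String defaultResult = cache.put("aKey", "v1");

// With FORCE_RETURN_VALUE, the same call returns the previous value ("v1" in this sketch).
String previousValue = cache.withFlags(Flag.FORCE_RETURN_VALUE).put("aKey", "v2");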
Procedure Configure Hot Rod clients so method calls return values for keys in one of the following ways: FORCE_RETURN_VALUE flag cache.withFlags(Flag.FORCE_RETURN_VALUE).put("aKey", "newValue") Per-cache ConfigurationBuilder builder = new ConfigurationBuilder(); // Return values for keys for invocations for a specific cache. builder.remoteCache("mycache") .forceReturnValues(true); hotrod-client.properties Additional resources org.infinispan.client.hotrod.Flag 3.9. Creating remote caches from Hot Rod clients Use the Data Grid Hot Rod API to create remote caches on Data Grid Server from Java, C++, .NET/C#, JS clients and more. This procedure shows you how to use Hot Rod Java clients that create remote caches on first access. You can find code examples for other Hot Rod clients in the Data Grid Tutorials . Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Invoke the remoteCache() method as part of your ConfigurationBuilder . Set the configuration or configuration_uri properties in the hotrod-client.properties file on your classpath. ConfigurationBuilder File file = new File("path/to/infinispan.xml"); ConfigurationBuilder builder = new ConfigurationBuilder(); builder.remoteCache("another-cache") .configuration("<distributed-cache name=\"another-cache\"/>"); builder.remoteCache("my.other.cache") .configurationURI(file.toURI()); hotrod-client.properties Important If the name of your remote cache contains the . character, you must enclose it in square brackets when using hotrod-client.properties files. Additional resources Hot Rod Client Configuration org.infinispan.client.hotrod.configuration.RemoteCacheConfigurationBuilder
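As a closing illustration, the following sketch combines the connection settings used throughout this guide with a cache that is created on first access. Treat it as an outline only: the server address, credentials, and cache name are the placeholder values from the earlier examples, and the inline cache configuration is the same one shown above.

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CreateCacheOnFirstAccess {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer()
                  .host("127.0.0.1")
                  .port(11222)
               .security().authentication()
                  .username("username")
                  .password("changeme")
                  .realm("default")
                  .saslMechanism("SCRAM-SHA-512");
        // The cache configuration is supplied inline and applied when the cache is first accessed.
        builder.remoteCache("another-cache")
               .configuration("<distributed-cache name=\"another-cache\"/>");

        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        try {
            RemoteCache<String, String> cache = cacheManager.getCache("another-cache");
            cache.put("hello", "world");
            System.out.println(cache.get("hello"));
        } finally {
            cacheManager.stop();
        }
    }
}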
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-client-hotrod</artifactId> </dependency>", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .addServer() .host(\"192.0.2.0\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username(\"username\") .password(\"changeme\") .realm(\"default\") .saslMechanism(\"SCRAM-SHA-512\"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());", "infinispan.client.hotrod.server_list = 127.0.0.1:11222,192.0.2.0:11222 infinispan.client.hotrod.auth_username = username infinispan.client.hotrod.auth_password = changeme infinispan.client.hotrod.auth_realm = default infinispan.client.hotrod.sasl_mechanism = SCRAM-SHA-512", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.uri(\"hotrod://username:[email protected]:11222,192.0.2.0:11222?auth_realm=default&sasl_mechanism=SCRAM-SHA-512\"); RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());", "infinispan.client.hotrod.uri = hotrod://username:[email protected]:11222,192.0.2.0:11222?auth_realm=default&sasl_mechanism=SCRAM-SHA-512", "ConfigurationBuilder builder = new ConfigurationBuilder(); Properties p = new Properties(); try(Reader r = new FileReader(\"/path/to/hotrod-client.properties\")) { p.load(r); builder.withProperties(p); } RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addCluster(\"siteA\") .addClusterNode(\"hostA1\", 11222) .addClusterNode(\"hostA2\", 11222) .addCluster(\"siteB\") .addClusterNodes(\"hostB1:11222; hostB2:11222\"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServers(\"hostA1:11222; hostA2:11222\") .addCluster(\"siteB\") .addClusterNodes(\"hostB1:11222; hostB2:11223\"); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .connectionPool() .maxActive(10) exhaustedAction(ExhaustedAction.valueOf(\"WAIT\")) .maxWait(1) .minIdle(20) .minEvictableIdleTime(300000) .maxPendingRequests(20); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(clientBuilder.build());", "infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.connection_pool.max_active = 10 infinispan.client.hotrod.connection_pool.exhausted_action = WAIT infinispan.client.hotrod.connection_pool.max_wait = 1 infinispan.client.hotrod.connection_pool.min_idle = 20 infinispan.client.hotrod.connection_pool.min_evictable_idle_time = 300000 infinispan.client.hotrod.connection_pool.max_pending_requests = 20", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"SCRAM-SHA-512\") .username(\"myuser\") .password(\"qwer1234!\");", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"DIGEST-MD5\") .username(\"myuser\") .password(\"qwer1234!\");", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) 
.security() .authentication() .saslMechanism(\"PLAIN\") .username(\"myuser\") .password(\"qwer1234!\");", "String token = \"...\"; // Obtain the token from your OAuth2 provider ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"OAUTHBEARER\") .token(token);", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .security() .ssl() // TrustStore stores trusted CA certificates for the server. .trustStoreFileName(\"/path/to/truststore\") .trustStorePassword(\"truststorepassword\".toCharArray()) .trustStoreType(\"PCKS12\") // KeyStore stores valid client certificates. .keyStoreFileName(\"/path/to/keystore\") .keyStorePassword(\"keystorepassword\".toCharArray()) .keyStoreType(\"PCKS12\") .authentication() .saslMechanism(\"EXTERNAL\"); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\");", "LoginContext lc = new LoginContext(\"GssExample\", new BasicCallbackHandler(\"krb_user\", \"krb_password\".toCharArray())); lc.login(); Subject clientSubject = lc.getSubject(); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"GSSAPI\") .clientSubject(clientSubject) .callbackHandler(new BasicCallbackHandler());", "String token = \"...\"; // Obtain the token from your OAuth2 provider TokenCallbackHandler tokenHandler = new TokenCallbackHandler(token); ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security() .authentication() .saslMechanism(\"OAUTHBEARER\") .callbackHandler(tokenHandler); remoteCacheManager = new RemoteCacheManager(clientBuilder.build()); RemoteCache<String, String> cache = remoteCacheManager.getCache(\"secured\"); // Refresh the token tokenHandler.setToken(\"newToken\");", "public class MyCallbackHandler implements CallbackHandler { final private String username; final private char[] password; final private String realm; public MyCallbackHandler(String username, String realm, char[] password) { this.username = username; this.password = password; this.realm = realm; } @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(username); } else if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password); } else if (callback instanceof AuthorizeCallback) { AuthorizeCallback authorizeCallback = (AuthorizeCallback) callback; authorizeCallback.setAuthorized(authorizeCallback.getAuthenticationID().equals( authorizeCallback.getAuthorizationID())); } else if (callback instanceof RealmCallback) { RealmCallback realmCallback = (RealmCallback) callback; realmCallback.setText(realm); } else { throw new UnsupportedCallbackException(callback); } } } } ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder.addServer() .host(\"127.0.0.1\") .port(11222) .security().authentication() .serverName(\"myhotrodserver\") .saslMechanism(\"DIGEST-MD5\") .callbackHandler(new 
MyCallbackHandler(\"myuser\",\"default\",\"qwer1234!\".toCharArray()));", "GssExample { com.sun.security.auth.module.Krb5LoginModule required client=TRUE; };", "GssExample { com.ibm.security.auth.module.Krb5LoginModule required client=TRUE; };", "java.security.auth.login.config=gss.conf java.security.krb5.conf=/etc/krb5.conf", "ConfigurationBuilder clientBuilder = new ConfigurationBuilder(); clientBuilder .addServer() .host(\"127.0.0.1\") .port(11222) .security() .ssl() // Server SNI hostname. .sniHostName(\"myservername\") // Keystore that contains the public keys for Data Grid Server. // Clients use the trust store to verify Data Grid Server identities. .trustStoreFileName(\"/path/to/server/truststore\") .trustStorePassword(\"truststorepassword\".toCharArray()) .trustStoreType(\"PCKS12\") // Keystore that contains client certificates. // Clients present these certificates to Data Grid Server. .keyStoreFileName(\"/path/to/client/keystore\") .keyStorePassword(\"keystorepassword\".toCharArray()) .keyStoreType(\"PCKS12\") .authentication() // Clients must use the EXTERNAL mechanism for certificate authentication. .saslMechanism(\"EXTERNAL\");", "infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.use_ssl = true infinispan.client.hotrod.sni_host_name = myservername Keystore that contains the public keys for Data Grid Server. Clients use the trust store to verify Data Grid Server identities. infinispan.client.hotrod.trust_store_file_name = server_truststore.pkcs12 infinispan.client.hotrod.trust_store_password = changeme infinispan.client.hotrod.trust_store_type = PCKS12 Keystore that contains client certificates. Clients present these certificates to Data Grid Server. infinispan.client.hotrod.key_store_file_name = client_keystore.pkcs12 infinispan.client.hotrod.key_store_password = changeme infinispan.client.hotrod.key_store_type = PCKS12 Clients must use the EXTERNAL mechanism for certificate authentication. infinispan.client.hotrod.sasl_mechanism = EXTERNAL", "ConfigurationBuilder builder = new ConfigurationBuilder(); builder.statistics().enable() .jmxEnable() .jmxDomain(\"my.domain.org\") .addServer() .host(\"127.0.0.1\") .port(11222); RemoteCacheManager remoteCacheManager = new RemoteCacheManager(builder.build());", "infinispan.client.hotrod.statistics = true infinispan.client.hotrod.jmx = true infinispan.client.hotrod.jmx_domain = my.domain.org", "import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .disableTracingPropagation();", "cache.put(\"k1\", \"v1\"); cache.get(\"k1\");", "import org.infinispan.client.hotrod.configuration.ConfigurationBuilder; import org.infinispan.client.hotrod.configuration.NearCacheMode; import org.infinispan.client.hotrod.configuration.ExhaustedAction; ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(ConfigurationProperties.DEFAULT_HOTROD_PORT) .security().authentication() .username(\"username\") .password(\"password\") .realm(\"default\") .saslMechanism(\"SCRAM-SHA-512\") // Configure the connection pool for bloom filters. 
.connectionPool() .maxActive(1) .exhaustedAction(ExhaustedAction.WAIT); // Configure near caching for specific caches builder.remoteCache(\"books\") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(100) .nearCacheUseBloomFilter(false); builder.remoteCache(\"authors\") .nearCacheMode(NearCacheMode.INVALIDATED) .nearCacheMaxEntries(200) .nearCacheUseBloomFilter(true);", "V remove(Object key); V put(K key, V value);", "cache.withFlags(Flag.FORCE_RETURN_VALUE).put(\"aKey\", \"newValue\")", "ConfigurationBuilder builder = new ConfigurationBuilder(); // Return previous values for keys for invocations for a specific cache. builder.remoteCache(\"mycache\") .forceReturnValues(true);", "Use the \"*\" wildcard in the cache name to return previous values for all caches that start with the \"somecaches\" string. infinispan.client.hotrod.cache.somecaches*.force_return_values = true", "File file = new File(\"path/to/infinispan.xml\") ConfigurationBuilder builder = new ConfigurationBuilder(); builder.remoteCache(\"another-cache\") .configuration(\"<distributed-cache name=\\\"another-cache\\\"/>\"); builder.remoteCache(\"my.other.cache\") .configurationURI(file.toURI());", "infinispan.client.hotrod.cache.another-cache.configuration=<distributed-cache name=\\\"another-cache\\\"/> infinispan.client.hotrod.cache.[my.other.cache].configuration_uri=file:///path/to/infinispan.xml" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_java_client_guide/hotrod-client-configuration_hot_rod
Chapter 3. Configuring data backup and recovery options
Chapter 3. Configuring data backup and recovery options This chapter explains how to add disaster recovery capabilities to your Red Hat Hyperconverged Infrastructure for Virtualization deployment so that you can restore your cluster to a working state after a disk or server failure. 3.1. Prerequisites 3.1.1. Prerequisites for geo-replication Be aware of the following requirements and limitations when configuring geo-replication: Two different managers required The source and destination volumes for geo-replication must be managed by different instances of Red Hat Virtualization Manager. 3.1.2. Prerequisites for failover and failback configuration Versions must match between environments Ensure that the primary and secondary environments have the same version of Red Hat Virtualization Manager, with identical data center compatibility versions, cluster compatibility versions, and PostgreSQL versions. No virtual machine disks in the hosted engine storage domain The storage domain used by the hosted engine virtual machine is not failed over, so any virtual machine disks in this storage domain will be lost. Execute Ansible playbooks manually from a separate machine Generate and execute Ansible playbooks manually from a separate machine that acts as an Ansible controller node. This node must have the ovirt-ansible-collection package, which provides all required disaster recovery Ansible roles. Note The ovirt-ansible-collection package is installed with the Hosted Engine virtual machine by default. However, during a disaster that affects the primary site, this virtual machine may be down. It is safe to use a machine that is outside the primary site to run this playbook, but for testing purposes these playbooks can be triggered from the Hosted Engine virtual machine. 3.2. Supported backup and recovery configurations There are two supported ways to add disaster recovery capabilities to your Red Hat Hyperconverged Infrastructure for Virtualization deployment. Configure backing up to a secondary volume only Regularly synchronizing your data to a remote secondary volume helps to ensure that your data is not lost in the event of disk or server failure. This option is suitable if the following statements are true of your deployment. You require only a backup of your data for disaster recovery. You do not require highly available storage. You do not want to maintain a secondary cluster. You are willing to manually restore your data and reconfigure your backup solution after a failure has occurred. Follow the instructions in Configuring backup to a secondary volume to configure this option. Configure failing over to and failing back from a secondary cluster This option provides failover and failback capabilities in addition to backing up data on a remote volume. Configuring failover of your primary cluster's operations and storage domains to a secondary cluster helps to ensure that your data remains available in event of disk or server failure in the primary cluster. This option is suitable if the following statements are true of your deployment. You require highly available storage. You are willing to maintain a secondary cluster. You do not want to manually restore your data or reconfigure your backup solution after a failure has occurred. Follow the instructions in Configuring failover to and failback from a secondary cluster to configure this option. Red Hat recommends that you configure at least a backup volume for production deployments. 3.3. 
Configuring backup to a secondary volume This section covers how to back up a gluster volume to a secondary gluster volume using geo-replication. To do this, you must: Ensure that all prerequisites are met. Create a suitable volume to use as a geo-replication target . Configure a geo-replication session between the source volume and the target volume. Schedule the geo-replication process. 3.3.1. Prerequisites 3.3.1.1. Enable shared storage on the source volume Ensure that the volume you want to back up (the source volume) has shared storage enabled. Run the following command on any server that hosts the source volume to enable shared storage. Ensure that a gluster volume named gluster_shared_storage is created in the source cluster, and is mounted at /run/gluster/shared_storage on all the nodes in the source cluster. See Setting Up Shared Storage for further information. 3.3.1.2. Match network protocol Ensure that all hosts use the same Internet Protocol version. If the hosts for your source volume use IPv4, the hosts for the target volume must also use IPv4. If the hosts for your source volume use IPv6, the hosts for the target volume must use IPv6. Additionally, configure geo-replication using FQDNs instead of IPv6 addresses to avoid Bug 1855965 . 3.3.2. Create a suitable target volume for geo-replication Prepare a secondary gluster volume to hold the geo-replicated copy of your source volume. This target volume should be in a separate cluster, hosted at a separate site, so that the risk of source and target volumes being affected by the same outages is minimised. Ensure that the target volume for geo-replication has sharding enabled. Run the following command on any node that hosts the target volume to enable sharding on that volume. 3.3.3. Configuring geo-replication for backing up volumes 3.3.3.1. Creating a geo-replication session A geo-replication session is required to replicate data from an active source volume to a passive target volume. Important Only rsync based geo-replication is supported with Red Hat Hyperconverged Infrastructure for Virtualization. Create a common pem pub file. Run the following command on a source node that has key-based SSH authentication without a password configured to the target nodes. Create the geo-replication session Run the following command to create a geo-replication session between the source and target volumes, using the created pem pub file for authentication. For example, the following command creates a geo-replication session from a source volume prodvol to a target volume called backupvol , which is hosted by backup.example.com . By default this command verifies that the target volume is a valid target with available space. You can append the force option to the command to ignore failed verification. Configure a meta-volume This relies on the source volume having shared storage configured, as described in Prerequisites . Important Do not start the geo-replication session. Starting the geo-replication session begins replication from your source volume to your target volume. 3.3.3.2. Verifying creation of a geo-replication session Log in to the Administration Portal on any source node. Click Storage Volumes . Check the Info column for the geo-replication icon. If this icon is present, geo-replication has been configured for that volume. If this icon is not present, try synchronizing the volume . 3.3.3.3. Synchronizing volume state using the Administration Portal Log in to the Administration Portal. Click Storage Volumes . 
Select the volume that you want to synchronize. Click the Geo-replication sub-tab. Click Sync . 3.3.4. Scheduling regular backups using geo-replication Log in to the Administration Portal on any source node. Click Storage Domains . Click the name of the storage domain that you want to back up. Click the Remote Data Sync Setup subtab. Click Setup . The Setup Remote Data Synchronization window opens. In the Geo-replicated to field, select the backup target. In the Recurrence field, select a recurrence interval type. Valid values are WEEKLY with at least one weekday checkbox selected, or DAILY . In the Hours and Minutes field, specify the time to start synchronizing. Note This time is based on the Hosted Engine's timezone. Click OK . Check the Events subtab for the source volume at the time you specified to verify that synchronization works correctly. 3.4. Configuring failover to and failback from a secondary cluster This section covers how to configure your cluster to fail over to a remote secondary cluster in the event of server failure. To do this, you must: Configure backing up to a remote volume . Create a suitable cluster to use as a failover target . Prepare a mapping file for the source and target clusters. Prepare a failover playbook . Prepare a cleanup playbook for the primary cluster. Prepare a failback playbook . 3.4.1. Creating a secondary cluster for failover Install and configure a secondary cluster that can be used in place of the primary cluster in the event of failure. This secondary cluster can be either of the following configurations: Red Hat Hyperconverged Infrastructure See Deploying Red Hat Hyperconverged Infrastructure for details. Red Hat Gluster Storage configured for use as a Red Hat Virtualization storage domain See Configuring Red Hat Virtualization with Red Hat Gluster Storage for details. Note that creating a storage domain is not necessary for this use case; the storage domain is imported as part of the failover process. The storage on the secondary cluster must not be attached to a data center, so that it can be added to the secondary site's data center during the failover process. 3.4.2. Creating a mapping file between source and target clusters Follow this section to create a file that maps the storage in your source cluster to the storage in your target cluster. Red Hat recommends that you create this file immediately after you first deploy your storage, and keep it up to date as your deployment changes. This helps to ensure that everything in your cluster fails over safely in the event of disaster. Create a playbook to generate the mapping file. Create a playbook that passes information about your cluster to the ovirt.ovirt.disaster_recovery role, using the site , username , password , and ca variables. Example playbook file: dr-ovirt-setup.yml Generate the mapping file by running the playbook with the generate_mapping tag. This creates the mapping file, disaster_recovery_vars.yml . Edit disaster_recovery_vars.yml and add information about the secondary cluster. Ensure that you only mention storage domains that have data synchronized to the secondary site; other storage domains can be removed. See Appendix A: Mapping File Attributes in the Red Hat Virtualization Disaster Recovery Guide for detailed information about attributes used in the mapping file. 3.4.3. Creating a failover playbook between source and target clusters Create a playbook file to handle failover. 
Define a password file (for example passwords.yml ) to store the Manager passwords for the primary and secondary site. For example: Example passwords.yml file Note For extra security you can encrypt the password file. However, you will need to use the --ask-vault-pass parameter when running the playbook. See Working with files encrypted using Ansible Vault for more information. Create a playbook file that passes the lists of hyperconverged hosts to use as a failover source and target to the ovirt.ovirt.disaster_recovery role, using the dr_target_host and dr_source_map variables. Example playbook file: dr-rhv-failover.yml For information about executing failover, see Failing over to a secondary cluster . 3.4.4. Creating a failover cleanup playbook for your primary cluster Create a playbook file that cleans up your primary cluster so that you can use it as a failback target. Example playbook file: dr-cleanup.yml For information about executing failback, see Failing back to a primary cluster . 3.4.5. Creating a failback playbook between source and target clusters Create a playbook file that passes the lists of hyperconverged hosts to use as a failback source and target to the ovirt.ovirt.disaster_recovery role, using the dr_target_host and dr_source_map variables. Example playbook file: dr-rhv-failback.yml For information about executing failback, see Failing back to a primary cluster .
[ "gluster volume set all cluster.enable-shared-storage enable", "gluster volume set <volname> features.shard enable", "gluster system:: execute gsec_create", "gluster volume geo-replication <SOURCE_VOL> <TARGET_NODE>::<TARGET_VOL> create push-pem", "gluster volume geo-replication prodvol backup.example.com::backupvol create push-pem", "gluster volume geo-replication <SOURCE_VOL> <TARGET_HOST>::<TARGET_VOL> config use_meta_volume true", "--- - name: Collect mapping variables hosts: localhost connection: local vars: site: https://example.engine.redhat.com/ovirt-engine/api username: admin@internal password: my_password ca: /etc/pki/ovirt-engine/ca.pem var_file: disaster_recovery_vars.yml roles: - ovirt.ovirt.disaster_recovery", "ansible-playbook dr-ovirt-setup.yml --tags=\"generate_mapping\"", "--- This file is in plain text, if you want to encrypt this file, please execute following command: # USD ansible-vault encrypt passwords.yml # It will ask you for a password, which you must then pass to ansible interactively when executing the playbook. # USD ansible-playbook myplaybook.yml --ask-vault-pass # dr_sites_primary_password: primary_password dr_sites_secondary_password: secondary_password", "--- - name: Failover RHV hosts: localhost connection: local vars: dr_target_host: secondary dr_source_map: primary vars_files: - disaster_recovery_vars.yml - passwords.yml roles: - ovirt.ovirt.disaster_recovery", "--- - name: Clean RHV hosts: localhost connection: local vars: dr_source_map: primary vars_files: - disaster_recovery_vars.yml roles: - ovirt.ovirt.disaster_recovery", "--- - name: Failback RHV hosts: localhost connection: local vars: dr_target_host: primary dr_source_map: secondary vars_files: - disaster_recovery_vars.yml - passwords.yml roles: - ovirt.ovirt.disaster_recovery" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/config-backup-recovery
Chapter 3. Kafka Bridge configuration
Chapter 3. Kafka Bridge configuration Configure a deployment of the Kafka Bridge using configuration properties. Configure Kafka and specify the HTTP connection details needed to be able to interact with Kafka. You can also use configuration properties to enable and use distributed tracing with the Kafka Bridge. Distributed tracing allows you to track the progress of transactions between applications in a distributed system. Note Use the KafkaBridge resource to configure properties when you are running the Kafka Bridge on OpenShift . 3.1. Configuring Kafka Bridge properties This procedure describes how to configure the Kafka and HTTP connection properties used by the Kafka Bridge. You configure the Kafka Bridge, as any other Kafka client, using appropriate prefixes for Kafka-related properties. kafka. for general configuration that applies to producers and consumers, such as server connection and security. kafka.consumer. for consumer-specific configuration passed only to the consumer. kafka.producer. for producer-specific configuration passed only to the producer. As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). CORS is a HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP methods to access them. Additional HTTP headers in requests describe the CORS origins that are permitted access to the Kafka cluster. Prerequisites The Kafka Bridge installation archive is downloaded Procedure Edit the application.properties file provided with the Kafka Bridge installation archive. Use the properties file to specify Kafka and HTTP-related properties. Configure standard Kafka-related properties, including properties specific to the Kafka consumers and producers. Use: kafka.bootstrap.servers to define the host/port connections to the Kafka cluster kafka.producer.acks to provide acknowledgments to the HTTP client kafka.consumer.auto.offset.reset to determine how to manage reset of the offset in Kafka For more information on configuration of Kafka properties, see the Apache Kafka website Configure HTTP-related properties to enable HTTP access to the Kafka cluster. For example: bridge.id=my-bridge http.host=0.0.0.0 http.port=8080 1 http.cors.enabled=true 2 http.cors.allowedOrigins=https://strimzi.io 3 http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH 4 1 The default HTTP configuration for the Kafka Bridge to listen on port 8080. 2 Set to true to enable CORS. 3 Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression. 4 Comma-separated list of allowed HTTP methods for CORS. Save the configuration file. 3.2. Configuring metrics Enable metrics for the Kafka Bridge by setting the KAFKA_BRIDGE_METRICS_ENABLED environment variable. Prerequisites The Kafka Bridge installation archive is downloaded . Procedure Set the environment variable for enabling metrics to true . Environment variable for enabling metrics KAFKA_BRIDGE_METRICS_ENABLED=true Run the Kafka Bridge script to enable metrics. Running the Kafka Bridge to enable metrics ./bin/kafka_bridge_run.sh --config-file=<path>/application.properties With metrics enabled, you can use GET /metrics with the /metrics endpoint to retrieve Kafka Bridge metrics in Prometheus format. 3.3. 
Configuring distributed tracing Enable distributed tracing to trace messages consumed and produced by the Kafka Bridge, and HTTP requests from client applications. Properties to enable tracing are present in the application.properties file. To enable distributed tracing, do the following: Set the bridge.tracing property value to enable the tracing you want to use. The only possible value is opentelemetry . Set environment variables for tracing. With the default configuration, OpenTelemetry tracing uses OTLP as the exporter protocol. By configuring the OTLP endpoint, you can still use a Jaeger backend instance to get traces. Note Jaeger has supported the OTLP protocol since version 1.35. Older Jaeger versions cannot get traces using the OTLP protocol. OpenTelemetry defines an API specification for collecting tracing data as spans of metrics data. Spans represent a specific operation. A trace is a collection of one or more spans. Traces are generated when the Kafka Bridge does the following: Sends messages from Kafka to consumer HTTP clients Receives messages from producer HTTP clients to send to Kafka Jaeger implements the required APIs and presents visualizations of the trace data in its user interface for analysis. To have end-to-end tracing, you must configure tracing in your HTTP clients. Caution Streams for Apache Kafka no longer supports OpenTracing. If you were previously using OpenTracing with the bridge.tracing=jaeger option, we encourage you to transition to using OpenTelemetry instead. Prerequisites The Kafka Bridge installation archive is downloaded . Procedure Edit the application.properties file provided with the Kafka Bridge installation archive. Use the bridge.tracing property to enable the tracing you want to use. Example configuration to enable OpenTelemetry bridge.tracing=opentelemetry 1 1 The property for enabling OpenTelemetry is uncommented by removing the # at the beginning of the line. With tracing enabled, you initialize tracing when you run the Kafka Bridge script. Save the configuration file. Set the environment variables for tracing. Environment variables for OpenTelemetry OTEL_SERVICE_NAME=my-tracing-service 1 OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 2 1 The name of the OpenTelemetry tracer service. 2 The gRPC-based OTLP endpoint that listens for spans on port 4317. Run the Kafka Bridge script with the property enabled for tracing. Running the Kafka Bridge with OpenTelemetry enabled ./bin/kafka_bridge_run.sh --config-file= <path> /application.properties The internal consumers and producers of the Kafka Bridge are now enabled for tracing. 3.3.1. Specifying tracing systems with OpenTelemetry Instead of the default OTLP tracing system, you can specify other tracing systems that are supported by OpenTelemetry. If you want to use another tracing system with OpenTelemetry, do the following: Add the library of the tracing system to the Kafka classpath. Add the name of the tracing system as an additional exporter environment variable. Additional environment variable when not using OTLP OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2 1 The name of the tracing system. In this example, Zipkin is specified. 2 The endpoint of the specific selected exporter that listens for spans. In this example, a Zipkin endpoint is specified. 3.3.2. 
Supported Span attributes The Kafka Bridge adds, in addition to the standard OpenTelemetry attributes, the following attributes from the OpenTelemetry standard conventions for HTTP to its spans. Attribute key Attribute value peer.service Hardcoded to kafka http.request.method The http method used to make the request url.scheme The URI scheme component url.path The URI path component url.query The URI query component messaging.destination.name The name of the Kafka topic being produced to or read from messaging.system Hardcoded to kafka http.response.status_code ok for http responses between 200 and 300. error for all other status codes Additional resources OpenTelemetry exporter values
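As a quick way to check a running bridge from Java, the following sketch calls the GET /metrics endpoint described earlier by using the JDK's built-in HTTP client. It is only an example: it assumes the bridge is listening on localhost:8080 (the default HTTP configuration shown above) and that metrics are enabled; adjust the host and port for your environment. Any HTTP client, such as curl, works equally well.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BridgeMetricsCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // GET /metrics returns Kafka Bridge metrics in Prometheus format when metrics are enabled.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/metrics"))
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}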
[ "bridge.id=my-bridge http.host=0.0.0.0 http.port=8080 1 http.cors.enabled=true 2 http.cors.allowedOrigins=https://strimzi.io 3 http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH 4", "KAFKA_BRIDGE_METRICS_ENABLED=true", "./bin/kafka_bridge_run.sh --config-file=<path>/application.properties", "bridge.tracing=opentelemetry 1", "OTEL_SERVICE_NAME=my-tracing-service 1 OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 2", "./bin/kafka_bridge_run.sh --config-file= <path> /application.properties", "OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_bridge/assembly-kafka-bridge-config-bridge
Chapter 4. Configuration example for load-balancing with mod_proxy_cluster
Chapter 4. Configuration example for load-balancing with mod_proxy_cluster You can configure JBCS to use the mod_proxy_cluster connector for load-balancing in a Red Hat Enterprise Linux system. When you want to configure a load-balancing solution that uses mod_proxy_cluster , you must perform the following tasks: Set up JBCS as a proxy server . Configure a Tomcat worker node . Define iptables firewall rules . 4.1. Setting up JBCS as a proxy server When you configure JBCS to use mod_proxy_cluster , you must set up JBCS as a proxy server by specifying configuration details in the mod_proxy_cluster.conf file. Procedure Go to the JBCS_HOME /httpd/conf.d/ directory. Create a file named mod_proxy_cluster.conf . Enter the following configuration details: Important As shown in the preceding example, the mod_proxy_cluster package requires that you set the MemManagerFile directive in the conf.d file to cache/mod_proxy_cluster . Note The preceding example shows how to set up JBCS as a proxy server that is listening on localhost . 4.2. Configuring a Tomcat worker node When you configure JBCS to use mod_proxy_cluster , you must configure a Tomcat worker node by adding a Listener element to the server.xml file. Prerequisites You have set up JBCS as a proxy server . Procedure Open the JWS_HOME /tomcat <VERSION> /conf/server.xml file. Add the following Listener element: <Listener className="org.jboss.modcluster.container.catalina.standalone.ModClusterListener" advertise="true"/> 4.3. Defining iptables firewall rules example When you configure JBCS to use mod_proxy_cluster , you must define firewall rules by using iptables . Prerequisites You have configured a Tomcat worker node . Procedure Use iptables to define a set of firewall rules. For example: Note The preceding example shows how to define firewall rules for a cluster node on the 192.168.1.0/24 subnet.
[ "LoadModule proxy_cluster_module modules/mod_proxy_cluster.so LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so LoadModule manager_module modules/mod_manager.so LoadModule advertise_module modules/mod_advertise.so MemManagerFile cache/mod_proxy_cluster <IfModule manager_module> Listen 6666 <VirtualHost *:6666> <Directory /> Require ip 127.0.0.1 </Directory> ServerAdvertise on EnableMCPMReceive <Location /mod_cluster_manager> SetHandler mod_cluster-manager Require ip 127.0.0.1 </Location> </VirtualHost> </IfModule>", "<Listener className=\"org.jboss.modcluster.container.catalina.standalone.ModClusterListener\" advertise=\"true\"/>", "/sbin/iptables -I INPUT 5 -p udp -d 224.0.1.0/24 -j ACCEPT -m comment --comment \"mod_proxy_cluster traffic\" /sbin/iptables -I INPUT 6 -p udp -d 224.0.0.0/4 -j ACCEPT -m comment --comment \"JBoss Cluster traffic\" /sbin/iptables -I INPUT 9 -p udp -s 192.168.1.0/24 -j ACCEPT -m comment --comment \"cluster subnet for inter-node communication\" /sbin/iptables -I INPUT 10 -p tcp -s 192.168.1.0/24 -j ACCEPT -m comment --comment \"cluster subnet for inter-node communication\" /etc/init.d/iptables save" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_connectors_and_load_balancing_guide/mod_proxy_cluster_example
Chapter 1. Understanding API tiers
Chapter 1. Understanding API tiers Important This guidance does not cover layered OpenShift Container Platform offerings. API tiers for bare-metal configurations also apply to virtualized configurations except for any feature that directly interacts with hardware. Those features directly related to hardware have no application operating environment (AOE) compatibility level beyond that which is provided by the hardware vendor. For example, applications that rely on Graphics Processing Units (GPU) features are subject to the AOE compatibility provided by the GPU vendor driver. API tiers in a cloud environment for cloud specific integration points have no API or AOE compatibility level beyond that which is provided by the hosting cloud vendor. For example, APIs that exercise dynamic management of compute, ingress, or storage are dependent upon the underlying API capabilities exposed by the cloud platform. Where a cloud vendor modifies a prerequisite API, Red Hat will provide commercially reasonable efforts to maintain support for the API with the capability presently offered by the cloud infrastructure vendor. Red Hat requests that application developers validate that any behavior they depend on is explicitly defined in the formal API documentation to prevent introducing dependencies on unspecified implementation-specific behavior or dependencies on bugs in a particular implementation of an API. For example, new releases of an ingress router may not be compatible with older releases if an application uses an undocumented API or relies on undefined behavior. 1.1. API tiers All commercially supported APIs, components, and features are associated under one of the following support levels: API tier 1 APIs and application operating environments (AOEs) are stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. API tier 2 APIs and AOEs are stable within a major release for a minimum of 9 months or 3 minor releases from the announcement of deprecation, whichever is longer. API tier 3 This level applies to languages, tools, applications, and optional Operators included with OpenShift Container Platform through Operator Hub. Each component will specify a lifetime during which the API and AOE will be supported. Newer versions of language runtime specific components will attempt to be as API and AOE compatible from minor version to minor version as possible. Minor version to minor version compatibility is not guaranteed, however. Components and developer tools that receive continuous updates through the Operator Hub, referred to as Operators and operands, should be considered API tier 3. Developers should use caution and understand how these components may change with each minor release. Users are encouraged to consult the compatibility guidelines documented by the component. API tier 4 No compatibility is provided. API and AOE can change at any point. These capabilities should not be used by applications needing long-term support. It is common practice for Operators to use custom resource definitions (CRDs) internally to accomplish a task. These objects are not meant for use by actors external to the Operator and are intended to be hidden. 
If any CRD is not meant for use by actors external to the Operator, the operators.operatorframework.io/internal-objects annotation in the Operators ClusterServiceVersion (CSV) should be specified to signal that the corresponding resource is internal use only and the CRD may be explicitly labeled as tier 4. 1.2. Mapping API tiers to API groups For each API tier defined by Red Hat, we provide a mapping table for specific API groups where the upstream communities are committed to maintain forward compatibility. Any API group that does not specify an explicit compatibility level and is not specifically discussed below is assigned API tier 3 by default except for v1alpha1 APIs which are assigned tier 4 by default. 1.2.1. Support for Kubernetes API groups API groups that end with the suffix *.k8s.io or have the form version.<name> with no suffix are governed by the Kubernetes deprecation policy and follow a general mapping between API version exposed and corresponding support tier unless otherwise specified. API version example API tier v1 Tier 1 v1beta1 Tier 2 v1alpha1 Tier 4 1.2.2. Support for OpenShift API groups API groups that end with the suffix *.openshift.io are governed by the OpenShift Container Platform deprecation policy and follow a general mapping between API version exposed and corresponding compatibility level unless otherwise specified. API version example API tier apps.openshift.io/v1 Tier 1 authorization.openshift.io/v1 Tier 1, some tier 1 deprecated build.openshift.io/v1 Tier 1, some tier 1 deprecated config.openshift.io/v1 Tier 1 image.openshift.io/v1 Tier 1 network.openshift.io/v1 Tier 1 network.operator.openshift.io/v1 Tier 1 oauth.openshift.io/v1 Tier 1 imagecontentsourcepolicy.operator.openshift.io/v1alpha1 Tier 1 project.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 route.openshift.io/v1 Tier 1 quota.openshift.io/v1 Tier 1 security.openshift.io/v1 Tier 1 except for RangeAllocation (tier 4) and *Reviews (tier 2) template.openshift.io/v1 Tier 1 console.openshift.io/v1 Tier 2 1.2.3. Support for Monitoring API groups API groups that end with the suffix monitoring.coreos.com have the following mapping: API version example API tier v1 Tier 1 v1alpha1 Tier 1 v1beta1 Tier 1 1.2.4. Support for Operator Lifecycle Manager API groups Operator Lifecycle Manager (OLM) provides APIs that include API groups with the suffix operators.coreos.com . These APIs have the following mapping: API version example API tier v2 Tier 1 v1 Tier 1 v1alpha1 Tier 1 1.3. API deprecation policy OpenShift Container Platform is composed of many components sourced from many upstream communities. It is anticipated that the set of components, the associated API interfaces, and correlated features will evolve over time and might require formal deprecation in order to remove the capability. 1.3.1. Deprecating parts of the API OpenShift Container Platform is a distributed system where multiple components interact with a shared state managed by the cluster control plane through a set of structured APIs. Per Kubernetes conventions, each API presented by OpenShift Container Platform is associated with a group identifier and each API group is independently versioned. Each API group is managed in a distinct upstream community including Kubernetes, Metal3, Multus, Operator Framework, Open Cluster Management, OpenShift itself, and more. 
While each upstream community might define their own unique deprecation policy for a given API group and version, Red Hat normalizes the community specific policy to one of the compatibility levels defined prior based on our integration in and awareness of each upstream community to simplify end-user consumption and support. The deprecation policy and schedule for APIs vary by compatibility level. The deprecation policy covers all elements of the API including: REST resources, also known as API objects Fields of REST resources Annotations on REST resources, excluding version-specific qualifiers Enumerated or constant values Other than the most recent API version in each group, older API versions must be supported after their announced deprecation for a duration of no less than: API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed. The following rules apply to all tier 1 APIs: API elements can only be removed by incrementing the version of the group. API objects must be able to round-trip between API versions without information loss, with the exception of whole REST resources that do not exist in some versions. In cases where equivalent fields do not exist between versions, data will be preserved in the form of annotations during conversion. API versions in a given group can not deprecate until a new API version at least as stable is released, except in cases where the entire API object is being removed. 1.3.2. Deprecating CLI elements Client-facing CLI commands are not versioned in the same way as the API, but are user-facing component systems. The two major ways a user interacts with a CLI are through a command or flag, which is referred to in this context as CLI elements. All CLI elements default to API tier 1 unless otherwise noted or the CLI depends on a lower tier API. Element API tier Generally available (GA) Flags and commands Tier 1 Technology Preview Flags and commands Tier 3 Developer Preview Flags and commands Tier 4 1.3.3. Deprecating an entire component The duration and schedule for deprecating an entire component maps directly to the duration associated with the highest API tier of an API exposed by that component. For example, a component that surfaced APIs with tier 1 and 2 could not be removed until the tier 1 deprecation schedule was met. API tier Duration Tier 1 Stable within a major release. They may be deprecated within a major release, but they will not be removed until a subsequent major release. Tier 2 9 months or 3 releases from the announcement of deprecation, whichever is longer. Tier 3 See the component-specific schedule. Tier 4 None. No compatibility is guaranteed.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/api_overview/understanding-api-support-tiers
Chapter 7. MTR 1.2.2
Chapter 7. MTR 1.2.2 7.1. Known issues For a complete list of all known issues, see the list of MTR 1.2.2 known issues in Jira. 7.2. Resolved issues CVE-2023-44487 netty-codec-http2: HTTP/2: Multiple HTTP/2 enabled web servers are vulnerable to a DDoS attack (Rapid Reset Attack) A flaw was found in handling multiplexed streams in the HTTP/2 protocol, which was utilized by Migration Toolkit for Runtimes (MTR). A client could repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates additional workload for the server in terms of setting up and dismantling streams, while avoiding any server-side limitations on the maximum number of active streams per connection, resulting in a denial of service due to server resource consumption. (WINDUP-4072) For more details, see (CVE-2023-44487) CVE-2023-37460 plexus-archiver: Arbitrary File Creation in AbstractUnArchiver A flaw was found in the Plexus Archiver, which was utilized by MTR. When AbstractUnArchiver is used to extract an archive, this flaw could lead to arbitrary file creation and possible remote code execution (RCE). Destination directory verification is bypassed if the archive contains an entry in the destination directory that is a symbolic link whose target does not exist. The plexus-archiver is a test-scoped artifact, so it is not included in any of the MTR distributions. (WINDUP-4053) For more details, see (CVE-2023-37460) EAP 7.3 and EAP 7.4 rules with target EAP 7.0 and above This MTR release makes a correction to some rules to support migrating to EAP 7.3 and above, to ensure the rules are ignored if the target is EAP 7.2 or below. (WINDUPRULE-1038)
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/release_notes/mtr_1_2_2
22.2. Defining the Type of Rollover
22.2. Defining the Type of Rollover Disaster recovery, as the introduction says, is the process for transitioning from one system to another system with as little interruption of service as possible. That's called a rollover , and there are three different ways of doing a rollover: A hot rollover means that the infrastructure is completely mirrored at another site and that the backup site is always up and current with the primary site. This requires only a few adjustments to switch operations from the primary to the backup. A warm rollover means that all of the elements for the backup site are in place (adequate network connections, all required applications and hardware) but the system is not actively running or necessarily configured. This can require some extra time to configure the machines and get the system running. A cold rollover means that a site is available but there are few resources immediately available to set it up. The obvious difference in the types of rollover is the time and expense necessary to set up the backup site. Hot and warm sites have higher initial expenditures to set up and run. A mix of rollover types can be used, depending on the specific disaster scenario being planned. For example, a rollover plan for the loss of a single server could use a hot rollover easily and relatively cheaply by creating and keeping a virtual machine copy of the Directory Server instance which can be brought online within minutes. It would not even require keeping the virtual machine in a separate facility or network. On the other hand, a cold rollover could be planned for the loss of an entire data center or office. Match the rollover process to the severity of the disaster scenario, your budget and available resources, and the likelihood of encountering problems.
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/defining-rollover
Chapter 1. Overview of model registries
Chapter 1. Overview of model registries Important Model registry is currently available in Red Hat OpenShift AI as a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . A model registry is an important component in the lifecycle of an artificial intelligence/machine learning (AI/ML) model, and a vital part of any machine learning operations (MLOps) platform or ML workflow. A model registry acts as a central repository, holding metadata related to machine learning models from inception to deployment. This metadata ranges from high-level information like the deployment environment and project origins, to intricate details like training hyperparameters, performance metrics, and deployment events. A model registry acts as a bridge between model experimentation and serving, offering a secure, collaborative metadata store interface for stakeholders of the ML lifecycle. Model registries provide a structured and organized way to store, share, version, deploy, and track models. To use model registries in OpenShift AI, an OpenShift cluster administrator must configure the model registry component. For more information, see Configuring the model registry component . After the model registry component is configured, an OpenShift AI administrator can create model registries in OpenShift AI and grant model registry access to the data scientists that will work with them. For more information, see Managing model registries . Data scientists with access to a model registry can store, share, version, deploy, and track models using the model registry feature. For more information, see Working with model registries .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_model_registries/overview-of-model-registries_working-model-registry
Chapter 1. Introduction to AMQ Broker on OpenShift Container Platform
Chapter 1. Introduction to AMQ Broker on OpenShift Container Platform Red Hat AMQ Broker 7.10 is available as a containerized image for use with OpenShift Container Platform (OCP) 4.12, 4.13, 4.14 or 4.15. AMQ Broker is based on Apache ActiveMQ Artemis. It provides a message broker that is JMS-compliant. After you have set up the initial broker pod, you can quickly deploy duplicates by using OpenShift Container Platform features. 1.1. Version compatibility and support For details about OpenShift Container Platform image version compatibility, see: OpenShift Container Platform 4.x Tested Integrations Note All deployments of AMQ Broker on OpenShift Container Platform now use RHEL 8 based images. 1.2. Unsupported features Master-slave-based high availability High availability (HA) achieved by configuring master and slave pairs is not supported. Instead, AMQ Broker uses the HA capabilities provided in OpenShift Container Platform. External clients cannot use the topology information provided by AMQ Broker When an AMQ Core Protocol JMS Client or an AMQ JMS Client connects to a broker in an OpenShift Container Platform cluster, the broker can send the client the IP address and port information for each of the other brokers in the cluster, which serves as a failover list for clients if the connection to the current broker is lost. The IP address provided for each broker is an internal IP address, which is not accessible to clients that are external to the OpenShift Container Platform cluster. To prevent external clients from trying to connect to a broker using an internal IP address, set the following configuration in the URI used by the client to initially connect to a broker. Client Configuration AMQ Core Protocol JMS Client useTopologyForLoadBalancing=false AMQ JMS Client failover.amqpOpenServerListAction=IGNORE 1.3. Document conventions This document uses the following conventions for the sudo command, file paths, and replaceable values. The sudo command In this document, sudo is used for any command that requires root privileges. You should always exercise caution when using sudo , as any changes can affect the entire system. For more information about using sudo , see The sudo Command . About the use of file paths in this document In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/... ). If you are using Microsoft Windows, you should use the equivalent Microsoft Windows paths (for example, C:\Users\... ). Replaceable values This document sometimes uses replaceable values that you must replace with values specific to your environment. Replaceable values are lowercase, enclosed by angle brackets ( < > ), and are styled using italics and monospace font. Multiple words are separated by underscores ( _ ) . For example, in the following command, replace <project_name> with your own project name. USD oc new-project <project_name>
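As an illustration only (the host name and port below are placeholders, not values from this guide), the settings in the table above are ordinary connection-URI parameters that an external client might set as follows:

# AMQ Core Protocol JMS client: do not use the cluster topology for load balancing
tcp://broker.example.com:443?useTopologyForLoadBalancing=false
# AMQ JMS (AMQP) client: ignore the failover list advertised by the broker
failover:(amqps://broker.example.com:443)?failover.amqpOpenServerListAction=IGNORE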
[ "oc new-project <project_name>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/deploying_amq_broker_on_openshift/con_br-intro-to-broker-on-ocp-broker-ocp
Using the Streams for Apache Kafka Bridge
Using the Streams for Apache Kafka Bridge Red Hat Streams for Apache Kafka 2.9 Use the Streams for Apache Kafka Bridge to connect with a Kafka cluster
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_the_streams_for_apache_kafka_bridge/index
Support
Support Red Hat Advanced Cluster Security for Kubernetes 4.7 Getting support for Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
[ "export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" -p \"USDROX_PASSWORD\" central debug download-diagnostics", "export ROX_API_TOKEN= <api_token>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug download-diagnostics" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html-single/support/index
Chapter 1. OpenShift Data Foundation deployed using dynamic devices
Chapter 1. OpenShift Data Foundation deployed using dynamic devices 1.1. OpenShift Data Foundation deployed on AWS To replace an operational node, see: Section 1.1.1, "Replacing an operational AWS node on user-provisioned infrastructure" . Section 1.1.2, "Replacing an operational AWS node on installer-provisioned infrastructure" . To replace a failed node, see: Section 1.1.3, "Replacing a failed AWS node on user-provisioned infrastructure" . Section 1.1.4, "Replacing a failed AWS node on installer-provisioned infrastructure" . 1.1.1. Replacing an operational AWS node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Note When replacing an AWS node on user-provisioned infrastructure, the new node needs to be created in the same AWS zone as the original node. Procedure Identify the node that you need to replace. Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Delete the node: Create a new Amazon Web Service (AWS) machine instance with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new AWS machine instance. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.2. Replacing an operational AWS node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. 
Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.3. Replacing a failed AWS node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the Amazon Web Service (AWS) machine instance of the node that you need to replace. Log in to AWS, and terminate the AWS machine instance that you identified. Create a new AWS machine instance with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform node using the new AWS machine instance. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. 
Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.1.4. Replacing a failed AWS node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created, wait for new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Amazon Web Service (AWS) instance is not removed automatically, terminate the instance from the AWS console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2. OpenShift Data Foundation deployed on VMware To replace an operational node, see: Section 1.2.1, "Replacing an operational VMware node on user-provisioned infrastructure" . Section 1.2.2, "Replacing an operational VMware node on installer-provisioned infrastructure" . To replace a failed node, see: Section 1.2.3, "Replacing a failed VMware node on user-provisioned infrastructure" . Section 1.2.4, "Replacing a failed VMware node on installer-provisioned infrastructure" . 1.2.1. Replacing an operational VMware node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node and its Virtual Machine (VM) that you need replace. Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. 
Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Delete the node: Log in to VMware vSphere, and terminate the VM that you identified: Important Delete the VM only from the inventory and not from the disk. Create a new VM on VMware vSphere with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.2. Replacing an operational VMware node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . 
Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.3. Replacing a failed VMware node on user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node and its Virtual Machine (VM) that you need to replace. Delete the node: <node_name> Specify the name of node that you need to replace. Log in to VMware vSphere and terminate the VM that you identified. Important Delete the VM only from the inventory and not from the disk. Create a new VM on VMware vSphere with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.2.4. Replacing a failed VMware node on installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. 
Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Virtual Machine (VM) is not removed automatically, terminate the VM from VMware vSphere. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.3. OpenShift Data Foundation deployed on Microsoft Azure 1.3.1. Replacing operational nodes on Azure installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Besides the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads -> Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. 
For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.3.2. Replacing failed nodes on Azure installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Azure instance is not removed automatically, terminate the instance from the Azure console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.4. OpenShift Data Foundation deployed on Google cloud 1.4.1. Replacing operational nodes on Google Cloud installer-provisioned infrastructure Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the node that needs to be replaced. Take a note of its Machine Name . Mark the node as unschedulable using the following command: Drain the node using the following command: Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Machines . Search for the required machine. Besides the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity may take at least 5-10 minutes or more. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) 
Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 1.4.2. Replacing failed nodes on Google Cloud installer-provisioned infrastructure Procedure Log in to OpenShift Web Console and click Compute Nodes . Identify the faulty node and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity may take at least 5-10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when the new node is labeled and functional. Click Compute Nodes , confirm if the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the web user interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From the command line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Google Cloud instance is not removed automatically, terminate the instance from Google Cloud console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support .
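Taken together, the command-line steps used throughout these procedures follow the same pattern on every platform. As a consolidated sketch (using the same <node_name>, <new_node_name>, and <certificate_name> placeholders as the procedures above, with the web-console steps performed in between), replacing an operational node on user-provisioned infrastructure runs approximately the following:

oc adm cordon <node_name>
oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets
oc delete nodes <node_name>
# after the replacement node is created and joins the cluster, approve its pending CSRs:
oc get csr
oc adm certificate approve <certificate_name>
# label the new node for OpenShift Data Foundation:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
# verification:
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
oc get pods -o wide -n openshift-storage | egrep -i <new_node_name> | egrep osd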
[ "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete nodes <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete nodes <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc delete nodes <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep 
cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_nodes/openshift_data_foundation_deployed_using_dynamic_devices
1.7. Post-installation Script
1.7. Post-installation Script You have the option of adding commands to run on the system once the installation is complete. This section must be at the end of the kickstart file and must start with the %post command. This section is useful for functions such as installing additional software and configuring an additional nameserver. Note If you configured the network with static IP information, including a nameserver, you can access the network and resolve IP addresses in the %post section. If you configured the network for DHCP, the /etc/resolv.conf file has not been completed when the installation executes the %post section. You can access the network, but you cannot resolve IP addresses. Thus, if you are using DHCP, you must specify IP addresses in the %post section. Note The post-install script is run in a chroot environment; therefore, performing tasks such as copying scripts or RPMs from the installation media does not work. --nochroot Allows you to specify commands that you would like to run outside of the chroot environment. The following example copies the file /etc/resolv.conf to the file system that was just installed. --interpreter /usr/bin/python Allows you to specify a different scripting language, such as Python. Replace /usr/bin/python with the scripting language of your choice. 1.7.1. Examples Turn services on and off: Run a script named runme from an NFS share: Note NFS file locking is not supported while in kickstart mode, therefore -o nolock is required when mounting an NFS mount. Add a user to the system:
[ "%post --nochroot cp /etc/resolv.conf /mnt/sysimage/etc/resolv.conf", "/sbin/chkconfig --level 345 telnet off /sbin/chkconfig --level 345 finger off /sbin/chkconfig --level 345 lpd off /sbin/chkconfig --level 345 httpd on", "mkdir /mnt/temp mount -o nolock 10.10.0.2:/usr/new-machines /mnt/temp open -s -w -- /mnt/temp/runme umount /mnt/temp", "/usr/sbin/useradd bob /usr/bin/chfn -f \"Bob Smith\" bob /usr/sbin/usermod -p 'kjdfUSD04930FTH/ ' bob" ]
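As a further illustration of the %post section described above (the nameserver address below is a placeholder, not a value from this guide), a post-installation script that configures an additional nameserver on a system installed with static IP information might look like the following:

%post
# Append an additional nameserver; with a static network configuration,
# /etc/resolv.conf is already usable at this point in the installation.
echo "nameserver 10.10.0.1" >> /etc/resolv.conf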
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Kickstart_Installations-Post_installation_Script
Recommended Practices for Container Development
Recommended Practices for Container Development Red Hat Enterprise Linux Atomic Host 7 Recommended Practices Guide for Container Development Red Hat Atomic Host Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/recommended_practices_for_container_development/index
Chapter 5. SelfSubjectRulesReview [authorization.openshift.io/v1]
Chapter 5. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object SelfSubjectRulesReviewSpec adds information about how to conduct the check status object SubjectRulesReviewStatus is contains the result of a rules check 5.1.1. .spec Description SelfSubjectRulesReviewSpec adds information about how to conduct the check Type object Required scopes Property Type Description scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil means "use the scopes on this request". 5.1.2. .status Description SubjectRulesReviewStatus is contains the result of a rules check Type object Required rules Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It means some error happened during evaluation that may have prevented additional rules from being populated. rules array Rules is the list of rules (no particular sort) that are allowed for the subject rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 5.1.3. .status.rules Description Rules is the list of rules (no particular sort) that are allowed for the subject Type array 5.1.4. .status.rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path This name is intentionally different than the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. 
resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 5.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/selfsubjectrulesreviews Table 5.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 5.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a SelfSubjectRulesReview Table 5.3. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 5.4. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty
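As a hedged illustration of the create endpoint above (the namespace is a placeholder, and the rules returned in status depend entirely on the caller's permissions), a review that uses the caller's full, unscoped permissions can be posted with oc; an empty scopes list requests the unscoped permissions, as described in the spec:

oc create -n my-project -o yaml -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: SelfSubjectRulesReview
spec:
  scopes: []
EOF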
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/authorization_apis/selfsubjectrulesreview-authorization-openshift-io-v1
probe::tty.init
probe::tty.init Name probe::tty.init - Called when a tty is being initialized Synopsis Values driver_name the driver name name the driver .dev_name name module the module name
[ "tty.init" ]
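As a minimal usage sketch (the output format is arbitrary and not part of the tapset), the probe and the values listed above can be exercised with a stap one-liner that prints the driver, device name, and module each time a tty is initialized:

stap -e 'probe tty.init { printf("driver=%s name=%s module=%s\n", driver_name, name, module) }'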
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tty-init
Managing secrets with the Key Manager service
Managing secrets with the Key Manager service Red Hat OpenStack Platform 17.1 Integrating the Key Manager service (barbican) with your OpenStack deployment. OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_secrets_with_the_key_manager_service/index
Appendix B. Using KVM Virtualization on Multiple Architectures
Appendix B. Using KVM Virtualization on Multiple Architectures By default, KVM virtualization on Red Hat Enterprise Linux 7 is compatible with the AMD64 and Intel 64 architectures. However, starting with Red Hat Enterprise Linux 7.5, KVM virtualization is also supported on the following architectures, thanks to the introduction of the kernel-alt packages: IBM POWER IBM Z ARM systems (not supported) Note that when using virtualization on these architectures, the installation, usage, and feature support differ from AMD64 and Intel 64 in certain respects. For more information, see the sections below: B.1. Using KVM Virtualization on IBM POWER Systems Starting with Red Hat Enterprise Linux 7.5, KVM virtualization is supported on IBM POWER8 Systems and IBM POWER9 systems. However, IBM POWER8 does not use kernel-alt , which means that these two architectures differ in certain aspects. Installation To install KVM virtualization on Red Hat Enterprise Linux 7 for IBM POWER 8 and POWER9 Systems: Install the host system from the bootable image on the Customer Portal: IBM POWER8 IBM POWER9 For detailed instructions, see the Red Hat Enterprise Linux 7 Installation Guide . Ensure that your host system meets the hypervisor requirements: Verify that you have the correct machine type: The output of this command must include the PowerNV entry, which indicates that you are running on a supported PowerNV machine type: Load the KVM-HV kernel module: Verify that the KVM-HV kernel module is loaded: If KVM-HV was loaded successfully, the output of this command includes kvm_hv . Install the qemu-kvm-ma package in addition to other virtualization packages described in Chapter 2, Installing the Virtualization Packages . Architecture Specifics KVM virtualization on Red Hat Enterprise Linux 7.5 for IBM POWER differs from KVM on AMD64 and Intel 64 systems in the following: The recommended minimum memory allocation for a guest on an IBM POWER host is 2GB RAM . The SPICE protocol is not supported on IBM POWER systems. To display the graphical output of a guest, use the VNC protocol. In addition, only the following virtual graphics card devices are supported: vga - only supported in -vga std mode and not in -vga cirrus mode virtio-vga virtio-gpu The following virtualization features are disabled on AMD64 and Intel 64 hosts, but work on IBM POWER. However, they are not supported by Red Hat, and therefore not recommended for use: I/O threads SMBIOS configuration is not available. POWER8 guests, including compatibility mode guests, may fail to start with an error similar to: This is significantly more likely to occur on guests that use Red Hat Enterprise Linux 7.3 or prior. To fix this problem, increase the CMA memory pool available for the guest's hashed page table (HPT) by adding kvm_cma_resv_ratio= memory to the host's kernel command line, where memory is the percentage of host memory that should be reserved for the CMA pool (defaults to 5). Transparent huge pages (THPs) currently do not provide any notable performance benefits on IBM POWER8 guests Also note that the sizes of static huge pages on IBM POWER8 systems are 16MiB and 16GiB, as opposed to 2MiB and 1GiB on AMD64 and Intel 64 and on IBM POWER9. As a consequence, migrating a guest from an IBM POWER8 host to an IBM POWER9 host fails if the guest is configured with static huge pages. In addition, to be able to use static huge pages or THPs on IBM POWER8 guests, you must first set up huge pages on the host . 
A number of virtual peripheral devices that are supported on AMD64 and Intel 64 systems are not supported on IBM POWER systems, or a different device is supported as a replacement: Devices used for PCI-E hierarchy, including the ioh3420 and xio3130-downstream devices, are not supported. This functionality is replaced by multiple independent PCI root bridges, provided by the spapr-pci-host-bridge device. UHCI and EHCI PCI controllers are not supported. Use OHCI and XHCI controllers instead. IDE devices, including the virtual IDE CD-ROM ( ide-cd ) and the virtual IDE disk ( ide-hd ), are not supported. Use the virtio-scsi and virtio-blk devices instead. Emulated PCI NICs ( rtl8139 ) are not supported. Use the virtio-net device instead. Sound devices, including intel-hda , hda-output , and AC97 , are not supported. USB redirection devices, including usb-redir and usb-tablet , are not supported. The kvm-clock service does not have to be configured for time management on IBM POWER systems. The pvpanic device is not supported on IBM POWER systems. However, an equivalent functionality is available and activated on this architecture by default. To enable it on a guest, use the <on_crash> configuration element with the preserve value. In addition, make sure to remove the <panic> element from the <devices> section, as its presence can lead to the guest failing to boot on IBM POWER systems. On IBM POWER8 systems, the host machine must run in single-threaded mode to support guests. This is automatically configured if the qemu-kvm-ma packages are installed. However, guests running on single-threaded hosts can still use multiple threads. When an IBM POWER virtual machine (VM) running on a RHEL 7 host is configured with a NUMA node that uses zero memory ( memory='0' ), the VM does not work correctly. As a consequence, Red Hat does not support IBM POWER VMs with zero-memory NUMA nodes on RHEL 7
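Regarding the kvm_cma_resv_ratio kernel option mentioned earlier for HPT allocation failures: as an illustrative sketch only (the 10 percent value and the use of grubby are assumptions, not recommendations from this guide), the option can be added to the host kernel command line as follows:

# Reserve 10% of host memory for the CMA pool used for guest hashed page tables
# (the default is 5%). The change takes effect after the host is rebooted.
grubby --update-kernel=ALL --args="kvm_cma_resv_ratio=10"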
[ "grep ^platform /proc/cpuinfo", "platform : PowerNV", "modprobe kvm_hv", "lsmod | grep kvm", "qemu-kvm: Failed to allocate KVM HPT of order 33 (try smaller maxmem?): Cannot allocate memory" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/appe-kvm_on_multiarch
macro::json_output_array_string_value
macro::json_output_array_string_value Name macro::json_output_array_string_value - Output a string value for metric in an array. Synopsis Arguments array_name The name of the array. array_index The array index (as a string) indicating where to store the string value. metric_name The name of the string metric. value The string value to output. Description The json_output_array_string_value macro is designed to be called from the 'json_data' probe in the user's script to output a metric's string value that is in an array. This metric should have been added with json_add_array_string_metric .
[ "@json_output_array_string_value(array_name,array_index,metric_name,value)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-json-output-array-string-value
Chapter 3. BareMetalHost [metal3.io/v1alpha1]
Chapter 3. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object BareMetalHostSpec defines the desired state of BareMetalHost status object BareMetalHostStatus defines the observed state of BareMetalHost 3.1.1. .spec Description BareMetalHostSpec defines the desired state of BareMetalHost Type object Required online Property Type Description architecture string CPU architecture of the host, e.g. "x86_64" or "aarch64". If unset, eventually populated by inspection. automatedCleaningMode string When set to disabled, automated cleaning will be avoided during provisioning and deprovisioning. bmc object How do we connect to the BMC? bootMACAddress string Which MAC address will PXE boot? This is optional for some types, but required for libvirt VMs driven by vbmc. bootMode string Select the method of initializing the hardware during boot. Defaults to UEFI. consumerRef object ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". customDeploy object A custom deploy procedure. description string Description is a human-entered text used to help identify the host externallyProvisioned boolean ExternallyProvisioned means something else is managing the image running on the host and the operator should only manage the power status and hardware inventory inspection. If the Image field is filled in, this field is ignored. firmware object BIOS configuration for bare metal server hardwareProfile string What is the name of the hardware profile for this host? It should only be necessary to set this when inspection cannot automatically determine the profile. image object Image holds the details of the image to be provisioned. metaData object MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. networkData object NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. online boolean Should the server be online? preprovisioningNetworkDataName string PreprovisioningNetworkDataName is the name of the Secret in the local namespace containing network configuration (e.g content of network_data.json) which is passed to the preprovisioning image, and to the Config Drive if not overridden by specifying NetworkData. raid object RAID configuration for bare metal server rootDeviceHints object Provide guidance about how to choose the device for the image being provisioned. taints array Taints is the full, authoritative list of taints to apply to the corresponding Machine. 
This list will overwrite any modifications made to the Machine on an ongoing basis. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. userData object UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. 3.1.2. .spec.bmc Description How do we connect to the BMC? Type object Required address credentialsName Property Type Description address string Address holds the URL for accessing the controller on the network. credentialsName string The name of the secret containing the BMC credentials (requires keys "username" and "password"). disableCertificateVerification boolean DisableCertificateVerification disables verification of server certificates when using HTTPS to connect to the BMC. This is required when the server certificate is self-signed, but is insecure because it allows a man-in-the-middle to intercept the connection. 3.1.3. .spec.consumerRef Description ConsumerRef can be used to store information about something that is using a host. When it is not empty, the host is considered "in use". Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 3.1.4. .spec.customDeploy Description A custom deploy procedure. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.5. .spec.firmware Description BIOS configuration for bare metal server Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.6. 
.spec.image Description Image holds the details of the image to be provisioned. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image. e.g md5, sha256, sha512 format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.7. .spec.metaData Description MetaData holds the reference to the Secret containing host metadata (e.g. meta_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.8. .spec.networkData Description NetworkData holds the reference to the Secret containing network configuration (e.g content of network_data.json) which is passed to the Config Drive. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.9. .spec.raid Description RAID configuration for bare metal server Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume. You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.10. .spec.rootDeviceHints Description Provide guidance about how to choose the device for the image being provisioned. Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. 
The hint must match the actual value exactly. 3.1.11. .spec.taints Description Taints is the full, authoritative list of taints to apply to the corresponding Machine. This list will overwrite any modifications made to the Machine on an ongoing basis. Type array 3.1.12. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required effect key Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. key string Required. The taint key to be applied to a node. timeAdded string TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 3.1.13. .spec.userData Description UserData holds the reference to the Secret containing the user data to be passed to the host before it boots. Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.14. .status Description BareMetalHostStatus defines the observed state of BareMetalHost Type object Required errorCount errorMessage hardwareProfile operationalStatus poweredOn provisioning Property Type Description errorCount integer ErrorCount records how many times the host has encoutered an error since the last successful operation errorMessage string the last error message reported by the provisioning subsystem errorType string ErrorType indicates the type of failure encountered when the OperationalStatus is OperationalStatusError goodCredentials object the last credentials we were able to validate as working hardware object The hardware discovered to exist on the host. hardwareProfile string The name of the profile matching the hardware details. lastUpdated string LastUpdated identifies when this status was last observed. operationHistory object OperationHistory holds information about operations performed on this host. operationalStatus string OperationalStatus holds the status of the host poweredOn boolean indicator for whether or not the host is powered on provisioning object Information tracked by the provisioner. triedCredentials object the last credentials we sent to the provisioning backend 3.1.15. .status.goodCredentials Description the last credentials we were able to validate as working Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.16. .status.goodCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.1.17. .status.hardware Description The hardware discovered to exist on the host. Type object Property Type Description cpu object CPU describes one processor on the host. firmware object Firmware describes the firmware on the host. hostname string nics array nics[] object NIC describes one network interface on the host. ramMebibytes integer storage array storage[] object Storage describes one storage device (disk, SSD, etc.) on the host. 
systemVendor object HardwareSystemVendor stores details about the whole hardware system. 3.1.18. .status.hardware.cpu Description CPU describes one processor on the host. Type object Property Type Description arch string clockMegahertz number ClockSpeed is a clock speed in MHz count integer flags array (string) model string 3.1.19. .status.hardware.firmware Description Firmware describes the firmware on the host. Type object Property Type Description bios object The BIOS for this firmware 3.1.20. .status.hardware.firmware.bios Description The BIOS for this firmware Type object Property Type Description date string The release/build date for this BIOS vendor string The vendor name for this BIOS version string The version of the BIOS 3.1.21. .status.hardware.nics Description Type array 3.1.22. .status.hardware.nics[] Description NIC describes one network interface on the host. Type object Property Type Description ip string The IP address of the interface. This will be an IPv4 or IPv6 address if one is present. If both IPv4 and IPv6 addresses are present in a dual-stack environment, two nics will be output, one with each IP. mac string The device MAC address model string The vendor and product IDs of the NIC, e.g. "0x8086 0x1572" name string The name of the network interface, e.g. "en0" pxe boolean Whether the NIC is PXE Bootable speedGbps integer The speed of the device in Gigabits per second vlanId integer The untagged VLAN ID vlans array The VLANs available vlans[] object VLAN represents the name and ID of a VLAN 3.1.23. .status.hardware.nics[].vlans Description The VLANs available Type array 3.1.24. .status.hardware.nics[].vlans[] Description VLAN represents the name and ID of a VLAN Type object Property Type Description id integer VLANID is a 12-bit 802.1Q VLAN identifier name string 3.1.25. .status.hardware.storage Description Type array 3.1.26. .status.hardware.storage[] Description Storage describes one storage device (disk, SSD, etc.) on the host. Type object Property Type Description hctl string The SCSI location of the device model string Hardware model name string The Linux device name of the disk, e.g. "/dev/sda". Note that this may not be stable across reboots. rotational boolean Whether this disk represents rotational storage. This field is not recommended for usage, please prefer using 'Type' field instead, this field will be deprecated eventually. serialNumber string The serial number of the device sizeBytes integer The size of the disk in Bytes type string Device type, one of: HDD, SSD, NVME. vendor string The name of the vendor of the device wwn string The WWN of the device wwnVendorExtension string The WWN Vendor extension of the device wwnWithExtension string The WWN with the extension 3.1.27. .status.hardware.systemVendor Description HardwareSystemVendor stores details about the whole hardware system. Type object Property Type Description manufacturer string productName string serialNumber string 3.1.28. .status.operationHistory Description OperationHistory holds information about operations performed on this host. Type object Property Type Description deprovision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. inspect object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. provision object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 
register object OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. 3.1.29. .status.operationHistory.deprovision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.30. .status.operationHistory.inspect Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.31. .status.operationHistory.provision Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.32. .status.operationHistory.register Description OperationMetric contains metadata about an operation (inspection, provisioning, etc.) used for tracking metrics. Type object Property Type Description end `` start `` 3.1.33. .status.provisioning Description Information tracked by the provisioner. Type object Required ID state Property Type Description ID string The machine's UUID from the underlying provisioning tool bootMode string BootMode indicates the boot mode used to provision the node customDeploy object Custom deploy procedure applied to the host. firmware object The Bios set by the user image object Image holds the details of the last image successfully provisioned to the host. raid object The Raid set by the user rootDeviceHints object The RootDevicehints set by the user state string An indicator of what the provisioner is doing with the host. 3.1.34. .status.provisioning.customDeploy Description Custom deploy procedure applied to the host. Type object Required method Property Type Description method string Custom deploy method name. This name is specific to the deploy ramdisk used. If you don't have a custom deploy ramdisk, you shouldn't use CustomDeploy. 3.1.35. .status.provisioning.firmware Description The Bios set by the user Type object Property Type Description simultaneousMultithreadingEnabled boolean Allows a single physical processor core to appear as several logical processors. This supports following options: true, false. sriovEnabled boolean SR-IOV support enables a hypervisor to create virtual instances of a PCI-express device, potentially increasing performance. This supports following options: true, false. virtualizationEnabled boolean Supports the virtualization of platform hardware. This supports following options: true, false. 3.1.36. .status.provisioning.image Description Image holds the details of the last image successfully provisioned to the host. Type object Required url Property Type Description checksum string Checksum is the checksum for the image. checksumType string ChecksumType is the checksum algorithm for the image. e.g md5, sha256, sha512 format string DiskFormat contains the format of the image (raw, qcow2, ... ). Needs to be set to raw for raw images streaming. Note live-iso means an iso referenced by the url will be live-booted and not deployed to disk, and in this case the checksum options are not required and if specified will be ignored. url string URL is a location of an image to deploy. 3.1.37. .status.provisioning.raid Description The Raid set by the user Type object Property Type Description hardwareRAIDVolumes `` The list of logical disks for hardware RAID, if rootDeviceHints isn't used, first volume is root volume.
You can set the value of this field to [] to clear all the hardware RAID configurations. softwareRAIDVolumes `` The list of logical disks for software RAID, if rootDeviceHints isn't used, first volume is root volume. If HardwareRAIDVolumes is set this item will be invalid. The number of created Software RAID devices must be 1 or 2. If there is only one Software RAID device, it has to be a RAID-1. If there are two, the first one has to be a RAID-1, while the RAID level for the second one can be 0, 1, or 1+0. As the first RAID device will be the deployment device, enforcing a RAID-1 reduces the risk of ending up with a non-booting node in case of a disk failure. Software RAID will always be deleted. 3.1.38. .status.provisioning.rootDeviceHints Description The RootDevicehints set by the user Type object Property Type Description deviceName string A Linux device name like "/dev/vda", or a by-path link to it like "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0". The hint must match the actual value exactly. hctl string A SCSI bus address like 0:0:0:0. The hint must match the actual value exactly. minSizeGigabytes integer The minimum size of the device in Gigabytes. model string A vendor-specific device identifier. The hint can be a substring of the actual value. rotational boolean True if the device should use spinning media, false otherwise. serialNumber string Device serial number. The hint must match the actual value exactly. vendor string The name of the vendor or manufacturer of the device. The hint can be a substring of the actual value. wwn string Unique storage identifier. The hint must match the actual value exactly. wwnVendorExtension string Unique vendor storage identifier. The hint must match the actual value exactly. wwnWithExtension string Unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. 3.1.39. .status.triedCredentials Description the last credentials we sent to the provisioning backend Type object Property Type Description credentials object SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace credentialsVersion string 3.1.40. .status.triedCredentials.credentials Description SecretReference represents a Secret Reference. It has enough information to retrieve secret in any namespace Type object Property Type Description name string name is unique within a namespace to reference a secret resource. namespace string namespace defines the space within which the secret name must be unique. 3.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/baremetalhosts GET : list objects of kind BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts DELETE : delete collection of BareMetalHost GET : list objects of kind BareMetalHost POST : create a BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} DELETE : delete a BareMetalHost GET : read the specified BareMetalHost PATCH : partially update the specified BareMetalHost PUT : replace the specified BareMetalHost /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status GET : read status of the specified BareMetalHost PATCH : partially update status of the specified BareMetalHost PUT : replace status of the specified BareMetalHost 3.2.1. /apis/metal3.io/v1alpha1/baremetalhosts Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind BareMetalHost Table 3.2. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty 3.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of BareMetalHost Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind BareMetalHost Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Reponse body 200 - OK BareMetalHostList schema 401 - Unauthorized Empty HTTP method POST Description create a BareMetalHost Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body BareMetalHost schema Table 3.11. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 202 - Accepted BareMetalHost schema 401 - Unauthorized Empty 3.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the BareMetalHost namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a BareMetalHost Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified BareMetalHost Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified BareMetalHost Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified BareMetalHost Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body BareMetalHost schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty 3.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the BareMetalHost namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified BareMetalHost Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified BareMetalHost Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.30. Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified BareMetalHost Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body BareMetalHost schema Table 3.34. HTTP responses HTTP code Reponse body 200 - OK BareMetalHost schema 201 - Created BareMetalHost schema 401 - Unauthorized Empty
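To show how the spec fields documented above fit together in practice, here is a minimal, hypothetical example of registering a host. Every name, address, credential, and the openshift-machine-api namespace are placeholders rather than values taken from this reference, and only fields listed in the spec tables above are used.

# Create the BMC credentials Secret (keys "username" and "password" are
# required by spec.bmc.credentialsName) and a BareMetalHost that uses it.
# All values below are placeholders.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: worker-2-bmc-secret
  namespace: openshift-machine-api
type: Opaque
stringData:
  username: admin
  password: changeme
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-2
  namespace: openshift-machine-api
spec:
  online: true                          # the only required spec field
  bootMode: UEFI
  bootMACAddress: "00:11:22:33:44:55"
  bmc:
    address: ipmi://192.0.2.10          # example address; the scheme depends on the BMC
    credentialsName: worker-2-bmc-secret
  rootDeviceHints:
    deviceName: /dev/sda
  image:
    url: http://example.com/images/host-image.qcow2
    checksum: http://example.com/images/host-image.qcow2.md5sum
    checksumType: md5
    format: qcow2
EOF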
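The namespaced API endpoints listed in section 3.2 map directly onto ordinary oc commands; the host name and namespace below are the same placeholders as in the previous example.

# List objects of kind BareMetalHost in a namespace
# (GET /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts).
oc get baremetalhosts -n openshift-machine-api

# Read one BareMetalHost, including its status block
# (GET /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}).
oc get baremetalhost worker-2 -n openshift-machine-api -o yaml

# Partially update the spec, for example to power the host off
# (PATCH /apis/metal3.io/v1alpha1/namespaces/{namespace}/baremetalhosts/{name}).
oc patch baremetalhost worker-2 -n openshift-machine-api --type merge -p '{"spec":{"online":false}}'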
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/baremetalhost-metal3-io-v1alpha1
4.4.5. Disabling/Enabling Quota Accounting
4.4.5. Disabling/Enabling Quota Accounting By default, quota accounting is enabled; therefore, GFS keeps track of disk usage for every user and group even when no quota limits have been set. Quota accounting incurs unnecessary overhead if quotas are not used. You can disable quota accounting completely by setting the quota_account tunable parameter to 0. This must be done on each node and after each mount. (The 0 setting is not persistent across unmounts.) Quota accounting can be enabled by setting the quota_account tunable parameter to 1. Usage MountPoint Specifies the GFS file system to which the actions apply. quota_account {0|1} 0 = disabled 1 = enabled Comments To enable quota accounting on a file system, the quota_account parameter must be set back to 1. Afterward, the GFS quota file must be initialized to account for all current disk usage for users and groups on the file system. The quota file is initialized by running: gfs_quota init -f MountPoint. Note Initializing the quota file requires scanning the entire file system and may take a long time. Examples This example disables quota accounting on file system /gfs on a single node. This example enables quota accounting on file system /gfs on a single node and initializes the quota file.
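Because the quota_account value is reset whenever the file system is unmounted, the setting is normally reapplied after every mount on every node. The shell sketch below is illustrative and not taken from this section: the device path is a placeholder, /gfs is the example mount point, and it assumes that gfs_tool gettune can be used to display the current tunable values.

# Mount the GFS file system, then disable quota accounting for this mount
# (repeat on every node; the device path is a placeholder).
mount -t gfs /dev/vg01/lvol0 /gfs
gfs_tool settune /gfs quota_account 0

# Verify the current setting (assumes gettune prints the tunable parameters).
gfs_tool gettune /gfs | grep quota_account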
[ "fs_tool settune MountPoint quota_account {0|1}", "gfs_tool settune /gfs quota_account 0", "gfs_tool settune /gfs quota_account 1 gfs_quota init -f /gfs" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s2-manage-quotaaccount
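Because quota_account is a tunable that does not persist across unmounts, the setting has to be reapplied on every cluster node each time the file system is mounted. The loop below is a minimal sketch of doing that, assuming hypothetical node names node1 through node3, passwordless SSH between the nodes, and /gfs mounted on each of them.

# Disable quota accounting for /gfs on every node in the cluster.
# Re-run this after each remount, because the 0 setting is not persistent.
for node in node1 node2 node3; do
    ssh "${node}" "gfs_tool settune /gfs quota_account 0"
done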
Chapter 5. OLMConfig [operators.coreos.com/v1]
Chapter 5. OLMConfig [operators.coreos.com/v1] Description OLMConfig is a resource responsible for configuring OLM. Type object Required metadata 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OLMConfigSpec is the spec for an OLMConfig resource. status object OLMConfigStatus is the status for an OLMConfig resource. 5.1.1. .spec Description OLMConfigSpec is the spec for an OLMConfig resource. Type object Property Type Description features object Features contains the list of configurable OLM features. 5.1.2. .spec.features Description Features contains the list of configurable OLM features. Type object Property Type Description disableCopiedCSVs boolean DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. packageServerSyncInterval string PackageServerSyncInterval is used to define the sync interval for packageserver pods. Packageserver pods periodically check the status of CatalogSources; this specifies the period using duration format (e.g. "60m"). For this parameter, only hours ("h"), minutes ("m"), and seconds ("s") may be specified. When not specified, the period defaults to the value specified within the packageserver. 5.1.3. .status Description OLMConfigStatus is the status for an OLMConfig resource. Type object Property Type Description conditions array conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 5.1.4. .status.conditions Description Type array 5.1.5. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 5.2. API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1/olmconfigs DELETE : delete collection of OLMConfig GET : list objects of kind OLMConfig POST : create an OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name} DELETE : delete an OLMConfig GET : read the specified OLMConfig PATCH : partially update the specified OLMConfig PUT : replace the specified OLMConfig /apis/operators.coreos.com/v1/olmconfigs/{name}/status GET : read status of the specified OLMConfig PATCH : partially update status of the specified OLMConfig PUT : replace status of the specified OLMConfig 5.2.1. /apis/operators.coreos.com/v1/olmconfigs HTTP method DELETE Description delete collection of OLMConfig Table 5.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OLMConfig Table 5.2. HTTP responses HTTP code Response body 200 - OK OLMConfigList schema 401 - Unauthorized Empty HTTP method POST Description create an OLMConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body OLMConfig schema Table 5.5. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 202 - Accepted OLMConfig schema 401 - Unauthorized Empty 5.2.2. /apis/operators.coreos.com/v1/olmconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method DELETE Description delete an OLMConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OLMConfig Table 5.9. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OLMConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OLMConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body OLMConfig schema Table 5.14. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty 5.2.3. /apis/operators.coreos.com/v1/olmconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the OLMConfig HTTP method GET Description read status of the specified OLMConfig Table 5.16. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OLMConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OLMConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body OLMConfig schema Table 5.21. HTTP responses HTTP code Response body 200 - OK OLMConfig schema 201 - Created OLMConfig schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operatorhub_apis/olmconfig-operators-coreos-com-v1
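As a practical illustration of the spec.features fields documented above, the following sketch sets both fields from the command line. It assumes the cluster-scoped OLMConfig resource is named cluster, which this reference does not state, and that oc is logged in with sufficient privileges; the chosen values are examples only.

# Disable copied CSVs for cluster-scoped operators and set the packageserver
# sync interval to 60 minutes on the assumed "cluster" OLMConfig resource.
oc patch olmconfig cluster --type merge \
  -p '{"spec":{"features":{"disableCopiedCSVs":true,"packageServerSyncInterval":"60m"}}}'

# Inspect the resulting spec and any status conditions reported back by OLM.
oc get olmconfig cluster -o yaml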