10.3. Implementing the Incident Response Plan
10.3. Implementing the Incident Response Plan Once a plan of action is created, it must be agreed upon and actively implemented. Any aspect of the plan that is questioned during an active implementation can result in poor response time and downtime in the event of a breach. This is where practice exercises become invaluable. Unless something is brought to attention before the plan is actively set in production, the implementation should be agreed upon by all directly connected parties and executed with confidence. If a breach is detected and the CERT team is present for quick reaction, potential responses can vary. The team can decide to disable the network connections, disconnect the affected systems, patch the exploit, and then reconnect quickly without further potential complications. The team can also watch the perpetrators and track their actions. The team could even redirect the perpetrator to a honeypot - a system or segment of a network containing intentionally false data - used to track incursion safely and without disruption to production resources. Responding to an incident should also be accompanied by information gathering whenever possible. Running processes, network connections, files, directories, and more should be actively audited in real time. Having a snapshot of production resources for comparison can be helpful in tracking rogue services or processes. CERT members and in-house experts are great resources for tracking such anomalies in a system. System administrators know what processes should and should not appear when running top or ps. Network administrators are aware of what normal network traffic should look like when running snort or even tcpdump. These team members should know their systems and should be able to spot an anomaly more quickly than someone unfamiliar with the infrastructure.
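A minimal sketch of the baseline-and-compare approach described above (the file locations and names are arbitrary examples, not part of the original plan):

    # On a known-good system, capture a baseline of processes and listening services
    ps aux > /root/baseline-ps.txt
    netstat -tulpn > /root/baseline-net.txt
    # During an incident, capture the current state and compare it against the baseline
    ps aux > /tmp/incident-ps.txt
    diff /root/baseline-ps.txt /tmp/incident-ps.txt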
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-response-implement
Chapter 1. Operators overview
Chapter 1. Operators overview Operators are among the most important components of OpenShift Container Platform. They are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and the OpenShift CLI (oc). They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators are designed specifically for Kubernetes-native applications to implement and automate common Day 1 operations, such as installation and configuration. Operators can also automate Day 2 operations, such as autoscaling up or down and creating backups. All of these activities are directed by a piece of software running on your cluster. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators Managed by the Cluster Version Operator (CVO) and installed by default to perform cluster functions. Optional add-on Operators Managed by Operator Lifecycle Manager (OLM) and can be made accessible for users to run in their applications. Also known as OLM-based Operators. 1.1. For developers As an Operator author, you can perform the following development tasks for OLM-based Operators: Install Operator SDK CLI. Create Go-based Operators, Ansible-based Operators, Java-based Operators, and Helm-based Operators. Use Operator SDK to build, test, and deploy an Operator. Install and subscribe an Operator to your namespace. Create an application from an installed Operator through the web console. Additional resources Machine deletion lifecycle hook examples for Operator developers 1.2. For administrators As a cluster administrator, you can perform the following administrative tasks for OLM-based Operators: Manage custom catalogs. Allow non-cluster administrators to install Operators. Install an Operator from OperatorHub. View Operator status. Manage Operator conditions. Upgrade installed Operators. Delete installed Operators. Configure proxy support. Using Operator Lifecycle Manager in disconnected environments. For information about the cluster Operators that Red Hat provides, see Cluster Operators reference. 1.3. Next steps To understand more about Operators, see What are Operators?
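As a quick illustration of the two management systems described above, both kinds of Operators can be inspected from the CLI (a sketch assuming an oc client that is already logged in to the cluster):

    # Cluster Operators managed by the Cluster Version Operator (CVO)
    oc get clusteroperators
    # OLM-based Operators installed on the cluster, listed by their ClusterServiceVersions
    oc get csv --all-namespaces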
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operators/operators-overview
Developer Guide
Developer Guide Red Hat Enterprise Linux 6 An introduction to application development tools in Red Hat Enterprise Linux 6 Robert Kratky Red Hat Customer Content Services [email protected] Don Domingo Red Hat Customer Content Services Jacquelynn East Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/index
Appendix A. Component Versions
Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7.9 release.
Table A.1. Component Versions
kernel: 3.10.0-1160
kernel-alt: 4.14.0-115
QLogic qla2xxx driver: 10.01.00.22.07.9-k
QLogic qla4xxx driver: 5.04.00.00.07.02-k0
Emulex lpfc driver: 0:12.0.0.13
iSCSI initiator utils (iscsi-initiator-utils): 6.2.0.874-19
DM-Multipath (device-mapper-multipath): 0.4.9-133
LVM (lvm2): 2.02.187-6
qemu-kvm [a]: 1.5.3-175
qemu-kvm-ma [b]: 2.12.0-33
[a] The qemu-kvm packages provide KVM virtualization on AMD64 and Intel 64 systems.
[b] The qemu-kvm-ma packages provide KVM virtualization on IBM POWER8, IBM POWER9, and IBM Z. Note that KVM virtualization on IBM POWER9 and IBM Z also requires using the kernel-alt packages.
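To confirm which of these component versions are present on a given system, the installed packages can be queried with rpm; this is a sketch using package names from the table above (the qemu-kvm package applies only to hosts with the virtualization packages installed):

    rpm -q kernel iscsi-initiator-utils device-mapper-multipath lvm2 qemu-kvm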
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/component_versions
Chapter 4. Configure storage for OpenShift Container Platform services
Chapter 4. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and Modifying retention time for Prometheus metrics data in the Monitoring guide of the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 4.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster, as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to an OpenShift Data Foundation Persistent Volume for vSphere and bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims. Set the Project to openshift-image-registry. Click Create Persistent Volume Claim. From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com. Specify the Persistent Volume Claim Name, for example, ocs4registry. Specify an Access Mode of Shared Access (RWX). Specify a Size of at least 100 GB. Click Create. Wait until the status of the new Persistent Volume Claim is listed as Bound. Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions.
Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config. Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec:, replacing the existing storage: section if necessary. For example: Click Save. Verify that the new configuration is being used. Click Workloads Pods. Set the Project to openshift-image-registry. Verify that the new image-registry-* pod appears with a status of Running, and that the old image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry. 4.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as a backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create ObjectBucketClaim by following the steps in Creating Object Bucket Claim. Create an image-registry-private-configuration-user secret. Go to the OpenShift web console. Click ObjectBucketClaim --> ObjectBucketClaim Data. In the ObjectBucketClaim data, look for the MCG access key and MCG secret key in the openshift-image-registry namespace. Create the secret using the following command: Change the status of managementState of Image Registry Operator to Managed. Edit the spec.storage section of the Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console, or you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storage class is ceph-rgw and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured MCG as the OpenShift Image Registry backend storage successfully. Example output (Optional) You can also run the following command to verify if you have configured MCG as the OpenShift Image Registry backend storage successfully. Example output 4.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alertmanager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See Modifying retention time for Prometheus metrics data in the Monitoring guide of the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace.
In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps. Set the Project dropdown to openshift-monitoring. Click Create Config Map. Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets (<, >) with your own values, for example, retention: 24h or storage: 40Gi. Replace the storageClassName with the storage class that uses the provisioner openshift-storage.rbd.csi.ceph.com. In the example given below, the name of the storage class is ocs-storagecluster-ceph-rbd. Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims. Set the Project dropdown to openshift-monitoring. Verify that 5 Persistent Volume Claims are visible with a state of Bound, attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 4.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running. Go to Workloads Pods. Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type, ocs-alertmanager-claim, that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0. Figure 4.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running. Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type, ocs-prometheus-claim, that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0. Figure 4.3. Persistent Volume Claims attached to prometheus-k8s-* pod 4.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota. For more information, see OpenShift ClusterResourceQuota. With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd.
<desired_label> Specify a label for the storage quota, for example, storagequota1. Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec: <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti. <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd. <desired_quota_name> Specify a name for the storage quota, for example, quota1. <desired_label> Specify a label for the storage quota, for example, storagequota1. Save the modified storagecluster. Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the step, for example, quota1. 4.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging. Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster solely relies on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) so that it is backed by OpenShift Data Foundation. Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 4.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster is bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard is backed by a single replica. Because of the single redundancy policy, a copy of each shard is replicated across the nodes and is always available, and the copy can be recovered as long as at least two nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging. Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging. 4.5.2. Configuring cluster logging to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console.
OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 4.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
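As a quick sanity check that the storage configured in this chapter is in place, the bound claims for the monitoring and logging stacks can be listed from the CLI; this is a sketch that assumes the default openshift-monitoring and openshift-logging namespaces used above:

    # Expect three alertmanager and two prometheus claims in a Bound state
    oc get pvc -n openshift-monitoring
    # Expect one claim per Elasticsearch data node in a Bound state
    oc get pvc -n openshift-logging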
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'", "oc describe noobaa", "oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d", "oc describe pod <image-registry-name>", "oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>", "oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>", "apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] 
overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]", "oc get clusterresourcequota -A oc describe clusterresourcequota -A", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_and_allocating_storage_resources/configure-storage-for-openshift-container-platform-services_rhodf
Chapter 8. Repository mirroring
Chapter 8. Repository mirroring 8.1. Repository mirroring Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags. From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following: Choose a repository from an external registry to mirror Add credentials to access the external registry Identify specific container image repository names and tags to sync Set intervals at which a repository is synced Check the current state of synchronization To use the mirroring functionality, you need to perform the following actions: Enable repository mirroring in the Red Hat Quay configuration file Run a repository mirroring worker Create mirrored repositories All repository mirroring configurations can be performed using the configuration tool UI or by the Red Hat Quay API. 8.2. Repository mirroring compared to geo-replication Red Hat Quay geo-replication mirrors the entire image storage backend data between 2 or more different storage backends while the database is shared, for example, one Red Hat Quay registry with two different blob storage endpoints. The primary use cases for geo-replication include the following: Speeding up access to the binary blobs for geographically dispersed setups Guaranteeing that the image content is the same across regions Repository mirroring synchronizes selected repositories, or subsets of repositories, from one registry to another. The registries are distinct, with each registry having a separate database and separate image storage. The primary use cases for mirroring include the following: Independent registry deployments in different data centers or regions, where a certain subset of the overall content is supposed to be shared across the data centers and regions Automatic synchronization or mirroring of selected (allowlisted) upstream repositories from external registries into a local Red Hat Quay deployment Note Repository mirroring and geo-replication can be used simultaneously. Table 8.1. Red Hat Quay Repository mirroring and geo-replication comparison Feature / Capability Geo-replication Repository mirroring What is the feature designed to do? A shared, global registry Distinct, different registries What happens if replication or mirroring has not been completed yet? The remote copy is used (slower) No image is served Is access to all storage backends in both regions required? Yes (all Red Hat Quay nodes) No (distinct storage) Can users push images from both sites to the same repository? Yes No Is all registry content and configuration identical across all regions (shared database)? Yes No Can users select individual namespaces or repositories to be mirrored? No Yes Can users apply filters to synchronization rules? No Yes Are individual / different role-based access control configurations allowed in each region? No Yes 8.3. Using repository mirroring The following list shows features and limitations of Red Hat Quay repository mirroring: With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through Unix shell-style wildcards. For more information, see the documentation for wildcards. When a repository is set as mirrored, you cannot manually add other images to that repository.
Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repository and tag pair. For example if you change the tag so that some images in the repository no longer match, those images will be deleted. Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository. Mirroring can be configured to rollback on failure, or to run on a best-effort basis. With a mirrored repository, a user with read permissions can pull images from the repository but cannot push images to the repository. Changing settings on your mirrored repository can be performed in the Red Hat Quay user interface, using the Repositories Mirrors tab for the mirrored repository you create. Images are synced at set intervals, but can also be synced on demand. 8.4. Mirroring configuration UI Start the Quay container in configuration mode and select the Enable Repository Mirroring check box. If you want to require HTTPS communications and verify certificates during mirroring, select the HTTPS and cert verification check box. Validate and download the configuration file, and then restart Quay in registry mode using the updated config file. 8.5. Mirroring configuration fields Table 8.2. Mirroring configuration Field Type Description FEATURE_REPO_MIRROR Boolean Enable or disable repository mirroring Default: false REPO_MIRROR_INTERVAL Number The number of seconds between checking for repository mirror candidates Default: 30 REPO_MIRROR_SERVER_HOSTNAME String Replaces the SERVER_HOSTNAME as the destination for mirroring. Default: None Example : openshift-quay-service REPO_MIRROR_TLS_VERIFY Boolean Require HTTPS and verify certificates of Quay registry during mirror. Default: false REPO_MIRROR_ROLLBACK Boolean When set to true , the repository rolls back after a failed mirror attempt. Default : false 8.6. Mirroring worker Use the following procedure to start the repository mirroring worker. Procedure If you have not configured TLS communications using a /root/ca.crt certificate, enter the following command to start a Quay pod with the repomirror option: If you have configured TLS communications using a /root/ca.crt certificate, enter the following command to start the repository mirroring worker: 8.7. Creating a mirrored repository When mirroring a repository from an external container registry, you must create a new private repository. Typically, the same name is used as the target repository, for example, quay-rhel8 . 8.7.1. Repository mirroring settings Use the following procedure to adjust the settings of your mirrored repository. Prerequisites You have enabled repository mirroring in your Red Hat Quay configuration file. You have deployed a mirroring worker. Procedure In the Settings tab, set the Repository State to Mirror : In the Mirror tab, enter the details for connecting to the external registry, along with the tags, scheduling and access information: Enter the details as required in the following fields: Registry Location: The external repository you want to mirror, for example, registry.redhat.io/quay/quay-rhel8 Tags: This field is required. You may enter a comma-separated list of individual tags or tag patterns. (See Tag Patterns section for details.) Start Date: The date on which mirroring begins. The current date and time is used by default. Sync Interval: Defaults to syncing every 24 hours. You can change that based on hours or days. 
Robot User: Create a new robot account or choose an existing robot account to do the mirroring. Username: The username for accessing the external registry holding the repository you are mirroring. Password: The password associated with the Username. Note that the password cannot include characters that require an escape character (\). 8.7.2. Advanced settings In the Advanced Settings section, you can configure SSL/TLS and proxy with the following options: Verify TLS: Select this option if you want to require HTTPS and to verify certificates when communicating with the target remote registry. Accept Unsigned Images: Selecting this option allows unsigned images to be mirrored. HTTP Proxy: Identify the HTTP proxy server needed to access the remote site, if a proxy server is needed. HTTPS Proxy: Identify the HTTPS proxy server needed to access the remote site, if a proxy server is needed. No Proxy: List of locations that do not require proxy. 8.7.3. Synchronize now Use the following procedure to initiate the mirroring operation. Procedure To perform an immediate mirroring operation, press the Sync Now button on the repository's Mirroring tab. The logs are available on the Usage Logs tab: When the mirroring is complete, the images will appear in the Tags tab: Below is an example of a completed Repository Mirroring screen: 8.8. Event notifications for mirroring There are three notification events for repository mirroring: Repository Mirror Started Repository Mirror Success Repository Mirror Unsuccessful The events can be configured inside of the Settings tab for each repository, and all existing notification methods such as email, Slack, Quay UI, and webhooks are supported. 8.9. Mirroring tag patterns At least one tag must be entered. The following table references possible image tag patterns. 8.9.1. Pattern syntax Pattern Description * Matches all characters ? Matches any single character [seq] Matches any character in seq [!seq] Matches any character not in seq 8.9.2. Example tag patterns Example Pattern Example Matches v3* v32, v3.1, v3.2, v3.2-4beta, v3.3 v3.* v3.1, v3.2, v3.2-4beta v3.? v3.1, v3.2, v3.3 v3.[12] v3.1, v3.2 v3.[12]* v3.1, v3.2, v3.2-4beta v3.[!1]* v3.2, v3.2-4beta, v3.3 8.10. Working with mirrored repositories Once you have created a mirrored repository, there are several ways you can work with that repository. Select your mirrored repository from the Repositories page and do any of the following: Enable/disable the repository: Select the Mirroring button in the left column, then toggle the Enabled check box to enable or disable the repository temporarily. Check mirror logs: To make sure the mirrored repository is working properly, you can check the mirror logs. To do that, select the Usage Logs button in the left column. Here's an example: Sync mirror now: To immediately sync the images in your repository, select the Sync Now button. Change credentials: To change the username and password, select DELETE from the Credentials line. Then select None and add the username and password needed to log into the external registry when prompted. Cancel mirroring: To stop mirroring, which keeps the current images available but stops new ones from being synced, select the CANCEL button. Set robot permissions: Red Hat Quay robot accounts are named tokens that hold credentials for accessing external repositories.
By assigning credentials to a robot, that robot can be used across multiple mirrored repositories that need to access the same external registry. You can assign an existing robot to a repository by going to Account Settings, then selecting the Robot Accounts icon in the left column. For the robot account, choose the link under the REPOSITORIES column. From the pop-up window, you can: Check which repositories are assigned to that robot. Assign read, write or Admin privileges to that robot from the PERMISSION field shown in this figure: Change robot credentials: Robots can hold credentials such as Kubernetes secrets, Docker login information, and Mesos bundles. To change robot credentials, select the Options gear on the robot's account line on the Robot Accounts window and choose View Credentials. Add the appropriate credentials for the external repository the robot needs to access. Check and change general settings: Select the Settings button (gear icon) from the left column on the mirrored repository page. On the resulting page, you can change settings associated with the mirrored repository. In particular, you can change User and Robot Permissions, to specify exactly which users and robots can read from or write to the repo. 8.11. Repository mirroring recommendations Best practices for repository mirroring include the following: Repository mirroring pods can run on any node. This means that you can run mirroring on nodes where Red Hat Quay is already running. Repository mirroring is scheduled in the database and runs in batches. As a result, repository mirroring workers check each repository mirror configuration and determine when the next synchronization needs to run. More mirror workers means more repositories can be mirrored at the same time. For example, running 10 mirror workers means that a user can run 10 mirroring operations in parallel. If a user only has 2 workers with 10 mirror configurations, only 2 operations can be performed. The optimal number of mirroring pods depends on the following conditions: The total number of repositories to be mirrored The number of images and tags in the repositories and the frequency of changes Parallel batching For example, if a user is mirroring a repository that has 100 tags, the mirror will be completed by one worker. Users must consider how many repositories they want to mirror in parallel, and base the number of workers around that. Multiple tags in the same repository cannot be mirrored in parallel.
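A minimal sketch of the mirroring-related entries in the Quay config.yaml, using only the fields from Table 8.2 (the values shown are illustrative examples and defaults, not recommendations):

    FEATURE_REPO_MIRROR: true
    REPO_MIRROR_INTERVAL: 30
    REPO_MIRROR_SERVER_HOSTNAME: "openshift-quay-service"
    REPO_MIRROR_TLS_VERIFY: true
    REPO_MIRROR_ROLLBACK: false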
[ "sudo podman run -d --name mirroring-worker -v USDQUAY/config:/conf/stack:Z registry.redhat.io/quay/quay-rhel8:v3.9.10 repomirror", "sudo podman run -d --name mirroring-worker -v USDQUAY/config:/conf/stack:Z -v /root/ca.crt:/etc/pki/ca-trust/source/anchors/ca.crt:Z registry.redhat.io/quay/quay-rhel8:v3.9.10 repomirror" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/manage_red_hat_quay/repo-mirroring-in-red-hat-quay
Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud
Chapter 1. Deploying OpenShift Data Foundation using IBM Cloud You can use Red Hat OpenShift Data Foundation for your workloads that run in IBM Cloud. These workloads might run in Red Hat OpenShift on IBM Cloud clusters that are in the public cloud or in your own IBM Cloud Satellite location. 1.1. Deploying on IBM Cloud public When you create a Red Hat OpenShift on IBM Cloud cluster, you can choose between classic or Virtual Private Cloud (VPC) infrastructure. The Red Hat OpenShift Data Foundation managed cluster add-on supports both infrastructure providers. For classic clusters, the add-on deploys the OpenShift Data Foundation operator with the Local Storage operator. For VPC clusters, the add-on deploys the OpenShift Data Foundation operator which you can use with IBM Cloud Block Storage on VPC storage volumes. Benefits of using the OpenShift Data Foundation managed cluster add-on to install OpenShift Data Foundation instead of installing from OperatorHub Deploy OpenShift Data Foundation from a single CRD instead of manually creating separate resources. For example, in the single CRD that the add-on enables, you configure the namespaces, storagecluster, and other resources you need to run OpenShift Data Foundation. Classic - Automatically create PVs using the storage devices that you specify in your OpenShift Data Foundation CRD. VPC - Dynamically provision IBM Cloud Block Storage on VPC storage volumes for your OpenShift Data Foundation storage cluster. Get patch updates automatically for the managed add-on. Update the OpenShift Data Foundation version by modifying a single field in the CRD. Integrate with IBM Cloud Object Storage by providing credentials in the CRD. 1.1.1. Deploying on classic infrastructure in IBM Cloud You can deploy OpenShift Data Foundation on IBM Cloud classic clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator and the Local Storage operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud classic cluster, you create a single custom resource definition that contains your storage device configuration details. For more information, see Preparing your cluster for OpenShift Data Foundation. 1.1.2. Deploying on VPC infrastructure in IBM Cloud You can deploy OpenShift Data Foundation on IBM Cloud VPC clusters by using the managed cluster add-on to install the OpenShift Data Foundation operator. After you install the OpenShift Data Foundation add-on in your IBM Cloud VPC cluster, you create a custom resource definition that contains your worker node information and the IBM Cloud Block Storage for VPC storage classes that you want to use to dynamically provision the OpenShift Data Foundation storage devices. For more information, see Preparing your cluster for OpenShift Data Foundation. 1.2. Deploying on IBM Cloud Satellite With IBM Cloud Satellite, you can create a location with your own infrastructure, such as an on-premises data center or another cloud provider, to bring IBM Cloud services anywhere, including where your data resides. If you store your data by using Red Hat OpenShift Data Foundation, you can use Satellite storage templates to consistently install OpenShift Data Foundation across the clusters in your Satellite location. The templates help you create a Satellite configuration of the various OpenShift Data Foundation parameters, such as the device paths to your local disks or the storage classes that you want to use to dynamically provision volumes.
Then, you assign the Satellite configuration to the clusters where you want to install OpenShift Data Foundation. Benefits of using Satellite storage to install OpenShift Data Foundation instead of installing from OperatorHub Create versions of your OpenShift Data Foundation configuration to install across multiple clusters or expand your existing configuration. Update OpenShift Data Foundation across multiple clusters consistently. Standardize storage classes that developers can use for persistent storage across clusters. Use a similar deployment pattern for your apps with Satellite Config. Choose from templates for an OpenShift Data Foundation cluster using local disks on your worker nodes or an OpenShift Data Foundation cluster that uses dynamically provisioned volumes from your storage provider. Integrate with IBM Cloud Object Storage by providing credentials in the template. 1.2.1. Using OpenShift Data Foundation with the local storage present on your worker nodes in IBM Cloud Satellite For an OpenShift Data Foundation configuration that uses the local storage present on your worker nodes, you can use a Satellite template to create your OpenShift Data Foundation configuration. Your cluster must meet certain requirements, such as CPU and memory requirements and size requirements of the available raw unformatted, unmounted disks. Choose a local OpenShift Data Foundation configuration when you want to use the local storage devices already present on your worker nodes, or statically provisioned raw volumes that you attach to your worker nodes. For more information, see the IBM Cloud Satellite local OpenShift Data Foundation storage documentation. 1.2.2. Using OpenShift Data Foundation with remote, dynamically provisioned storage volumes in IBM Cloud Satellite For an OpenShift Data Foundation configuration that uses remote, dynamically provisioned storage volumes from your preferred storage provider, you can use a Satellite storage template to create your storage configuration. In your OpenShift Data Foundation configuration, you specify the storage classes that you want to use and the volume sizes that you want to provision. Your cluster must meet certain requirements, such as CPU and memory requirements. Choose the OpenShift Data Foundation-remote storage template when you want to use dynamically provisioned remote volumes from your storage provider in your OpenShift Data Foundation configuration. For more information, see the IBM Cloud Satellite remote OpenShift Data Foundation storage documentation.
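Whichever deployment path is used, a basic post-install check can be run from the CLI; this is a sketch that assumes the default openshift-storage namespace used by OpenShift Data Foundation:

    # Confirm the storage cluster resource exists and the operator pods are running
    oc get storagecluster -n openshift-storage
    oc get pods -n openshift-storage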
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_cloud/deploying_openshift_container_storage_using_ibm_cloud_rhodf
Multitenancy
Multitenancy Red Hat OpenShift GitOps 1.15 Understanding multitenancy support in GitOps Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/multitenancy/index
8.4. The pacemaker_remote Service
8.4. The pacemaker_remote Service The pacemaker_remote service allows nodes not running corosync to integrate into the cluster and have the cluster manage their resources just as if they were real cluster nodes. Among the capabilities that the pacemaker_remote service provides are the following: The pacemaker_remote service allows you to scale beyond the corosync 16-node limit. The pacemaker_remote service allows you to manage a virtual environment as a cluster resource and also to manage individual services within the virtual environment as cluster resources. The following terms are used to describe the pacemaker_remote service. cluster node - A node running the High Availability services ( pacemaker and corosync ). remote node - A node running pacemaker_remote to remotely integrate into the cluster without requiring corosync cluster membership. A remote node is configured as a cluster resource that uses the ocf:pacemaker:remote resource agent. guest node - A virtual guest node running the pacemaker_remote service. A guest node is configured using the remote-node metadata option of a resource agent such as ocf:pacemaker:VirtualDomain . The virtual guest resource is managed by the cluster; it is both started by the cluster and integrated into the cluster as a remote node. pacemaker_remote - A service daemon capable of performing remote application management within remote nodes and guest nodes (KVM and LXC) in a Pacemaker cluster environment. This service is an enhanced version of Pacemaker's local resource management daemon (LRMD) that is capable of managing resources remotely on a node not running corosync. LXC - A Linux Container defined by the libvirt-lxc Linux container driver. A Pacemaker cluster running the pacemaker_remote service has the following characteristics. The remote nodes and/or the guest nodes run the pacemaker_remote service (with very little configuration required on the virtual machine side). The cluster stack ( pacemaker and corosync ), running on the cluster nodes, connects to the pacemaker_remote service on the remote nodes, allowing them to integrate into the cluster. The cluster stack ( pacemaker and corosync ), running on the cluster nodes, launches the guest nodes and immediately connects to the pacemaker_remote service on the guest nodes, allowing them to integrate into the cluster. The key difference between the cluster nodes and the remote and guest nodes that the cluster nodes manage is that the remote and guest nodes are not running the cluster stack. This means the remote and guest nodes have the following limitations: they do not participate in quorum they do not execute fencing device actions they are not eligible to be the cluster's Designated Controller (DC) they do not themselves run the full range of pcs commands On the other hand, remote nodes and guest nodes are not bound to the scalability limits associated with the cluster stack. Other than these noted limitations, the remote nodes behave just like cluster nodes with respect to resource management, and the remote and guest nodes can themselves be fenced. The cluster is fully capable of managing and monitoring resources on each remote and guest node: You can build constraints against them, put them in standby, or perform any other action you perform on cluster nodes with the pcs commands. Remote and guest nodes appear in cluster status output just as cluster nodes do. 8.4.1. 
Host and Guest Authentication The connection between cluster nodes and pacemaker_remote is secured using Transport Layer Security (TLS) with pre-shared key (PSK) encryption and authentication over TCP (using port 3121 by default). This means both the cluster node and the node running pacemaker_remote must share the same private key. By default this key must be placed at /etc/pacemaker/authkey on both cluster nodes and remote nodes.
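A minimal sketch of the two pieces described above, the shared authentication key and a remote node resource (the node name and address are hypothetical):

    # Create the pre-shared key; the same file must exist on every cluster node and remote node
    dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1
    # Integrate a remote node into the cluster as a resource using the ocf:pacemaker:remote agent
    pcs resource create remote1 ocf:pacemaker:remote server=remote1.example.com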
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/pacemaker_remote
5.6. Adding Bricks to Volumes
5.6. Adding Bricks to Volumes Click Add Bricks to add bricks to your volume. A brick is the basic unit of storage, represented by an export directory on a server in the storage cluster. You can expand or shrink your cluster by adding new bricks or deleting existing bricks. Figure 5.5. Add Bricks Enter the path for the brick and click OK. In the Allow Access From field, specify volume access control as a comma-separated list of IP addresses or hostnames. You can use an asterisk (*) as a wildcard to specify ranges of addresses such as IP addresses or hostnames. You need to use IP-based authentication for Gluster exports. Click OK to create the volume. The new volume is added and it appears on the Volumes tab. You can reuse a brick by selecting Allow bricks in root partition and reuse the bricks by clearing xattrs. You can create a storage domain using the optimized volume and manage it using Red Hat Virtualization Manager.
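The equivalent expansion and access-control steps from the Gluster command line look roughly like the following (the volume name, brick path, and address range are hypothetical):

    # Add a brick to an existing volume and rebalance the data across bricks
    gluster volume add-brick datavol server3:/rhgs/brick1/datavol
    gluster volume rebalance datavol start
    # Restrict access to a comma-separated list of addresses; wildcards are allowed
    gluster volume set datavol auth.allow "192.168.100.*"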
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/adding_bricks_to_volumes
Chapter 124. Google Pubsub Component
Chapter 124. Google Pubsub Component Available as of Camel version 2.19 The Google Pubsub component provides access to Cloud Pub/Sub Infrastructure via the Google Client Services API . The current implementation does not use gRPC. Maven users will need to add the following dependency to their pom.xml for this component: 124.1. URI Format The Google Pubsub Component uses the following URI format: Destination Name can be either a topic or a subscription name. 124.2. Options The Google Pubsub component supports 2 options, which are listed below. Name Description Default Type connectionFactory (common) Sets the connection factory to use: provides the ability to explicitly manage connection credentials: - the path to the key file - the Service Account Key / Email pair GooglePubsubConnectionFactory resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Google Pubsub endpoint is configured using URI syntax: with the following path and query parameters: 124.2.1. Path Parameters (2 parameters): Name Description Default Type projectId Required Project Id String destinationName Required Destination Name String 124.2.2. Query Parameters (9 parameters): Name Description Default Type ackMode (common) AUTO = exchange gets ack'ed/nack'ed on completion. NONE = downstream process has to ack/nack explicitly AUTO AckMode concurrentConsumers (common) The number of parallel streams consuming from the subscription 1 Integer connectionFactory (common) ConnectionFactory to obtain connection to PubSub Service. If none is provided, the default will be used. GooglePubsubConnectionFactory loggerId (common) Logger ID to use when a match to the parent route is required String maxMessagesPerPoll (common) The max number of messages to receive from the server in a single API call 1 Integer bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 124.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.google-pubsub.enabled Enable google-pubsub component true Boolean camel.component.google-pubsub.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 124.4. Producer Endpoints Producer endpoints can accept and deliver to PubSub individual and grouped exchanges alike. 
Grouped exchanges have the Exchange.GROUPED_EXCHANGE property set. Google PubSub expects the payload to be a byte[] array. Producer endpoints will send: String body as byte[] encoded as UTF-8 byte[] body as is Everything else will be serialised into a byte[] array A Map set as the message header GooglePubsubConstants.ATTRIBUTES will be sent as PubSub attributes. Once the exchange has been delivered to PubSub, the PubSub Message ID will be assigned to the header GooglePubsubConstants.MESSAGE_ID . 124.5. Consumer Endpoints Google PubSub will redeliver the message if it has not been acknowledged within the time period set as a configuration option on the subscription. The component will acknowledge the message once exchange processing has been completed. If the route throws an exception, the exchange is marked as failed and the component will NACK the message - it will be redelivered immediately. To ack/nack the message the component uses the Acknowledgement ID stored as the header GooglePubsubConstants.ACK_ID . If the header is removed or tampered with, the ack will fail and the message will be redelivered again after the ack deadline. 124.6. Message Headers Headers set by the consumer endpoints: GooglePubsubConstants.MESSAGE_ID GooglePubsubConstants.ATTRIBUTES GooglePubsubConstants.PUBLISH_TIME GooglePubsubConstants.ACK_ID 124.7. Message Body The consumer endpoint returns the content of the message as byte[] - exactly as the underlying system sends it. It is up to the route to convert/unmarshal the contents. 124.8. Authentication Configuration Google Pubsub component authentication is targeted for use with the GCP Service Accounts. For more information, please refer to the Google Cloud Platform Auth Guide. Google security credentials can be set explicitly via one of the two options: Service Account Email and Service Account Key (PEM format) GCP credentials file location If both are set, the Service Account Email/Key will take precedence. Or implicitly, where the connection factory falls back on Application Default Credentials . Note: The location of the default credentials file is configurable via the GOOGLE_APPLICATION_CREDENTIALS environment variable. Service Account Email and Service Account Key can be found in the GCP JSON credentials file as client_email and private_key respectively. 124.9. Rollback and Redelivery The rollback for Google PubSub relies on the idea of the Acknowledgement Deadline - the time period where Google PubSub expects to receive the acknowledgement. If the acknowledgement has not been received, the message is redelivered. Google provides an API to extend the deadline for a message. More information is available in the Google PubSub documentation. So, rollback is effectively a deadline extension API call with a zero value - that is, the deadline is reached now and the message can be redelivered to the consumer. It is possible to delay the message redelivery by setting the acknowledgement deadline explicitly for the rollback by setting the message header GooglePubsubConstants.ACK_DEADLINE to the value in seconds.
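For the implicit Application Default Credentials fallback mentioned above, the environment variable can simply point at a service account key file before the Camel application is started (the path shown is illustrative):

    # GCP JSON credentials file containing client_email and private_key
    export GOOGLE_APPLICATION_CREDENTIALS=/etc/camel/gcp-service-account.json
    # A consumer endpoint would then be referenced in a route as, for example:
    #   google-pubsub:my-gcp-project:my-subscription?concurrentConsumers=2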
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-google-pubsub</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "google-pubsub://project-id:destinationName?[options]", "google-pubsub:projectId:destinationName" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/google-pubsub-component
15.5. Live KVM Migration with virsh
15.5. Live KVM Migration with virsh A guest virtual machine can be migrated to another host physical machine with the virsh command. The migrate command accepts parameters in the following format: Note that the --live option may be eliminated when live migration is not required. Additional options are listed in Section 15.5.2, "Additional Options for the virsh migrate Command" . The GuestName parameter represents the name of the guest virtual machine which you want to migrate. The DestinationURL parameter is the connection URL of the destination host physical machine. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor and have libvirt running. Note The DestinationURL parameter for normal migration and peer2peer migration has different semantics: normal migration: the DestinationURL is the URL of the target host physical machine as seen from the source guest virtual machine. peer2peer migration: DestinationURL is the URL of the target host physical machine as seen from the source host physical machine. Once the command is entered, you will be prompted for the root password of the destination system. Important Name resolution must be working on both sides (source and destination) in order for migration to succeed. Each side must be able to find the other. Make sure that you can ping one side to the other to check that the name resolution is working. Example: live migration with virsh This example migrates from host1.example.com to host2.example.com . Change the host physical machine names for your environment. This example migrates a virtual machine named guest1-rhel6-64 . This example assumes you have fully configured shared storage and meet all the prerequisites (listed here: Migration requirements ). Verify the guest virtual machine is running From the source system, host1.example.com , verify guest1-rhel6-64 is running: Migrate the guest virtual machine Execute the following command to live migrate the guest virtual machine to the destination, host2.example.com . Append /system to the end of the destination URL to tell libvirt that you need full access. Once the command is entered you will be prompted for the root password of the destination system. Wait The migration may take some time depending on load and the size of the guest virtual machine. virsh only reports errors. The guest virtual machine continues to run on the source host physical machine until fully migrated. Verify the guest virtual machine has arrived at the destination host From the destination system, host2.example.com , verify guest1-rhel7-64 is running: The live migration is now complete. Note libvirt supports a variety of networking methods including TLS/SSL, UNIX sockets, SSH, and unencrypted TCP. For more information on using other methods, see Chapter 18, Remote Management of Guests . Note Non-running guest virtual machines can be migrated using the following command: 15.5.1. Additional Tips for Migration with virsh It is possible to perform multiple, concurrent live migrations where each migration runs in a separate command shell. However, this should be done with caution and should involve careful calculations as each migration instance uses one MAX_CLIENT from each side (source and target). As the default setting is 20, there is enough to run 10 instances without changing the settings. Should you need to change the settings, see the procedure Procedure 15.1, "Configuring libvirtd.conf" . 
Open the libvirtd.conf file as described in Procedure 15.1, "Configuring libvirtd.conf" . Look for the Processing controls section. Change the max_clients and max_workers parameters settings. It is recommended that the number be the same in both parameters. The max_clients will use 2 clients per migration (one per side) and max_workers will use 1 worker on the source and 0 workers on the destination during the perform phase and 1 worker on the destination during the finish phase. Important The max_clients and max_workers parameters settings are affected by all guest virtual machine connections to the libvirtd service. This means that any user that is using the same guest virtual machine and is performing a migration at the same time will also obey the limits set in the max_clients and max_workers parameters settings. This is why the maximum value needs to be considered carefully before performing a concurrent live migration. Important The max_clients parameter controls how many clients are allowed to connect to libvirt. When a large number of containers are started at once, this limit can be easily reached and exceeded. The value of the max_clients parameter could be increased to avoid this, but doing so can leave the system more vulnerable to denial of service (DoS) attacks against instances. To alleviate this problem, a new max_anonymous_clients setting has been introduced in Red Hat Enterprise Linux 7.0 that specifies a limit of connections which are accepted but not yet authenticated. You can implement a combination of max_clients and max_anonymous_clients to suit your workload. Save the file and restart the service. Note There may be cases where a migration connection drops because there are too many ssh sessions that have been started, but not yet authenticated. By default, sshd allows only 10 sessions to be in a "pre-authenticated state" at any time. This setting is controlled by the MaxStartups parameter in the sshd configuration file (located here: /etc/ssh/sshd_config ), which may require some adjustment. Adjusting this parameter should be done with caution as the limitation is put in place to prevent DoS attacks (and over-use of resources in general). Setting this value too high will negate its purpose. To change this parameter, edit the file /etc/ssh/sshd_config , remove the # from the beginning of the MaxStartups line, and change the 10 (default value) to a higher number. Remember to save the file and restart the sshd service. For more information, see the sshd_config man page. 15.5.2. Additional Options for the virsh migrate Command In addition to --live , virsh migrate accepts the following options: --direct - used for direct migration --p2p - used for peer-to-peer migration --tunneled - used for tunneled migration --offline - migrates domain definition without starting the domain on destination and without stopping it on source host. Offline migration may be used with inactive domains and it must be used with the --persistent option. --persistent - leaves the domain persistent on destination host physical machine --undefinesource - undefines the domain on the source host physical machine --suspend - leaves the domain paused on the destination host physical machine --change-protection - enforces that no incompatible configuration changes will be made to the domain while the migration is underway; this flag is implicitly enabled when supported by the hypervisor, but can be explicitly used to reject the migration if the hypervisor lacks change protection support. 
--unsafe - forces the migration to occur, ignoring all safety procedures. --verbose - displays the progress of migration as it is occurring --compressed - activates compression of memory pages that have to be transferred repeatedly during live migration. --abort-on-error - cancels the migration if a soft error (for example I/O error) happens during the migration. --domain [name] - sets the domain name, id or uuid. --desturi [URI] - connection URI of the destination host as seen from the client (normal migration) or source (p2p migration). --migrateuri [URI] - the migration URI, which can usually be omitted. --graphicsuri [URI] - graphics URI to be used for seamless graphics migration. --listen-address [address] - sets the listen address that the hypervisor on the destination side should bind to for incoming migration. --timeout [seconds] - forces a guest virtual machine to suspend when the live migration counter exceeds N seconds. It can only be used with a live migration. Once the timeout is initiated, the migration continues on the suspended guest virtual machine. --dname [newname] - is used for renaming the domain during migration, which also usually can be omitted --xml [filename] - the filename indicated can be used to supply an alternative XML file for use on the destination to supply a larger set of changes to any host-specific portions of the domain XML, such as accounting for naming differences between source and destination in accessing underlying storage. This option is usually omitted. --migrate-disks [disk_identifiers] - this option can be used to select which disks are copied during the migration. This allows for more efficient live migration when copying certain disks is undesirable, such as when they already exist on the destination, or when they are no longer useful. [disk_identifiers] should be replaced by a comma-separated list of disks to be migrated, identified by their arguments found in the <target dev= /> line of the Domain XML file. In addition, the following commands may help as well: virsh migrate-setmaxdowntime [domain] [downtime] - will set a maximum tolerable downtime for a domain which is being live-migrated to another host. The specified downtime is in milliseconds. The domain specified must be the same domain that is being migrated. virsh migrate-compcache [domain] --size - will set and or get the size of the cache in bytes which is used for compressing repeatedly transferred memory pages during a live migration. When the --size is not used the command displays the current size of the compression cache. When --size is used, and specified in bytes, the hypervisor is asked to change compression to match the indicated size, following which the current size is displayed. The --size argument is supposed to be used while the domain is being live migrated as a reaction to the migration progress and increasing number of compression cache misses obtained from the domjobinfo . virsh migrate-setspeed [domain] [bandwidth] - sets the migration bandwidth in Mib/sec for the specified domain which is being migrated to another host. virsh migrate-getspeed [domain] - gets the maximum migration bandwidth that is available in Mib/sec for the specified domain. For more information, see Migration Limitations or the virsh man page.
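For completeness, the same live migration can be driven programmatically through the libvirt Python bindings. The following sketch mirrors the virsh example above; it assumes the python3-libvirt package, reuses the host and guest names from the example, and is not a substitute for the documented virsh workflow.

# A rough equivalent of the chapter's virsh example using the libvirt Python bindings.
import libvirt

GUEST = "guest1-rhel7-64"
DEST_URI = "qemu+ssh://host2.example.com/system"   # append /system for full access

src = libvirt.open("qemu:///system")               # source host connection
dst = libvirt.open(DEST_URI)                       # destination host connection
try:
    dom = src.lookupByName(GUEST)

    # VIR_MIGRATE_LIVE corresponds to --live; add VIR_MIGRATE_PEER2PEER for --p2p,
    # VIR_MIGRATE_PERSIST_DEST for --persistent, and so on.
    flags = libvirt.VIR_MIGRATE_LIVE

    new_dom = dom.migrate(dst, flags, None, None, 0)   # dname=None, uri=None, bandwidth=0
    print("Migration finished, domain now runs as:", new_dom.name())
finally:
    dst.close()
    src.close()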
[ "virsh migrate --live GuestName DestinationURL", "virsh list Id Name State ---------------------------------- 10 guest1-rhel6-64 running", "virsh migrate --live guest1-rhel7-64 qemu+ssh://host2.example.com/system", "virsh list Id Name State ---------------------------------- 10 guest1-rhel7-64 running", "virsh migrate --offline --persistent", "################################################################# # Processing controls # The maximum number of concurrent client connections to allow over all sockets combined. #max_clients = 5000 The maximum length of queue of connections waiting to be accepted by the daemon. Note, that some protocols supporting retransmission may obey this so that a later reattempt at connection succeeds. #max_queued_clients = 1000 The minimum limit sets the number of workers to start up initially. If the number of active clients exceeds this, then more threads are spawned, upto max_workers limit. Typically you'd want max_workers to equal maximum number of clients allowed #min_workers = 5 #max_workers = 20 The number of priority workers. If all workers from above pool will stuck, some calls marked as high priority (notably domainDestroy) can be executed in this pool. #prio_workers = 5 Total global limit on concurrent RPC calls. Should be at least as large as max_workers. Beyond this, RPC requests will be read into memory and queued. This directly impact memory usage, currently each request requires 256 KB of memory. So by default upto 5 MB of memory is used # XXX this isn't actually enforced yet, only the per-client limit is used so far #max_requests = 20 Limit on concurrent requests from a single client connection. To avoid one client monopolizing the server this should be a small fraction of the global max_requests and max_workers parameter #max_client_requests = 5 #################################################################" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_live_migration-live_kvm_migration_with_virsh
Chapter 7. Ceph Monitor and OSD interaction configuration
Chapter 7. Ceph Monitor and OSD interaction configuration As a storage administrator, you must properly configure the interactions between the Ceph Monitors and OSDs to ensure a stable working environment. 7.1. Prerequisites Installation of the Red Hat Ceph Storage software. 7.2. Ceph Monitor and OSD interaction After you have completed your initial Ceph configuration, you can deploy and run Ceph. When you execute a command such as ceph health or ceph -s , the Ceph Monitor reports on the current state of the Ceph storage cluster. The Ceph Monitor knows about the Ceph storage cluster by requiring reports from each Ceph OSD daemon, and by receiving reports from Ceph OSD daemons about the status of their neighboring Ceph OSD daemons. If the Ceph Monitor does not receive reports, or if it receives reports of changes in the Ceph storage cluster, the Ceph Monitor updates the status of the Ceph cluster map. Ceph provides reasonable default settings for Ceph Monitor and OSD interaction. However, you can override the defaults. The following sections describe how Ceph Monitors and Ceph OSD daemons interact for the purposes of monitoring the Ceph storage cluster. 7.3. OSD heartbeat Each Ceph OSD daemon checks the heartbeat of other Ceph OSD daemons every 6 seconds. To change the heartbeat interval, add the osd heartbeat interval setting under the [osd] section of the Ceph configuration file, or change its value at runtime. If a neighboring Ceph OSD daemon does not send heartbeat packets within a 20 second grace period, the Ceph OSD daemon might consider the neighboring Ceph OSD daemon down . It can report it back to a Ceph Monitor, which will update the Ceph cluster map. To change this grace period, add the osd heartbeat grace setting under the [osd] section of the Ceph configuration file, or set its value at runtime. 7.4. Reporting an OSD as down By default, two Ceph OSD Daemons from different hosts must report to the Ceph Monitors that another Ceph OSD Daemon is down before the Ceph Monitors acknowledge that the reported Ceph OSD Daemon is down . However, there is chance that all the OSDs reporting the failure are in different hosts in a rack with a bad switch that causes connection problems between OSDs. To avoid a "false alarm," Ceph considers the peers reporting the failure as a proxy for a "subcluster" that is similarly laggy. While this is not always the case, it may help administrators localize the grace correction to a subset of the system that is performing poorly. Ceph uses the mon_osd_reporter_subtree_level setting to group the peers into the "subcluster" by their common ancestor type in the CRUSH map. By default, only two reports from a different subtree are required to report another Ceph OSD Daemon down . Administrators can change the number of reporters from unique subtrees and the common ancestor type required to report a Ceph OSD Daemon down to a Ceph Monitor by adding the mon_osd_min_down_reporters and mon_osd_reporter_subtree_level settings under the [mon] section of the Ceph configuration file, or by setting the value at runtime. 7.5. Reporting a peering failure If a Ceph OSD daemon cannot peer with any of the Ceph OSD daemons defined in its Ceph configuration file or the cluster map, it will ping a Ceph Monitor for the most recent copy of the cluster map every 30 seconds. You can change the Ceph Monitor heartbeat interval by adding the osd mon heartbeat interval setting under the [osd] section of the Ceph configuration file, or by setting the value at runtime. 7.6. 
OSD reporting status If a Ceph OSD Daemon does not report to a Ceph Monitor, the Ceph Monitor will consider the Ceph OSD Daemon down after the mon osd report timeout elapses. A Ceph OSD Daemon sends a report to a Ceph Monitor within 5 seconds of a reportable event, such as a failure, a change in placement group stats, a change in up_thru , or the daemon booting. You can change the Ceph OSD Daemon minimum report interval by adding the osd mon report interval min setting under the [osd] section of the Ceph configuration file, or by setting the value at runtime. A Ceph OSD Daemon also sends a report to a Ceph Monitor every 120 seconds, irrespective of whether any notable changes occur. You can change the Ceph Monitor report interval by adding the osd mon report interval max setting under the [osd] section of the Ceph configuration file, or by setting the value at runtime. 7.7. Additional Resources See all the Red Hat Ceph Storage Ceph Monitor and OSD configuration options in Appendix G for specific option descriptions and usage.
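The option names used in this chapter can also be inspected or overridden from a client program. The following Python sketch assumes the python-rados bindings that ship with Ceph; it only affects the handle it creates and is meant to illustrate the option names, not to replace editing the Ceph configuration file or setting values at runtime as described above.

# A minimal sketch using the python-rados bindings: override two options for this handle
# before connecting, then read back the effective values of the options discussed in this
# chapter (written with underscores). Exact option names can vary between releases.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")

cluster.conf_set("osd_heartbeat_interval", "6")
cluster.conf_set("osd_heartbeat_grace", "20")

cluster.connect()
try:
    for opt in ("osd_heartbeat_interval",
                "osd_heartbeat_grace",
                "mon_osd_min_down_reporters",
                "mon_osd_reporter_subtree_level",
                "osd_mon_heartbeat_interval"):
        print(opt, "=", cluster.conf_get(opt))
finally:
    cluster.shutdown()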
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/configuration_guide/ceph-monitor-and-osd-interaction-configuration
Chapter 6. Using SOAP 1.1 Messages
Chapter 6. Using SOAP 1.1 Messages Abstract Apache CXF provides a tool to generate a SOAP 1.1 binding which does not use any SOAP headers. However, you can add SOAP headers to your binding using any text or XML editor. 6.1. Adding a SOAP 1.1 Binding Using wsdl2soap To generate a SOAP 1.1 binding using wsdl2soap use the following command: wsdl2soap -i port-type-name -b binding-name -d output-directory -o output-file -n soap-body-namespace -style (document/rpc)-use (literal/encoded)-v-verbose-quiet wsdlurl Note To use wsdl2soap you will need to download the Apache CXF distribution. The command has the following options: Option Interpretation -i port-type-name Specifies the portType element for which a binding is generated. wsdlurl The path and name of the WSDL file containing the portType element definition. The tool has the following optional arguments: Option Interpretation -b binding-name Specifies the name of the generated SOAP binding. -d output-directory Specifies the directory to place the generated WSDL file. -o output-file Specifies the name of the generated WSDL file. -n soap-body-namespace Specifies the SOAP body namespace when the style is RPC. -style (document/rpc) Specifies the encoding style (document or RPC) to use in the SOAP binding. The default is document. -use (literal/encoded) Specifies the binding use (encoded or literal) to use in the SOAP binding. The default is literal. -v Displays the version number for the tool. -verbose Displays comments during the code generation process. -quiet Suppresses comments during the code generation process. The -i port-type-name and wsdlurl arguments are required. If the -style rpc argument is specified, the -n soap-body-namspace argument is also required. All other arguments are optional and may be listed in any order. Important wsdl2soap does not support the generation of document/encoded SOAP bindings. Example If your system has an interface that takes orders and offers a single operation to process the orders it is defined in a WSDL fragment similar to the one shown in Example 6.1, "Ordering System Interface" . Example 6.1. Ordering System Interface The SOAP binding generated for orderWidgets is shown in Example 6.2, "SOAP 1.1 Binding for orderWidgets " . Example 6.2. SOAP 1.1 Binding for orderWidgets This binding specifies that messages are sent using the document/literal message style. 6.2. Adding SOAP Headers to a SOAP 1.1 Binding Overview SOAP headers are defined by adding soap:header elements to your default SOAP 1.1 binding. The soap:header element is an optional child of the input , output , and fault elements of the binding. The SOAP header becomes part of the parent message. A SOAP header is defined by specifying a message and a message part. Each SOAP header can only contain one message part, but you can insert as many SOAP headers as needed. Syntax The syntax for defining a SOAP header is shown in Example 6.3, "SOAP Header Syntax" . The message attribute of soap:header is the qualified name of the message from which the part being inserted into the header is taken. The part attribute is the name of the message part inserted into the SOAP header. Because SOAP headers are always document style, the WSDL message part inserted into the SOAP header must be defined using an element. Together the message and the part attributes fully describe the data to insert into the SOAP header. Example 6.3. 
SOAP Header Syntax As well as the mandatory message and part attributes, soap:header also supports the namespace , the use , and the encodingStyle attributes. These attributes function the same for soap:header as they do for soap:body . Splitting messages between body and header The message part inserted into the SOAP header can be any valid message part from the contract. It can even be a part from the parent message which is being used as the SOAP body. Because it is unlikely that you would want to send information twice in the same message, the SOAP binding provides a means for specifying the message parts that are inserted into the SOAP body. The soap:body element has an optional attribute, parts , that takes a space delimited list of part names. When parts is defined, only the message parts listed are inserted into the SOAP body. You can then insert the remaining parts into the SOAP header. Note When you define a SOAP header using parts of the parent message, Apache CXF automatically fills in the SOAP headers for you. Example Example 6.4, "SOAP 1.1 Binding with a SOAP Header" shows a modified version of the orderWidgets service shown in Example 6.1, "Ordering System Interface" . This version has been modified so that each order has an xsd:base64binary value placed in the SOAP header of the request and response. The SOAP header is defined as being the keyVal part from the widgetKey message. In this case you are responsible for adding the SOAP header to your application logic because it is not part of the input or output message. Example 6.4. SOAP 1.1 Binding with a SOAP Header You can also modify Example 6.4, "SOAP 1.1 Binding with a SOAP Header" so that the header value is a part of the input and output messages.
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> </definitions>", "<binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap:body use=\"literal\"/> </input> <output name=\"bill\"> <soap:body use=\"literal\"/> </output> <fault name=\"sizeFault\"> <soap:body use=\"literal\"/> </fault> </operation> </binding>", "<binding name=\"headwig\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"weave\"> <soap:operation soapAction=\"\" style=\"document\"/> <input name=\"grain\"> <soap:body ... /> <soap:header message=\" QName \" part=\" partName \"/> </input> </binding>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <definitions name=\"widgetOrderForm.wsdl\" targetNamespace=\"http://widgetVendor.com/widgetOrderForm\" xmlns=\"http://schemas.xmlsoap.org/wsdl/\" xmlns:soap=\"http://schemas.xmlsoap.org/wsdl/soap/\" xmlns:tns=\"http://widgetVendor.com/widgetOrderForm\" xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" xmlns:xsd1=\"http://widgetVendor.com/types/widgetTypes\" xmlns:SOAP-ENC=\"http://schemas.xmlsoap.org/soap/encoding/\"> <types> <schema targetNamespace=\"http://widgetVendor.com/types/widgetTypes\" xmlns=\"http://www.w3.org/2001/XMLSchema\" xmlns:wsdl=\"http://schemas.xmlsoap.org/wsdl/\"> <element name=\"keyElem\" type=\"xsd:base64Binary\"/> </schema> </types> <message name=\"widgetOrder\"> <part name=\"numOrdered\" type=\"xsd:int\"/> </message> <message name=\"widgetOrderBill\"> <part name=\"price\" type=\"xsd:float\"/> </message> <message name=\"badSize\"> <part name=\"numInventory\" type=\"xsd:int\"/> </message> <message name=\"widgetKey\"> <part name=\"keyVal\" element=\"xsd1:keyElem\"/> </message> <portType name=\"orderWidgets\"> <operation name=\"placeWidgetOrder\"> <input message=\"tns:widgetOrder\" name=\"order\"/> <output message=\"tns:widgetOrderBill\" name=\"bill\"/> <fault message=\"tns:badSize\" name=\"sizeFault\"/> </operation> </portType> <binding name=\"orderWidgetsBinding\" type=\"tns:orderWidgets\"> <soap:binding style=\"document\" transport=\"http://schemas.xmlsoap.org/soap/http\"/> <operation name=\"placeWidgetOrder\"> <soap:operation soapAction=\"\" style=\"document\"/> <input name=\"order\"> <soap:body use=\"literal\"/> <soap:header message=\"tns:widgetKey\" part=\"keyVal\"/> </input> <output name=\"bill\"> <soap:body use=\"literal\"/> <soap:header message=\"tns:widgetKey\" part=\"keyVal\"/> 
</output> <fault name=\"sizeFault\"> <soap:body use=\"literal\"/> </fault> </operation> </binding> </definitions>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/FUSECXFSOAP11
Chapter 1. Red Hat Ansible Automation Platform installation overview
Chapter 1. Red Hat Ansible Automation Platform installation overview The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform using a number of supported installation scenarios. Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps: Editing the Red Hat Ansible Automation Platform installer inventory file The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment. Running the Red Hat Ansible Automation Platform installer setup script The setup script installs Ansible Automation Platform using the required parameters defined in the inventory file. Verifying automation controller installation After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation controller. Verifying automation hub installation After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation hub. Post-installation steps After successful installation, you can begin using the features of Ansible Automation Platform. Additional resources For more information about the supported installation scenarios, see the Red Hat Ansible Automation Platform Planning Guide . 1.1. Prerequisites You chose and obtained a platform installer from the Red Hat Ansible Automation Platform Product Software . You are installing on a machine that meets the base system requirements. You have updated all of the packages on your RHEL nodes to the latest version. Warning You may experience errors if you do not fully upgrade your RHEL nodes prior to your Ansible Automation Platform installation. You have created a Red Hat Registry Service Account, using the instructions in the Creating Registry Service Accounts guide . Additional resources For more information about obtaining a platform installer or system requirements, refer to the Red Hat Ansible Automation Platform system requirements in the Red Hat Ansible Automation Platform Planning Guide .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-overview
Chapter 21. Signing a kernel and modules for Secure Boot
Chapter 21. Signing a kernel and modules for Secure Boot You can enhance the security of your system by using a signed kernel and signed kernel modules. On UEFI-based build systems where Secure Boot is enabled, you can self-sign a privately built kernel or kernel modules. Furthermore, you can import your public key into a target system where you want to deploy your kernel or kernel modules. If Secure Boot is enabled, all of the following components have to be signed with a private key and authenticated with the corresponding public key: UEFI operating system boot loader The Red Hat Enterprise Linux kernel All kernel modules If any of these components are not signed and authenticated, the system cannot finish the booting process. RHEL 9 includes: Signed boot loaders Signed kernels Signed kernel modules In addition, the signed first-stage boot loader and the signed kernel include embedded Red Hat public keys. These signed executable binaries and embedded keys enable RHEL 9 to install, boot, and run with the Microsoft UEFI Secure Boot Certification Authority keys. These keys are provided by the UEFI firmware on systems that support UEFI Secure Boot. Note Not all UEFI-based systems include support for Secure Boot. The build system, where you build and sign your kernel module, does not need to have UEFI Secure Boot enabled and does not even need to be a UEFI-based system. 21.1. Prerequisites To be able to sign externally built kernel modules, install the utilities from the following packages: Table 21.1. Required utilities Utility Provided by package Used on Purpose efikeygen pesign Build system Generates public and private X.509 key pair openssl openssl Build system Exports the unencrypted private key sign-file kernel-devel Build system Executable file used to sign a kernel module with the private key mokutil mokutil Target system Optional utility used to manually enroll the public key keyctl keyutils Target system Optional utility used to display public keys in the system keyring 21.2. What is UEFI Secure Boot With the Unified Extensible Firmware Interface (UEFI) Secure Boot technology, you can prevent the execution of the kernel-space code that is not signed by a trusted key. The system boot loader is signed with a cryptographic key. The database of public keys in the firmware authorizes the process of signing the key. You can subsequently verify a signature in the -stage boot loader and the kernel. UEFI Secure Boot establishes a chain of trust from the firmware to the signed drivers and kernel modules as follows: An UEFI private key signs, and a public key authenticates the shim first-stage boot loader. A certificate authority (CA) in turn signs the public key. The CA is stored in the firmware database. The shim file contains the Red Hat public key Red Hat Secure Boot (CA key 1) to authenticate the GRUB boot loader and the kernel. The kernel in turn contains public keys to authenticate drivers and modules. Secure Boot is the boot path validation component of the UEFI specification. The specification defines: Programming interface for cryptographically protected UEFI variables in non-volatile storage. Storing the trusted X.509 root certificates in UEFI variables. Validation of UEFI applications such as boot loaders and drivers. Procedures to revoke known-bad certificates and application hashes. UEFI Secure Boot helps in the detection of unauthorized changes but does not : Prevent installation or removal of second-stage boot loaders. Require explicit user confirmation of such changes. 
Stop boot path manipulations. Signatures are verified during booting but, not when the boot loader is installed or updated. If the boot loader or the kernel are not signed by a system trusted key, Secure Boot prevents them from starting. 21.3. UEFI Secure Boot support You can install and run RHEL 9 on systems with enabled UEFI Secure Boot if the kernel and all the loaded drivers are signed with a trusted key. Red Hat provides kernels and drivers that are signed and authenticated by the relevant Red Hat keys. If you want to load externally built kernels or drivers, you must sign them as well. Restrictions imposed by UEFI Secure Boot The system only runs the kernel-mode code after its signature has been properly authenticated. GRUB module loading is disabled because there is no infrastructure for signing and verification of GRUB modules. Allowing module loading would run untrusted code within the security perimeter defined by Secure Boot. Red Hat provides a signed GRUB binary that has all supported modules on RHEL 9. Additional resources Restrictions Imposed by UEFI Secure Boot 21.4. Requirements for authenticating kernel modules with X.509 keys In RHEL 9, when a kernel module is loaded, the kernel checks the signature of the module against the public X.509 keys from the kernel system keyring ( .builtin_trusted_keys ) and the kernel platform keyring ( .platform ). The .platform keyring provides keys from third-party platform providers and custom public keys. The keys from the kernel system .blacklist keyring are excluded from verification. You need to meet certain conditions to load kernel modules on systems with enabled UEFI Secure Boot functionality: If UEFI Secure Boot is enabled or if the module.sig_enforce kernel parameter has been specified: You can only load those signed kernel modules whose signatures were authenticated against keys from the system keyring ( .builtin_trusted_keys ) and the platform keyring ( .platform ). The public key must not be on the system revoked keys keyring ( .blacklist ). If UEFI Secure Boot is disabled and the module.sig_enforce kernel parameter has not been specified: You can load unsigned kernel modules and signed kernel modules without a public key. If the system is not UEFI-based or if UEFI Secure Boot is disabled: Only the keys embedded in the kernel are loaded onto .builtin_trusted_keys and .platform . You have no ability to augment that set of keys without rebuilding the kernel. Table 21.2. Kernel module authentication requirements for loading Module signed Public key found and signature valid UEFI Secure Boot state sig_enforce Module load Kernel tainted Unsigned - Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed No Not enabled Not enabled Succeeds Yes Not enabled Enabled Fails - Enabled - Fails - Signed Yes Not enabled Not enabled Succeeds No Not enabled Enabled Succeeds No Enabled - Succeeds No 21.5. Sources for public keys During boot, the kernel loads X.509 keys from a set of persistent key stores into the following keyrings: The system keyring ( .builtin_trusted_keys ) The .platform keyring The system .blacklist keyring Table 21.3. 
Sources for system keyrings Source of X.509 keys User can add keys UEFI Secure Boot state Keys loaded during boot Embedded in kernel No - .builtin_trusted_keys UEFI db Limited Not enabled No Enabled .platform Embedded in the shim boot loader No Not enabled No Enabled .platform Machine Owner Key (MOK) list Yes Not enabled No Enabled .platform .builtin_trusted_keys A keyring that is built on boot. Provides trusted public keys. root privileges are required to view the keys. .platform A keyring that is built on boot. Provides keys from third-party platform providers and custom public keys. root privileges are required to view the keys. .blacklist A keyring with X.509 keys which have been revoked. A module signed by a key from .blacklist will fail authentication even if your public key is in .builtin_trusted_keys . UEFI Secure Boot db A signature database. Stores keys (hashes) of UEFI applications, UEFI drivers, and boot loaders. The keys can be loaded on the machine. UEFI Secure Boot dbx A revoked signature database. Prevents keys from getting loaded. The revoked keys from this database are added to the .blacklist keyring. 21.6. Generating a public and private key pair To use a custom kernel or custom kernel modules on a Secure Boot-enabled system, you must generate a public and private X.509 key pair. You can use the generated private key to sign the kernel or the kernel modules. You can also validate the signed kernel or kernel modules by adding the corresponding public key to the Machine Owner Key (MOK) for Secure Boot. Warning Apply strong security measures and access policies to guard the contents of your private key. In the wrong hands, the key could be used to compromise any system which is authenticated by the corresponding public key. Procedure Create an X.509 public and private key pair: If you only want to sign custom kernel modules : If you want to sign custom kernel : When the RHEL system is running FIPS mode: Note In FIPS mode, you must use the --token option so that efikeygen finds the default "NSS Certificate DB" token in the PKI database. The public and private keys are now stored in the /etc/pki/pesign/ directory. Important It is a good security practice to sign the kernel and the kernel modules within the validity period of its signing key. However, the sign-file utility does not warn you and the key will be usable in RHEL 9 regardless of the validity dates. Additional resources openssl(1) manual page RHEL Security Guide Enrolling public key on target system by adding the public key to the MOK list 21.7. Example output of system keyrings You can display information about the keys on the system keyrings using the keyctl utility from the keyutils package. Prerequisites You have root permissions. You have installed the keyctl utility from the keyutils package. Example 21.1. Keyrings output The following is a shortened example output of .builtin_trusted_keys , .platform , and .blacklist keyrings from a RHEL 9 system where UEFI Secure Boot is enabled. The .builtin_trusted_keys keyring in the example shows the addition of two keys from the UEFI Secure Boot db keys as well as the Red Hat Secure Boot (CA key 1) , which is embedded in the shim boot loader. Example 21.2. Kernel console output The following example shows the kernel console output. The messages identify the keys with an UEFI Secure Boot related source. These include UEFI Secure Boot db , embedded shim , and MOK list. Additional resources keyctl(1) , dmesg(1) manual pages 21.8. 
Enrolling public key on target system by adding the public key to the MOK list You must authenticate your public key on a system for kernel or kernel module access and enroll it in the platform keyring ( .platform ) of the target system. When RHEL 9 boots on a UEFI-based system with Secure Boot enabled, the kernel imports public keys from the db key database and excludes revoked keys from the dbx database. The Machine Owner Key (MOK) facility allows expanding the UEFI Secure Boot key database. When booting RHEL 9 on UEFI-enabled systems with Secure Boot enabled, keys on the MOK list are added to the platform keyring ( .platform ), along with the keys from the Secure Boot database. The list of MOK keys is stored securely and persistently in the same way, but it is a separate facility from the Secure Boot databases. The MOK facility is supported by shim , MokManager , GRUB , and the mokutil utility that enables secure key management and authentication for UEFI-based systems. Note To get the authentication service of your kernel module on your systems, consider requesting your system vendor to incorporate your public key into the UEFI Secure Boot key database in their factory firmware image. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . Procedure Export your public key to the sb_cert.cer file: Import your public key into the MOK list: Enter a new password for this MOK enrollment request. Reboot the machine. The shim boot loader notices the pending MOK key enrollment request and it launches MokManager.efi to enable you to complete the enrollment from the UEFI console. Choose Enroll MOK , enter the password you previously associated with this request when prompted, and confirm the enrollment. Your public key is added to the MOK list, which is persistent. Once a key is on the MOK list, it will be automatically propagated to the .platform keyring on this and subsequent boots when UEFI Secure Boot is enabled. 21.9. Signing a kernel with the private key You can obtain enhanced security benefits on your system by loading a signed kernel if the UEFI Secure Boot mechanism is enabled. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . You have enrolled your public key on the target system. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have a kernel image in the ELF format available for signing. Procedure On the x64 architecture: Create a signed image: Replace version with the version suffix of your vmlinuz file, and Custom Secure Boot key with the name that you chose earlier. Optional: Check the signatures: Overwrite the unsigned image with the signed image: On the 64-bit ARM architecture: Decompress the vmlinuz file: Create a signed image: Optional: Check the signatures: Compress the vmlinux file: Remove the uncompressed vmlinux file: 21.10. Signing a GRUB build with the private key On a system where the UEFI Secure Boot mechanism is enabled, you can sign a GRUB build with a custom existing private key. You must do this if you are using a custom GRUB build, or if you have removed the Microsoft trust anchor from your system. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . 
You have enrolled your public key on the target system. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have a GRUB EFI binary available for signing. Procedure On the x64 architecture: Create a signed GRUB EFI binary: Replace Custom Secure Boot key with the name that you chose earlier. Optional: Check the signatures: Overwrite the unsigned binary with the signed binary: On the 64-bit ARM architecture: Create a signed GRUB EFI binary: Replace Custom Secure Boot key with the name that you chose earlier. Optional: Check the signatures: Overwrite the unsigned binary with the signed binary: 21.11. Signing kernel modules with the private key You can enhance the security of your system by loading signed kernel modules if the UEFI Secure Boot mechanism is enabled. Your signed kernel module is also loadable on systems where UEFI Secure Boot is disabled or on a non-UEFI system. As a result, you do not need to provide both, a signed and unsigned version of your kernel module. Prerequisites You have generated a public and private key pair and know the validity dates of your public keys. For details, see Generating a public and private key pair . You have enrolled your public key on the target system. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have a kernel module in ELF image format available for signing. Procedure Export your public key to the sb_cert.cer file: Extract the key from the NSS database as a PKCS #12 file: When the command prompts, enter a new password that encrypts the private key. Export the unencrypted private key: Important Keep the unencrypted private key secure. Sign your kernel module. The following command appends the signature directly to the ELF image in your kernel module file: Your kernel module is now ready for loading. Important In RHEL 9, the validity dates of the key pair matter. The key does not expire, but the kernel module must be signed within the validity period of its signing key. The sign-file utility will not warn you of this. For example, a key that is only valid in 2021 can be used to authenticate a kernel module signed in 2021 with that key. However, users cannot use that key to sign a kernel module in 2022. Verification Display information about the kernel module's signature: Check that the signature lists your name as entered during generation. Note The appended signature is not contained in an ELF image section and is not a formal part of the ELF image. Therefore, utilities such as readelf cannot display the signature on your kernel module. Load the module: Remove (unload) the module: Additional resources Displaying information about kernel modules 21.12. Loading signed kernel modules After enrolling your public key in the system keyring ( .builtin_trusted_keys ) and the MOK list, and signing kernel modules with your private key, you can load them using the modprobe command. Prerequisites You have generated the public and private key pair. For details, see Generating a public and private key pair . You have enrolled the public key into the system keyring. For details, see Enrolling public key on target system by adding the public key to the MOK list . You have signed a kernel module with the private key. For details, see Signing kernel modules with the private key . 
Install the kernel-modules-extra package, which creates the /lib/modules/USD(uname -r)/extra/ directory: Procedure Verify that your public keys are on the system keyring: Copy the kernel module into the extra/ directory of the kernel that you want: Update the modular dependency list: Load the kernel module: Optional: To load the module on boot, add it to the /etc/modules-loaded.d/ my_module .conf file: Verification Verify that the module was successfully loaded: Additional resources Managing kernel modules
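Where many externally built modules have to be signed, the sign-file invocation from this chapter can be scripted. The following Python sketch is a convenience wrapper only; the key, certificate, and module directory paths are assumptions, and the documented procedure remains the sign-file and modinfo commands shown above.

# Batch-sign every out-of-tree .ko file in a directory with the same sign-file invocation
# shown in this chapter, then confirm each module now reports a signer via modinfo.
# Key, certificate, and module directory paths are assumptions for illustration.
import os
import pathlib
import subprocess

KERNEL = os.uname().release
SIGN_FILE = f"/usr/src/kernels/{KERNEL}/scripts/sign-file"
PRIV_KEY = "sb_cert.priv"                    # unencrypted private key exported earlier
PUB_CERT = "sb_cert.cer"                     # public certificate exported earlier
MODULE_DIR = pathlib.Path("./my-modules")    # hypothetical directory of built modules

for module in sorted(MODULE_DIR.glob("*.ko")):
    # with no output file, sign-file appends the signature to the module in place
    subprocess.run([SIGN_FILE, "sha256", PRIV_KEY, PUB_CERT, str(module)], check=True)

    info = subprocess.run(["modinfo", str(module)],
                          check=True, capture_output=True, text=True)
    signer = [line for line in info.stdout.splitlines() if line.startswith("signer:")]
    print(module.name, signer[0] if signer else "no signer found")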
[ "dnf install pesign openssl kernel-devel mokutil keyutils", "efikeygen --dbdir /etc/pki/pesign --self-sign --module --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key '", "efikeygen --dbdir /etc/pki/pesign --self-sign --kernel --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key '", "efikeygen --dbdir /etc/pki/pesign --self-sign --kernel --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key ' --token 'NSS FIPS 140-2 Certificate DB'", "keyctl list %:.builtin_trusted_keys 6 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Secure Boot (CA key 1): 4016841644ce3a810408050766e8f8a29 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed ...asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7 keyctl list %:.platform 4 keys in keyring: ...asymmetric: VMware, Inc.: 4ad8da0472073 ...asymmetric: Red Hat Secure Boot CA 5: cc6fafe72 ...asymmetric: Microsoft Windows Production PCA 2011: a929f298e1 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4e0bd82 keyctl list %:.blacklist 4 keys in keyring: ...blacklist: bin:f5ff83a ...blacklist: bin:0dfdbec ...blacklist: bin:38f1d22 ...blacklist: bin:51f831f", "dmesg | egrep 'integrity.*cert' [1.512966] integrity: Loading X.509 certificate: UEFI:db [1.513027] integrity: Loaded X.509 cert 'Microsoft Windows Production PCA 2011: a929023 [1.513028] integrity: Loading X.509 certificate: UEFI:db [1.513057] integrity: Loaded X.509 cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309 [1.513298] integrity: Loading X.509 certificate: UEFI:MokListRT (MOKvar table) [1.513549] integrity: Loaded X.509 cert 'Red Hat Secure Boot CA 5: cc6fa5e72868ba494e93", "certutil -d /etc/pki/pesign -n ' Custom Secure Boot key ' -Lr > sb_cert.cer", "mokutil --import sb_cert.cer", "pesign --certificate ' Custom Secure Boot key ' --in vmlinuz- version --sign --out vmlinuz- version .signed", "pesign --show-signature --in vmlinuz- version .signed", "mv vmlinuz- version .signed vmlinuz- version", "zcat vmlinuz- version > vmlinux- version", "pesign --certificate ' Custom Secure Boot key ' --in vmlinux- version --sign --out vmlinux- version .signed", "pesign --show-signature --in vmlinux- version .signed", "gzip --to-stdout vmlinux- version .signed > vmlinuz- version", "rm vmlinux- version *", "pesign --in /boot/efi/EFI/redhat/grubx64.efi --out /boot/efi/EFI/redhat/grubx64.efi.signed --certificate ' Custom Secure Boot key ' --sign", "pesign --in /boot/efi/EFI/redhat/grubx64.efi.signed --show-signature", "mv /boot/efi/EFI/redhat/grubx64.efi.signed /boot/efi/EFI/redhat/grubx64.efi", "pesign --in /boot/efi/EFI/redhat/grubaa64.efi --out /boot/efi/EFI/redhat/grubaa64.efi.signed --certificate ' Custom Secure Boot key ' --sign", "pesign --in /boot/efi/EFI/redhat/grubaa64.efi.signed --show-signature", "mv /boot/efi/EFI/redhat/grubaa64.efi.signed /boot/efi/EFI/redhat/grubaa64.efi", "certutil -d /etc/pki/pesign -n ' Custom Secure Boot key ' -Lr > sb_cert.cer", "pk12util -o sb_cert.p12 -n ' Custom Secure Boot key ' -d /etc/pki/pesign", "openssl pkcs12 -in sb_cert.p12 -out sb_cert.priv -nocerts -noenc", "/usr/src/kernels/USD(uname -r)/scripts/sign-file sha256 sb_cert.priv sb_cert.cer my_module .ko", "modinfo my_module .ko | grep 
signer signer: Your Name Key", "insmod my_module .ko", "modprobe -r my_module .ko", "dnf -y install kernel-modules-extra", "keyctl list %:.platform", "cp my_module .ko /lib/modules/USD(uname -r)/extra/", "depmod -a", "modprobe -v my_module", "echo \" my_module \" > /etc/modules-load.d/ my_module .conf", "lsmod | grep my_module" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/signing-a-kernel-and-modules-for-secure-boot_managing-monitoring-and-updating-the-kernel
Chapter 3. The Ceph client components
Chapter 3. The Ceph client components Ceph clients differ materially in how they present data storage interfaces. A Ceph block device presents block storage that mounts just like a physical storage drive. A Ceph gateway presents an object storage service with S3-compliant and Swift-compliant RESTful interfaces with its own user management. However, all Ceph clients use the Reliable Autonomic Distributed Object Store (RADOS) protocol to interact with the Red Hat Ceph Storage cluster. They all have the same basic needs: The Ceph configuration file, and the Ceph monitor address. The pool name. The user name and the path to the secret key. Ceph clients tend to follow some similar patterns, such as object-watch-notify and striping. The following sections describe a little bit more about RADOS, librados and common patterns used in Ceph clients. 3.1. Prerequisites A basic understanding of distributed storage systems. 3.2. Ceph client native protocol Modern applications need a simple object storage interface with asynchronous communication capability. The Ceph Storage Cluster provides a simple object storage interface with asynchronous communication capability. The interface provides direct, parallel access to objects throughout the cluster. Pool Operations Snapshots Read/Write Objects Create or Remove Entire Object or Byte Range Append or Truncate Create/Set/Get/Remove XATTRs Create/Set/Get/Remove Key/Value Pairs Compound operations and dual-ack semantics 3.3. Ceph client object watch and notify A Ceph client can register a persistent interest with an object and keep a session to the primary OSD open. The client can send a notification message and payload to all watchers and receive notification when the watchers receive the notification. This enables a client to use any object as a synchronization/communication channel. 3.4. Ceph client Mandatory Exclusive Locks Mandatory Exclusive Locks is a feature that locks an RBD to a single client, if multiple mounts are in place. This helps address the write conflict situation when multiple mounted clients try to write to the same object. This feature is built on object-watch-notify, explained in the previous section. So, when writing, if one client first establishes an exclusive lock on an object, another mounted client will first check to see if a peer has placed a lock on the object before writing. With this feature enabled, only one client can modify an RBD device at a time, especially when changing internal RBD structures during operations like snapshot create/delete . It also provides some protection for failed clients. For instance, if a virtual machine seems to be unresponsive and you start a copy of it with the same disk elsewhere, the first one will be blacklisted in Ceph and unable to corrupt the new one. Mandatory Exclusive Locks is not enabled by default. You have to explicitly enable it with the --image-feature parameter when creating an image. Example Here, the numeral 5 is a summation of 1 and 4 where 1 enables layering support and 4 enables exclusive locking support. So, the above command will create a 100 GB rbd image, enable layering and exclusive lock. Mandatory Exclusive Locks is also a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. Mandatory Exclusive Locks also does some groundwork for mirroring. 3.5. Ceph client object map Object map is a feature that tracks the presence of backing RADOS objects when a client writes to an rbd image.
When a write occurs, that write is translated to an offset within a backing RADOS object. When the object map feature is enabled, the presence of these RADOS objects is tracked. So, we can know if the objects actually exist. Object map is kept in-memory on the librbd client so it can avoid querying the OSDs for objects that it knows don't exist. In other words, object map is an index of the objects that actually exists. Object map is beneficial for certain operations, viz: Resize Export Copy Flatten Delete Read A shrink resize operation is like a partial delete where the trailing objects are deleted. An export operation knows which objects are to be requested from RADOS. A copy operation knows which objects exist and need to be copied. It does not have to iterate over potentially hundreds and thousands of possible objects. A flatten operation performs a copy-up for all parent objects to the clone so that the clone can be detached from the parent i.e, the reference from the child clone to the parent snapshot can be removed. So, instead of all potential objects, copy-up is done only for the objects that exist. A delete operation deletes only the objects that exist in the image. A read operation skips the read for objects it knows doesn't exist. So, for operations like resize, shrinking only, exporting, copying, flattening, and deleting, these operations would need to issue an operation for all potentially affected RADOS objects, whether they exist or not. With object map enabled, if the object doesn't exist, the operation need not be issued. For example, if we have a 1 TB sparse RBD image, it can have hundreds and thousands of backing RADOS objects. A delete operation without object map enabled would need to issue a remove object operation for each potential object in the image. But if object map is enabled, it only needs to issue remove object operations for the objects that exist. Object map is valuable against clones that don't have actual objects but gets object from parent. When there is a cloned image, the clone initially has no objects and all reads are redirected to the parent. So, object map can improve reads as without the object map, first it needs to issue a read operation to the OSD for the clone, when that fails, it issues another read to the parent - with object map enabled. It skips the read for objects it knows doesn't exist. Object map is not enabled by default. You have to explicitly enable it with --image-features parameter when creating an image. Also, Mandatory Exclusive Locks is a prerequisite for object map . Without enabling exclusive locking support, object map support cannot be enabled. To enable object map support when creating a image, execute: Here, the numeral 13 is a summation of 1 , 4 and 8 where 1 enables layering support, 4 enables exclusive locking support and 8 enables object map support. So, the above command will create a 100 GB rbd image, enable layering, exclusive lock and object map. 3.6. Ceph client data stripping Storage devices have throughput limitations, which impact performance and scalability. So storage systems often support striping- storing sequential pieces of information across multiple storage devices- to increase throughput and performance. The most common form of data striping comes from RAID. The RAID type most similar to Ceph's striping is RAID 0, or a 'striped volume.' Ceph's striping offers the throughput of RAID 0 striping, the reliability of n-way RAID mirroring and faster recovery. 
Ceph provides three types of clients: Ceph Block Device, Ceph Filesystem, and Ceph Object Storage. A Ceph Client converts the data it presents to its users, such as a block device image, RESTful objects, or CephFS filesystem directories, into objects for storage in the Ceph Storage Cluster. Tip The objects Ceph stores in the Ceph Storage Cluster are not striped. Ceph Object Storage, Ceph Block Device, and the Ceph Filesystem stripe their data over multiple Ceph Storage Cluster objects. Ceph Clients that write directly to the Ceph storage cluster using librados must perform the striping and parallel I/O themselves to obtain these benefits. The simplest Ceph striping format involves a stripe count of 1 object. Ceph Clients write stripe units to a Ceph Storage Cluster object until the object is at its maximum capacity, and then create another object for additional stripes of data. The simplest form of striping may be sufficient for small block device images, or for S3 or Swift objects. However, this simple form doesn't take maximum advantage of Ceph's ability to distribute data across placement groups, and consequently doesn't improve performance very much. The following diagram depicts the simplest form of striping: If you anticipate large image sizes or large S3 or Swift objects, for example video, you may see considerable read/write performance improvements by striping client data over multiple objects within an object set. Significant write performance improvement occurs when the client writes the stripe units to their corresponding objects in parallel. Since objects get mapped to different placement groups and further mapped to different OSDs, each write occurs in parallel at the maximum write speed. A write to a single disk would be limited by the head movement, for example 6 ms per seek, and the bandwidth of that one device, for example 100 MB/s. By spreading that write over multiple objects, which map to different placement groups and OSDs, Ceph can reduce the number of seeks per drive and combine the throughput of multiple drives to achieve much faster write or read speeds. Note Striping is independent of object replicas. Since CRUSH replicates objects across OSDs, stripes get replicated automatically. In the following diagram, client data gets striped across an object set ( object set 1 in the following diagram) consisting of 4 objects, where the first stripe unit is stripe unit 0 in object 0 , and the fourth stripe unit is stripe unit 3 in object 3 . After writing the fourth stripe, the client determines if the object set is full. If the object set is not full, the client begins writing a stripe to the first object again, see object 0 in the following diagram. If the object set is full, the client creates a new object set, see object set 2 in the following diagram, and begins writing to the first stripe, with a stripe unit of 16, in the first object in the new object set, see object 4 in the diagram below. Three important variables determine how Ceph stripes data: Object Size: Objects in the Ceph Storage Cluster have a maximum configurable size, such as 2 MB or 4 MB. The object size should be large enough to accommodate many stripe units, and should be a multiple of the stripe unit. IMPORTANT: Red Hat recommends a safe maximum value of 16 MB. Stripe Width: Stripes have a configurable unit size, for example 64 KB. The Ceph Client divides the data it will write to objects into equally sized stripe units, except for the last stripe unit. 
A stripe width should be a fraction of the Object Size so that an object may contain many stripe units. Stripe Count: The Ceph Client writes a sequence of stripe units over a series of objects determined by the stripe count. The series of objects is called an object set. After the Ceph Client writes to the last object in the object set, it returns to the first object in the object set. Important Test the performance of your striping configuration before putting your cluster into production. You CANNOT change these striping parameters after you stripe the data and write it to objects. Once the Ceph Client has striped data to stripe units and mapped the stripe units to objects, Ceph's CRUSH algorithm maps the objects to placement groups, and the placement groups to Ceph OSD Daemons before the objects are stored as files on a storage disk. Note Since a client writes to a single pool, all data striped into objects gets mapped to placement groups in the same pool. Consequently, they use the same CRUSH map and the same access controls.
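The striping variables described above can be set per image when you create an RBD image. The following is a minimal sketch only, using hypothetical pool and image names; the option names follow the upstream rbd command-line interface and may differ between releases, so verify them with rbd help create before use.

# Sketch: create a 100 GB image whose data is striped 64 KB at a time
# across an object set of 16 objects, with a 4 MB object size.
# The pool name (mypool) and image name (stripedimage) are placeholders.
rbd create mypool/stripedimage --size 102400 \
    --object-size 4M --stripe-unit 64K --stripe-count 16

In this sketch the stripe unit (64 KB) divides evenly into the object size (4 MB), matching the guidance that the object size should be a multiple of the stripe unit.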
[ "rbd create --size 102400 mypool/myimage --image-feature 5", "rbd -p mypool create myimage --size 102400 --image-features 13" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/architecture_guide/the-ceph-client-components
Appendix A. Component Versions
Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 7.4 release. Table A.1. Component Versions Component Version Kernel 3.10.0-693 QLogic qla2xxx driver 8.07.00.38.07.4-k1 QLogic qla4xxx driver 5.04.00.00.07.02-k0 Emulex lpfc driver 0:11.2.0.6 iSCSI initiator utils iscsi-initiator-utils-6.2.0.874-4 DM-Multipath device-mapper-multipath-0.4.9-111 LVM lvm2-2.02.171-8
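To compare a running system against the versions listed above, you can query the installed packages directly. This is a quick sketch only; the package names shown are the usual RHEL 7 names and might differ if your system uses custom builds.

# Report the kernel and key storage-related package versions on the local system.
uname -r
rpm -q kernel device-mapper-multipath lvm2 iscsi-initiator-utils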
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/component_versions
Hardware Considerations for Implementing SR-IOV
Hardware Considerations for Implementing SR-IOV Red Hat Virtualization 4.4 Hardware considerations for implementing SR-IOV with Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document outlines hardware considerations for implementing SR-IOV with Red Hat Enterprise Linux, and for device assignment with Red Hat Virtualization.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/hardware_considerations_for_implementing_sr-iov/index
Chapter 1. About the Red Hat OpenStack Platform framework for upgrades
Chapter 1. About the Red Hat OpenStack Platform framework for upgrades The Red Hat OpenStack Platform (RHOSP) framework for upgrades is a workflow to upgrade your RHOSP environment from one long life version to the next long life version. This workflow is an in-place solution and the upgrade occurs within your existing environment. 1.1. Upgrade framework for long life versions You can use the Red Hat OpenStack Platform (RHOSP) upgrade framework to perform an in-place upgrade through multiple versions of the overcloud. The goal is to provide you with an opportunity to remain on certain OpenStack versions that are considered long life versions and upgrade when the next long life version is available. The Red Hat OpenStack Platform upgrade process also upgrades the version of Red Hat Enterprise Linux (RHEL) on your nodes. This guide provides an upgrade framework through the following versions: Current Version Target Version Red Hat OpenStack Platform 13 latest Red Hat OpenStack Platform 16.2 latest 1.2. Lifecycle support for long life versions For detailed support dates and information on the lifecycle support for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Life Cycle . 1.3. Upgrade paths for long life releases Familiarize yourself with the possible update and upgrade paths before you begin an update or an upgrade. Note You can view your current RHOSP and RHEL versions in the /etc/rhosp-release and /etc/redhat-release files, as shown in the example after Table 1.3. Table 1.1. Updates version path Current version Target version RHOSP 10.0.x on RHEL 7.x RHOSP 10.0 latest on RHEL 7.7 latest RHOSP 13.0.x on RHEL 7.x RHOSP 13.0 latest on RHEL 7.9 latest RHOSP 16.1.x on RHEL 8.2 RHOSP 16.1 latest on RHEL 8.2 latest RHOSP 16.1.x on RHEL 8.2 RHOSP 16.2 latest on RHEL 8.4 latest RHOSP 16.2.x on RHEL 8.4 RHOSP 16.2 latest on RHEL 8.4 latest For more information, see Keeping Red Hat OpenStack Platform Updated . Table 1.2. Upgrades version path Current version Target version RHOSP 10 on RHEL 7.7 RHOSP 13 latest on RHEL 7.9 latest RHOSP 13 on RHEL 7.9 RHOSP 16.1 latest on RHEL 8.2 latest RHOSP 13 on RHEL 7.9 RHOSP 16.2 latest on RHEL 8.4 latest Red Hat provides two options for upgrading your environment to the next long life release: In-place upgrade Perform an upgrade of the services in your existing environment. This guide primarily focuses on this option. Parallel migration Create a new Red Hat OpenStack Platform 16.2 environment and migrate your workloads from your current environment to the new environment. For more information about Red Hat OpenStack Platform parallel migration, contact Red Hat Global Professional Services. Important The durations in this table are minimal estimates based on internal testing and might not apply to all production environments. For example, if your hardware has low specifications or an extended boot period, allow for more time with these durations. To accurately gauge the upgrade duration for each task, perform these procedures in a test environment with hardware similar to your production environment. Table 1.3. Impact and duration of upgrade paths In-place upgrade Parallel migration Upgrade duration for undercloud Estimated duration for each major action includes the following: 30 minutes for Leapp upgrade command 30 minutes for Leapp reboot 40 minutes for director upgrade None. You are creating a new undercloud in addition to your existing undercloud. 
Upgrade duration for overcloud control plane Estimates for each Controller node: 60 minutes for Leapp upgrade and reboot 60 minutes for service upgrade None. You are creating a new control plane in addition to your existing control plane. Outage duration for control plane The duration of the service upgrade of the bootstrap Controller node, which is approximately 60 minutes. None. Both overclouds are operational during the workload migration. Consequences of control plane outage You cannot perform OpenStack operations during the outage. No outage. Upgrade duration for overcloud data plane Estimates for each Compute node and Ceph Storage node: 60 minutes for Leapp upgrade and reboot 30 minutes for service upgrade None. You are creating a new data plane in addition to your existing data plane. Outage duration for data plane The outage is minimal due to workload migration from node to node. The outage is minimal due to workload migration from overcloud to overcloud. Additional hardware requirements No additional hardware is required. Additional hardware is required to create a new undercloud and overcloud.
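As noted in section 1.3, you can confirm the versions you are starting from before you choose an upgrade path. The following sketch assumes the rhosp-release package is installed on the node that you check; if the /etc/rhosp-release file is missing, the node might not be a RHOSP node.

# Show the current RHOSP and RHEL versions on this node.
cat /etc/rhosp-release
cat /etc/redhat-release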
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/about-the-red-hat-openstack-platform-framework-for-upgrades
function::substr
function::substr Name function::substr - Returns a substring. Synopsis Arguments str The string to take a substring from start Starting position. 0 = start of the string. length Length of string to return. General Syntax substr:string (str:string, start:long, length:long) Description Returns the substring of the given string, up to the given length, starting at the given start position.
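A quick way to see the function in action is a one-line script run with the stap command. This is an illustrative sketch only and assumes SystemTap is installed on the host.

# Prints the first 9 characters of the string, that is, "SystemTap".
stap -e 'probe begin { printf("%s\n", substr("SystemTap example", 0, 9)); exit() }'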
[ "function substr:string(str:string,start:long,length:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-substr
E.2.12. /proc/ioports
E.2.12. /proc/ioports The output of /proc/ioports provides a list of currently registered port regions used for input or output communication with a device. This file can be quite long. The following is a partial listing: The first column gives the I/O port address range reserved for the device listed in the second column.
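Because the file can be long, it is often easier to filter it for the device you care about. The following sketch uses grep; the device names in the output depend on the hardware and drivers present on your system.

# Show only the I/O port ranges registered by IDE controllers, if any.
grep -i ide /proc/ioports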
[ "0000-001f : dma1 0020-003f : pic1 0040-005f : timer 0060-006f : keyboard 0070-007f : rtc 0080-008f : dma page reg 00a0-00bf : pic2 00c0-00df : dma2 00f0-00ff : fpu 0170-0177 : ide1 01f0-01f7 : ide0 02f8-02ff : serial(auto) 0376-0376 : ide1 03c0-03df : vga+ 03f6-03f6 : ide0 03f8-03ff : serial(auto) 0cf8-0cff : PCI conf1 d000-dfff : PCI Bus #01 e000-e00f : VIA Technologies, Inc. Bus Master IDE e000-e007 : ide0 e008-e00f : ide1 e800-e87f : Digital Equipment Corporation DECchip 21140 [FasterNet] e800-e87f : tulip" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-ioports
Chapter 6. PodSecurityPolicySubjectReview [security.openshift.io/v1]
Chapter 6. PodSecurityPolicySubjectReview [security.openshift.io/v1] Description PodSecurityPolicySubjectReview checks whether a particular user/SA tuple can create the PodTemplateSpec. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicySubjectReviewSpec defines specification for PodSecurityPolicySubjectReview status object PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. 6.1.1. .spec Description PodSecurityPolicySubjectReviewSpec defines specification for PodSecurityPolicySubjectReview Type object Required template Property Type Description groups array (string) groups is the groups you're testing for. template PodTemplateSpec template is the PodTemplateSpec to check. If template.spec.serviceAccountName is empty it will not be defaulted. If it is non-empty, it will be checked. user string user is the user you're testing for. If you specify "user" but not "group", then it is interpreted as "What if user were not a member of any groups?" If user and groups are empty, then the check is performed using only the serviceAccountName in the template. 6.1.2. .status Description PodSecurityPolicySubjectReviewStatus contains information/status for PodSecurityPolicySubjectReview. Type object Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy. A nil value indicates that it was denied. reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 6.2. API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicysubjectreviews POST : create a PodSecurityPolicySubjectReview 6.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicysubjectreviews Table 6.1. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create a PodSecurityPolicySubjectReview Table 6.3. Body parameters Parameter Type Description body PodSecurityPolicySubjectReview schema Table 6.4. HTTP responses HTTP code Response body 200 - OK PodSecurityPolicySubjectReview schema 201 - Created PodSecurityPolicySubjectReview schema 202 - Accepted PodSecurityPolicySubjectReview schema 401 - Unauthorized Empty
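The following is a minimal sketch of a request to this endpoint using the oc client. The namespace, user name, group, and image below are placeholders, not values from this reference; the review is not persisted, and the status field of the returned object reports whether the pod template would be allowed.

# Submit a PodSecurityPolicySubjectReview and print the server's response.
# All names (my-project, alice, the image) are hypothetical examples.
oc create -n my-project -o yaml -f - <<'EOF'
apiVersion: security.openshift.io/v1
kind: PodSecurityPolicySubjectReview
spec:
  user: alice
  groups:
  - system:authenticated
  template:
    spec:
      containers:
      - name: test
        image: registry.example.com/examples/test:latest
EOF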
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_apis/podsecuritypolicysubjectreview-security-openshift-io-v1
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) AWS clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process for your environment based on your requirement: Deploy using dynamic storage devices Deploy standalone Multicloud Object Gateway component
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_amazon_web_services/preface-aws
Pipelines
Pipelines OpenShift Container Platform 4.14 A cloud-native continuous integration and continuous delivery solution based on Kubernetes resources Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html-single/pipelines/index
Chapter 5. Installing a cluster on OpenStack in a restricted network
Chapter 5. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.15, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.15 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 5.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 5.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. 
Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 5.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 5.7. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 
1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 5.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.15 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. 
For example: Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 5.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 5.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.9.2. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 5.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 
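Before you create the floating IP addresses described in the next section, it can help to confirm which external networks are available to your project. This is an optional sketch; the command follows the standard OpenStack client, and the network names in the output are site specific.

# List external (provider) networks; use one of these names as <external_network>.
openstack network list --external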
5.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> <application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc commands. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 5.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 5.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 5.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.15. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.17. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
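After you disable the default catalog sources as described in section 5.15, you can confirm that only the catalog sources you mirrored remain. This is a verification sketch only; the exact names in the output depend on the catalogs that you created for your restricted network.

# List the catalog sources that remain after disabling the defaults.
oc get catalogsource -n openshift-marketplace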
[ "openstack role add --user <user> --project <project> swiftoperator", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "file <name_of_downloaded_file>", "openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}", "./openshift-install create install-config --dir <installation_directory> 1", "platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API 
<cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_openstack/installing-openstack-installer-restricted
Deploy Red Hat Quay - High Availability
Deploy Red Hat Quay - High Availability Red Hat Quay 3.12 Deploy Red Hat Quay HA Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/deploy_red_hat_quay_-_high_availability/index
Chapter 9. Removing the kubeadmin user
Chapter 9. Removing the kubeadmin user 9.1. The kubeadmin user OpenShift Container Platform creates a cluster administrator, kubeadmin , after the installation process completes. This user has the cluster-admin role automatically applied and is treated as the root user for the cluster. The password is dynamically generated and unique to your OpenShift Container Platform environment. After installation completes, the password is provided in the installation program's output. For example: INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> 9.2. Removing the kubeadmin user After you define an identity provider and create a new cluster-admin user, you can remove the kubeadmin user to improve cluster security. Warning If you follow this procedure before another user is a cluster-admin , then OpenShift Container Platform must be reinstalled. It is not possible to undo this command. Prerequisites You must have configured at least one identity provider. You must have added the cluster-admin role to a user. You must be logged in as an administrator. Procedure Remove the kubeadmin secrets: USD oc delete secrets kubeadmin -n kube-system
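Before removing kubeadmin , confirm that at least one identity-provider user already holds the cluster-admin role. As a minimal sketch, with <username> as an illustrative placeholder for an existing identity-provider user, you can bind the role and then log in as that user before deleting the secret: USD oc adm policy add-cluster-role-to-user cluster-admin <username>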
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>", "oc delete secrets kubeadmin -n kube-system" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/removing-kubeadmin
Chapter 4. Inventory File Importing
Chapter 4. Inventory File Importing Automation controller enables you to choose an inventory file from source control, rather than creating one from scratch. This function is the same as for custom inventory scripts, except that the contents are obtained from source control instead of editing their contents in a browser. This means that the files are non-editable, and as inventories are updated at the source, the inventories within the projects are also updated accordingly, including the group_vars and host_vars files or directory associated with them. SCM types can consume both inventory files and scripts. Both inventory files and custom inventory types use scripts. Imported hosts have a description of imported by default. This can be overridden by setting the _awx_description variable on a given host. For example, if importing from a sourced .ini file, you can add the following host variables: [main] 127.0.0.1 _awx_description="my host 1" 127.0.0.2 _awx_description="my host 2" Similarly, group descriptions also default to imported , but can also be overridden by _awx_description . To use old inventory scripts in source control, see Export old inventory scripts in the Automation controller User Guide . 4.1. Custom Dynamic Inventory Scripts A custom dynamic inventory script stored in version control can be imported and run. This makes it much easier to make changes to an inventory script. Rather than having to copy and paste a script into automation controller, it is pulled directly from source control and then executed. The script must handle any credentials required for its task. You are responsible for installing any Python libraries required by the script. (Custom dynamic inventory scripts have the same requirement.) This applies to both user-defined inventory source scripts and SCM sources as they are both exposed to Ansible virtualenv requirements related to playbooks. You can specify environment variables when you edit the SCM inventory source. For some scripts, this is sufficient. However, this is not a secure way to store secret information that gives access to cloud providers or inventory. A better way is to create a new credential type for the inventory script you are going to use. The credential type must specify all the necessary types of inputs. Then, when you create a credential of this type, the secrets are stored in an encrypted form. If you apply that credential to the inventory source, the script has access to those inputs. For more information, see Custom Credential Types in the Automation controller User Guide. 4.2. SCM Inventory Source Fields The source fields used are: source_project : the project to use. source_path : the relative path inside the project indicating a directory or a file. If left blank, "" is still a relative path indicating the root directory of the project. source_vars : if set on a "file" type inventory source then they are passed to the environment variables when running. Additionally: An update of the project automatically triggers an inventory update where it is used. An update of the project is scheduled immediately after creation of the inventory source. Neither inventory nor project updates are blocked while a related job is running. In cases where you have a large project (around 10 GB), disk space on /tmp can be an issue. You can specify a location manually in the automation controller UI from the Create Inventory Source page. Refer to Adding a source for instructions on creating an inventory source. 
When you update a project, refresh the listing to use the latest SCM information. If no inventory sources use a project as an SCM inventory source, then the inventory listing might not be refreshed on update. For inventories with SCM sources, the Job Details page for inventory updates displays a status indicator for the project update and the name of the project. The status indicator links to the project update job. The project name links to the project. You can perform an inventory update while a related job is running. 4.2.1. Supported File Syntax Automation controller uses the ansible-inventory module from Ansible to process inventory files, and supports all valid inventory syntax that automation controller requires. Important You do not need to write inventory scripts in Python. You can enter any executable file in the source field and must run chmod +x for that file and check it into Git. The following is a working example of JSON output that automation controller can read for the import: { "_meta": { "hostvars": { "host1": { "fly_rod": true } } }, "all": { "children": [ "groupA", "ungrouped" ] }, "groupA": { "hosts": [ "host1", "host10", "host11", "host12", "host13", "host14", "host15", "host16", "host17", "host18", "host19", "host2", "host20", "host21", "host22", "host23", "host24", "host25", "host3", "host4", "host5", "host6", "host7", "host8", "host9" ] } } Additional resources For examples of inventory files, see test-playbooks/inventories . For an example of an inventory script inside of that, see inventories/changes.py . For information about how to implement the inventory script, see the support article, How to migrate inventory scripts from Red Hat Ansible tower to Red Hat Ansible Automation Platform? .
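To illustrate the executable-file note above, the following is a minimal sketch of a shell-based inventory source. It is not taken from the product documentation: the host, group, and variable names are illustrative placeholders, and the script simply prints a static JSON inventory in the supported format. Because the output already contains a populated _meta.hostvars section, Ansible does not need to call the script separately for each host. Mark the file executable with chmod +x and commit it to the project repository so that it can be selected through the source path of an SCM inventory source.

#!/usr/bin/env bash
# Minimal dynamic inventory sketch with illustrative host and group names.
# Printing the full inventory, including _meta.hostvars, is enough for an import.
cat <<'EOF'
{
  "_meta": { "hostvars": { "host1": { "fly_rod": true } } },
  "all": { "children": [ "groupA", "ungrouped" ] },
  "groupA": { "hosts": [ "host1", "host2" ] }
}
EOF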
[ "[main] 127.0.0.1 _awx_description=\"my host 1\" 127.0.0.2 _awx_description=\"my host 2\"", "{ \"_meta\": { \"hostvars\": { \"host1\": { \"fly_rod\": true } } }, \"all\": { \"children\": [ \"groupA\", \"ungrouped\" ] }, \"groupA\": { \"hosts\": [ \"host1\", \"host10\", \"host11\", \"host12\", \"host13\", \"host14\", \"host15\", \"host16\", \"host17\", \"host18\", \"host19\", \"host2\", \"host20\", \"host21\", \"host22\", \"host23\", \"host24\", \"host25\", \"host3\", \"host4\", \"host5\", \"host6\", \"host7\", \"host8\", \"host9\" ] } }" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_administration_guide/assembly-inventory-file-importing
Chapter 3. Configuring Dev Spaces
Chapter 3. Configuring Dev Spaces This section describes configuration methods and options for Red Hat OpenShift Dev Spaces. 3.1. Understanding the CheCluster Custom Resource A default deployment of OpenShift Dev Spaces consists of a CheCluster Custom Resource parameterized by the Red Hat OpenShift Dev Spaces Operator. The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: devWorkspace , cheServer , pluginRegistry , devfileRegistry , dashboard and imagePuller . The Red Hat OpenShift Dev Spaces Operator translates the CheCluster Custom Resource into a config map usable by each component of the OpenShift Dev Spaces installation. The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly. Example 3.1. Configuring the main properties of the OpenShift Dev Spaces server component Apply the CheCluster Custom Resource YAML file with suitable modifications in the cheServer component section. The Operator generates the che ConfigMap . OpenShift detects changes in the ConfigMap and triggers a restart of the OpenShift Dev Spaces Pod. Additional resources Understanding Operators " Understanding Custom Resources " 3.1.1. Using dsc to configure the CheCluster Custom Resource during installation To deploy OpenShift Dev Spaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of OpenShift Dev Spaces. Otherwise, the OpenShift Dev Spaces deployment uses the default configuration parameterized by the Operator. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . dsc . See: Section 1.2, "Installing the dsc management tool" . Procedure Create a che-operator-cr-patch.yaml YAML file that contains the subset of the CheCluster Custom Resource to configure: spec: <component> : <property_to_configure> : <value> Deploy OpenShift Dev Spaces and apply the changes described in che-operator-cr-patch.yaml file: Verification Verify the value of the configured property: Additional resources Section 3.1.3, " CheCluster Custom Resource fields reference" . Section 3.3.2, "Advanced configuration options for Dev Spaces server" . 3.1.2. Using the CLI to configure the CheCluster Custom Resource To configure a running instance of OpenShift Dev Spaces, edit the CheCluster Custom Resource YAML file. Prerequisites An instance of OpenShift Dev Spaces on OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Edit the CheCluster Custom Resource on the cluster: Save and close the file to apply the changes. Verification Verify the value of the configured property: Additional resources Section 3.1.3, " CheCluster Custom Resource fields reference" . Section 3.3.2, "Advanced configuration options for Dev Spaces server" . 3.1.3. CheCluster Custom Resource fields reference This section describes all fields available to customize the CheCluster Custom Resource. Example 3.2, "A minimal CheCluster Custom Resource example." Table 3.1, "Development environment configuration options." Table 3.2, " defaultNamespace options." Table 3.3, " defaultPlugins options." Table 3.4, " gatewayContainer options." Table 3.5, " storage options." 
Table 3.6, " per-user PVC strategy options." Table 3.7, " per-workspace PVC strategy options." Table 3.8, " trustedCerts options." Table 3.9, " containerBuildConfiguration options." Table 3.10, "OpenShift Dev Spaces components configuration." Table 3.11, "General configuration settings related to the OpenShift Dev Spaces server component." Table 3.12, " proxy options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.13, "Configuration settings related to the Plug-in registry component used by the OpenShift Dev Spaces installation." Table 3.14, " externalPluginRegistries options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.15, "Configuration settings related to the Devfile registry component used by the OpenShift Dev Spaces installation." Table 3.16, " externalDevfileRegistries options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.17, "Configuration settings related to the Dashboard component used by the OpenShift Dev Spaces installation." Table 3.18, " headerMessage options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.19, "Kubernetes Image Puller component configuration." Table 3.20, "OpenShift Dev Spaces server metrics component configuration." Table 3.21, "Configuration settings that allows users to work with remote Git repositories." Table 3.22, " github options." Table 3.23, " gitlab options." Table 3.24, " bitbucket options." Table 3.25, " azure options." Table 3.26, "Networking, OpenShift Dev Spaces authentication and TLS configuration." Table 3.27, " auth options." Table 3.28, " gateway options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.29, "Configuration of an alternative registry that stores OpenShift Dev Spaces images." Table 3.36, " CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation" Example 3.2. A minimal CheCluster Custom Resource example. apiVersion: org.eclipse.che/v2 kind: CheCluster metadata: name: devspaces namespace: openshift-devspaces spec: components: {} devEnvironments: {} networking: {} Table 3.1. Development environment configuration options. Property Description Default containerBuildConfiguration Container build configuration. defaultComponents Default components applied to DevWorkspaces. These default components are meant to be used when a Devfile, that does not contain any components. defaultEditor The default editor to workspace create with. It could be a plugin ID or a URI. The plugin ID must have publisher/name/version format. The URI must start from http:// or https:// . defaultNamespace User's default namespace. { "autoProvision": true, "template": "<username>-che"} defaultPlugins Default plug-ins applied to DevWorkspaces. 
deploymentStrategy DeploymentStrategy defines the deployment strategy to use to replace existing workspace pods with new ones. The available deployment stragies are Recreate and RollingUpdate . With the Recreate deployment strategy, the existing workspace pod is killed before the new one is created. With the RollingUpdate deployment strategy, a new workspace pod is created and the existing workspace pod is deleted only when the new workspace pod is in a ready state. If not specified, the default Recreate deployment strategy is used. disableContainerBuildCapabilities Disables the container build capabilities. When set to false (the default value), the devEnvironments.security.containerSecurityContext field is ignored, and the following container SecurityContext is applied: containerSecurityContext: allowPrivilegeEscalation: true capabilities: add: - SETGID - SETUID gatewayContainer GatewayContainer configuration. ignoredUnrecoverableEvents IgnoredUnrecoverableEvents defines a list of Kubernetes event names that should be ignored when deciding to fail a workspace that is starting. This option should be used if a transient cluster issue is triggering false-positives (for example, if the cluster occasionally encounters FailedScheduling events). Events listed here will not trigger workspace failures. imagePullPolicy ImagePullPolicy defines the imagePullPolicy used for containers in a DevWorkspace. maxNumberOfRunningWorkspacesPerUser The maximum number of running workspaces per user. The value, -1, allows users to run an unlimited number of workspaces. maxNumberOfWorkspacesPerUser Total number of workspaces, both stopped and running, that a user can keep. The value, -1, allows users to keep an unlimited number of workspaces. -1 nodeSelector The node selector limits the nodes that can run the workspace pods. persistUserHome PersistUserHome defines configuration options for persisting the user home directory in workspaces. podSchedulerName Pod scheduler for the workspace pods. If not specified, the pod scheduler is set to the default scheduler on the cluster. projectCloneContainer Project clone container configuration. secondsOfInactivityBeforeIdling Idle timeout for workspaces in seconds. This timeout is the duration after which a workspace will be idled if there is no activity. To disable workspace idling due to inactivity, set this value to -1. 1800 secondsOfRunBeforeIdling Run timeout for workspaces in seconds. This timeout is the maximum duration a workspace runs. To disable workspace run timeout, set this value to -1. -1 security Workspace security configuration. serviceAccount ServiceAccount to use by the DevWorkspace operator when starting the workspaces. serviceAccountTokens List of ServiceAccount tokens that will be mounted into workspace pods as projected volumes. startTimeoutSeconds StartTimeoutSeconds determines the maximum duration (in seconds) that a workspace can take to start before it is automatically failed. If not specified, the default value of 300 seconds (5 minutes) is used. 300 storage Workspaces persistent storage. { "pvcStrategy": "per-user"} tolerations The pod tolerations of the workspace pods limit where the workspace pods can run. trustedCerts Trusted certificate settings. user User configuration. workspacesPodAnnotations WorkspacesPodAnnotations defines additional annotations for workspace pods. Table 3.2. defaultNamespace options. Property Description Default autoProvision Indicates if is allowed to automatically create a user namespace. 
If it set to false, then user namespace must be pre-created by a cluster administrator. true template If you don't create the user namespaces in advance, this field defines the Kubernetes namespace created when you start your first workspace. You can use <username> and <userid> placeholders, such as che-workspace-<username>. "<username>-che" Table 3.3. defaultPlugins options. Property Description Default editor The editor ID to specify default plug-ins for. The plugin ID must have publisher/name/version format. plugins Default plug-in URIs for the specified editor. Table 3.4. gatewayContainer options. Property Description Default env List of environment variables to set in the container. image Container image. Omit it or leave it empty to use the default container image provided by the Operator. imagePullPolicy Image pull policy. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. name Container name. resources Compute resources required by this container. Table 3.5. storage options. Property Description Default perUserStrategyPvcConfig PVC settings when using the per-user PVC strategy. perWorkspaceStrategyPvcConfig PVC settings when using the per-workspace PVC strategy. pvcStrategy Persistent volume claim strategy for the OpenShift Dev Spaces server. The supported strategies are: per-user (all workspaces PVCs in one volume), per-workspace (each workspace is given its own individual PVC) and ephemeral (non-persistent storage where local changes will be lost when the workspace is stopped.) "per-user" Table 3.6. per-user PVC strategy options. Property Description Default claimSize Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. storageClass Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. Table 3.7. per-workspace PVC strategy options. Property Description Default claimSize Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. storageClass Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. Table 3.8. trustedCerts options. Property Description Default gitTrustedCertsConfigMapName The ConfigMap contains certificates to propagate to the OpenShift Dev Spaces components and to provide a particular configuration for Git. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/deploying-che-with-support-for-git-repositories-with-self-signed-certificates/ The ConfigMap must have a app.kubernetes.io/part-of=che.eclipse.org label. Table 3.9. containerBuildConfiguration options. Property Description Default openShiftSecurityContextConstraint OpenShift security context constraint to build containers. "container-build" Table 3.10. OpenShift Dev Spaces components configuration. Property Description Default cheServer General configuration settings related to the OpenShift Dev Spaces server. { "debug": false, "logLevel": "INFO"} dashboard Configuration settings related to the dashboard used by the OpenShift Dev Spaces installation. devWorkspace DevWorkspace Operator configuration. devfileRegistry Configuration settings related to the devfile registry used by the OpenShift Dev Spaces installation. imagePuller Kubernetes Image Puller configuration. metrics OpenShift Dev Spaces server metrics configuration. 
{ "enable": true} pluginRegistry Configuration settings related to the plug-in registry used by the OpenShift Dev Spaces installation. Table 3.11. General configuration settings related to the OpenShift Dev Spaces server component. Property Description Default clusterRoles Additional ClusterRoles assigned to OpenShift Dev Spaces ServiceAccount. Each role must have a app.kubernetes.io/part-of=che.eclipse.org label. The defaults roles are: - <devspaces-namespace>-cheworkspaces-clusterrole - <devspaces-namespace>-cheworkspaces-namespaces-clusterrole - <devspaces-namespace>-cheworkspaces-devworkspace-clusterrole where the <devspaces-namespace> is the namespace where the CheCluster CR is created. The OpenShift Dev Spaces Operator must already have all permissions in these ClusterRoles to grant them. debug Enables the debug mode for OpenShift Dev Spaces server. false deployment Deployment override options. extraProperties A map of additional environment variables applied in the generated che ConfigMap to be used by the OpenShift Dev Spaces server in addition to the values already generated from other fields of the CheCluster custom resource (CR). If the extraProperties field contains a property normally generated in che ConfigMap from other CR fields, the value defined in the extraProperties is used instead. logLevel The log level for the OpenShift Dev Spaces server: INFO or DEBUG . "INFO" proxy Proxy server settings for Kubernetes cluster. No additional configuration is required for OpenShift cluster. By specifying these settings for the OpenShift cluster, you override the OpenShift proxy configuration. Table 3.12. proxy options. Property Description Default credentialsSecretName The secret name that contains user and password for a proxy server. The secret must have a app.kubernetes.io/part-of=che.eclipse.org label. nonProxyHosts A list of hosts that can be reached directly, bypassing the proxy. Specify wild card domain use the following form .<DOMAIN> , for example: - localhost - 127.0.0.1 - my.host.com - 123.42.12.32 Use only when a proxy configuration is required. The Operator respects OpenShift cluster-wide proxy configuration, defining nonProxyHosts in a custom resource leads to merging non-proxy hosts lists from the cluster proxy configuration, and the ones defined in the custom resources. See the following page: https://docs.openshift.com/container-platform/latest/networking/enable-cluster-wide-proxy.html . In some proxy configurations, localhost may not translate to 127.0.0.1. Both localhost and 127.0.0.1 should be specified in this situation. port Proxy server port. url URL (protocol+hostname) of the proxy server. Use only when a proxy configuration is required. The Operator respects OpenShift cluster-wide proxy configuration, defining url in a custom resource leads to overriding the cluster proxy configuration. See the following page: https://docs.openshift.com/container-platform/latest/networking/enable-cluster-wide-proxy.html . Table 3.13. Configuration settings related to the Plug-in registry component used by the OpenShift Dev Spaces installation. Property Description Default deployment Deployment override options. disableInternalRegistry Disables internal plug-in registry. externalPluginRegistries External plugin registries. openVSXURL Open VSX registry URL. If omitted an embedded instance will be used. Table 3.14. externalPluginRegistries options. Property Description Default url Public URL of the plug-in registry. Table 3.15. 
Configuration settings related to the Devfile registry component used by the OpenShift Dev Spaces installation. Property Description Default deployment Deprecated deployment override options. disableInternalRegistry Disables internal devfile registry. externalDevfileRegistries External devfile registries serving sample ready-to-use devfiles. Table 3.16. externalDevfileRegistries options. Property Description Default url The public UR of the devfile registry that serves sample ready-to-use devfiles. Table 3.17. Configuration settings related to the Dashboard component used by the OpenShift Dev Spaces installation. Property Description Default branding Dashboard branding resources. deployment Deployment override options. headerMessage Dashboard header message. logLevel The log level for the Dashboard. "ERROR" Table 3.18. headerMessage options. Property Description Default show Instructs dashboard to show the message. text Warning message displayed on the user dashboard. Table 3.19. Kubernetes Image Puller component configuration. Property Description Default enable Install and configure the community supported Kubernetes Image Puller Operator. When you set the value to true without providing any specs, it creates a default Kubernetes Image Puller object managed by the Operator. When you set the value to false , the Kubernetes Image Puller object is deleted, and the Operator uninstalled, regardless of whether a spec is provided. If you leave the spec.images field empty, a set of recommended workspace-related images is automatically detected and pre-pulled after installation. Note that while this Operator and its behavior is community-supported, its payload may be commercially-supported for pulling commercially-supported images. spec A Kubernetes Image Puller spec to configure the image puller in the CheCluster. Table 3.20. OpenShift Dev Spaces server metrics component configuration. Property Description Default enable Enables metrics for the OpenShift Dev Spaces server endpoint. true Table 3.21. Configuration settings that allows users to work with remote Git repositories. Property Description Default azure Enables users to work with repositories hosted on Azure DevOps Service (dev.azure.com). bitbucket Enables users to work with repositories hosted on Bitbucket (bitbucket.org or self-hosted). github Enables users to work with repositories hosted on GitHub (github.com or GitHub Enterprise). gitlab Enables users to work with repositories hosted on GitLab (gitlab.com or self-hosted). Table 3.22. github options. Property Description Default disableSubdomainIsolation Disables subdomain isolation. Deprecated in favor of che.eclipse.org/scm-github-disable-subdomain-isolation annotation. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/ . endpoint GitHub server endpoint URL. Deprecated in favor of che.eclipse.org/scm-server-endpoint annotation. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/ . secretName Kubernetes secret, that contains Base64-encoded GitHub OAuth Client id and GitHub OAuth Client secret. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/ . Table 3.23. gitlab options. Property Description Default endpoint GitLab server endpoint URL. Deprecated in favor of che.eclipse.org/scm-server-endpoint annotation. 
See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/ . secretName Kubernetes secret, that contains Base64-encoded GitHub Application id and GitLab Application Client secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/ . Table 3.24. bitbucket options. Property Description Default endpoint Bitbucket server endpoint URL. Deprecated in favor of che.eclipse.org/scm-server-endpoint annotation. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ . secretName Kubernetes secret, that contains Base64-encoded Bitbucket OAuth 1.0 or OAuth 2.0 data. See the following pages for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ and https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-the-bitbucket-cloud/ . Table 3.25. azure options. Property Description Default secretName Kubernetes secret, that contains Base64-encoded Azure DevOps Service Application ID and Client Secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-microsoft-azure-devops-services Table 3.26. Networking, OpenShift Dev Spaces authentication and TLS configuration. Property Description Default annotations Defines annotations which will be set for an Ingress (a route for OpenShift platform). The defaults for kubernetes platforms are: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/proxy-read-timeout: "3600", nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600", nginx.ingress.kubernetes.io/ssl-redirect: "true" auth Authentication settings. { "gateway": { "configLabels": { "app": "che", "component": "che-gateway-config" } }} domain For an OpenShift cluster, the Operator uses the domain to generate a hostname for the route. The generated hostname follows this pattern: che-<devspaces-namespace>.<domain>. The <devspaces-namespace> is the namespace where the CheCluster CRD is created. In conjunction with labels, it creates a route served by a non-default Ingress controller. For a Kubernetes cluster, it contains a global ingress domain. There are no default values: you must specify them. hostname The public hostname of the installed OpenShift Dev Spaces server. ingressClassName IngressClassName is the name of an IngressClass cluster resource. If a class name is defined in both the IngressClassName field and the kubernetes.io/ingress.class annotation, IngressClassName field takes precedence. labels Defines labels which will be set for an Ingress (a route for OpenShift platform). tlsSecretName The name of the secret used to set up Ingress TLS termination. If the field is an empty string, the default cluster certificate is used. The secret must have a app.kubernetes.io/part-of=che.eclipse.org label. Table 3.27. auth options. Property Description Default advancedAuthorization Advance authorization settings. Determines which users and groups are allowed to access Che. User is allowed to access OpenShift Dev Spaces if he/she is either in the allowUsers list or is member of group from allowGroups list and not in neither the denyUsers list nor is member of group from denyGroups list. If allowUsers and allowGroups are empty, then all users are allowed to access Che. if denyUsers and denyGroups are empty, then no users are denied to access Che. 
gateway Gateway settings. { "configLabels": { "app": "che", "component": "che-gateway-config" }} identityProviderURL Public URL of the Identity Provider server. identityToken Identity token to be passed to upstream. There are two types of tokens supported: id_token and access_token . Default value is id_token . This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. oAuthAccessTokenInactivityTimeoutSeconds Inactivity timeout for tokens to set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. 0 means tokens for this client never time out. oAuthAccessTokenMaxAgeSeconds Access token max age for tokens to set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. 0 means no expiration. oAuthClientName Name of the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. oAuthScope Access Token Scope. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. oAuthSecret Name of the secret set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. For Kubernetes, this can either be the plain text oAuthSecret value, or the name of a kubernetes secret which contains a key oAuthSecret and the value is the secret. NOTE: this secret must exist in the same namespace as the CheCluster resource and contain the label app.kubernetes.io/part-of=che.eclipse.org . Table 3.28. gateway options. Property Description Default configLabels Gateway configuration labels. { "app": "che", "component": "che-gateway-config"} deployment Deployment override options. Since gateway deployment consists of several containers, they must be distinguished in the configuration by their names: - gateway - configbump - oauth-proxy - kube-rbac-proxy kubeRbacProxy Configuration for kube-rbac-proxy within the OpenShift Dev Spaces gateway pod. oAuthProxy Configuration for oauth-proxy within the OpenShift Dev Spaces gateway pod. traefik Configuration for Traefik within the OpenShift Dev Spaces gateway pod. Table 3.29. Configuration of an alternative registry that stores OpenShift Dev Spaces images. Property Description Default hostname An optional hostname or URL of an alternative container registry to pull images from. This value overrides the container registry hostname defined in all the default container images involved in a OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. organization An optional repository name of an alternative registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in a OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. Table 3.30. deployment options. Property Description Default containers List of containers belonging to the pod. securityContext Security options the pod should run with. Table 3.31. containers options. Property Description Default env List of environment variables to set in the container. image Container image. Omit it or leave it empty to use the default container image provided by the Operator. imagePullPolicy Image pull policy. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. name Container name. resources Compute resources required by this container. 
Table 3.32. containers options. Property Description Default limits Describes the maximum amount of compute resources allowed. request Describes the minimum amount of compute resources required. Table 3.33. request options. Property Description Default cpu CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. memory Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. Table 3.34. limits options. Property Description Default cpu CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. memory Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. Table 3.35. securityContext options. Property Description Default fsGroup A special supplemental group that applies to all containers in a pod. The default value is 1724 . runAsUser The UID to run the entrypoint of the container process. The default value is 1724 . Table 3.36. CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation Property Description Default chePhase Specifies the current phase of the OpenShift Dev Spaces deployment. cheURL Public URL of the OpenShift Dev Spaces server. cheVersion Currently installed OpenShift Dev Spaces version. devfileRegistryURL Deprecated the public URL of the internal devfile registry. gatewayPhase Specifies the current phase of the gateway deployment. message A human readable message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. pluginRegistryURL The public URL of the internal plug-in registry. reason A brief CamelCase message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. workspaceBaseDomain The resolved workspace base domain. This is either the copy of the explicitly defined property of the same name in the spec or, if it is undefined in the spec and we're running on OpenShift, the automatically resolved basedomain for routes. 3.2. Configuring projects For each user, OpenShift Dev Spaces isolates workspaces in a project. OpenShift Dev Spaces identifies the user project by the presence of labels and annotations. When starting a workspace, if the required project doesn't exist, OpenShift Dev Spaces creates the project using a template name. You can modify OpenShift Dev Spaces behavior by: Section 3.2.1, "Configuring project name" Section 3.2.2, "Provisioning projects in advance" 3.2.1. Configuring project name You can configure the project name template that OpenShift Dev Spaces uses to create the required project when starting a workspace. A valid project name template follows these conventions: The <username> or <userid> placeholder is mandatory. Usernames and IDs cannot contain invalid characters. If the formatting of a username or ID is incompatible with the naming conventions for OpenShift objects, OpenShift Dev Spaces changes the username or ID to a valid name by replacing incompatible characters with the - symbol. 
OpenShift Dev Spaces evaluates the <userid> placeholder into a 14 character long string, and adds a random six character long suffix to prevent IDs from colliding. The result is stored in the user preferences for reuse. Kubernetes limits the length of a project name to 63 characters. OpenShift limits the length further to 49 characters. Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: components: devEnvironments: defaultNamespace: template: <workspace_namespace_template_> Example 3.3. User workspaces project name template examples User workspaces project name template Resulting project example <username>-devspaces (default) user1-devspaces <userid>-namespace cge1egvsb2nhba-namespace-ul1411 <userid>-aka-<username>-namespace cgezegvsb2nhba-aka-user1-namespace-6m2w2b Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.2.2. Provisioning projects in advance You can provision workspaces projects in advance, rather than relying on automatic provisioning. Repeat the procedure for each user. Procedure Disable automatic namespace provisioning on the CheCluster level: devEnvironments: defaultNamespace: autoProvision: false Create the <project_name> project for <username> user with the following labels and annotations: kind: Namespace apiVersion: v1 metadata: name: <project_name> 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-namespace annotations: che.eclipse.org/username: <username> 1 Use a project name of your choosing. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3. Configuring server components Section 3.3.1, "Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container" Section 3.3.2, "Advanced configuration options for Dev Spaces server" Section 3.4.1, "Configuring number of replicas for a Red Hat OpenShift Dev Spaces container" 3.3.1. Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container Secrets are OpenShift objects that store sensitive data such as: usernames passwords authentication tokens in an encrypted form. Users can mount a OpenShift Secret that contains sensitive data or a ConfigMap that contains configuration in a OpenShift Dev Spaces managed containers as: a file an environment variable The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling. 3.3.1.1. Mounting a Secret or a ConfigMap as a file into a OpenShift Dev Spaces container Prerequisites A running instance of Red Hat OpenShift Dev Spaces. Procedure Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND> The <DEPLOYMENT_NAME> corresponds to the one following deployments: devspaces-dashboard devfile-registry plugin-registry devspaces and <OBJECT_KIND> is either: secret or configmap Example 3.4. 
Example: apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... Configure the annotation values. Annotations must indicate that the given object is mounted as a file: che.eclipse.org/mount-as: file - To indicate that a object is mounted as a file. che.eclipse.org/mount-path: <TARGET_PATH> - To provide a required mount path. Example 3.5. Example: apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... The OpenShift object can contain several items whose names must match the desired file name mounted into the container. Example 3.6. Example: apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here> or apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <data content here> This results in a file named ca.crt being mounted at the /data path of the OpenShift Dev Spaces container. Important To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3.1.2. Mounting a Secret or a ConfigMap as a subPath into a OpenShift Dev Spaces container Prerequisites A running instance of Red Hat OpenShift Dev Spaces. Procedure Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND> The <DEPLOYMENT_NAME> corresponds to the one following deployments: devspaces-dashboard devfile-registry plugin-registry devspaces and <OBJECT_KIND> is either: secret or configmap Example 3.7. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... Configure the annotation values. Annotations must indicate that the given object is mounted as a subPath.: che.eclipse.org/mount-as: subpath - To indicate that an object is mounted as a subPath. che.eclipse.org/mount-path: <TARGET_PATH> - To provide a required mount path. Example 3.8. 
Example: apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... The OpenShift object can contain several items whose names must match the file name mounted into the container. Example 3.9. Example: apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here> or apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <data content here> This results in a file named ca.crt being mounted at the /data path of OpenShift Dev Spaces container. Important To make the changes in a OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3.1.3. Mounting a Secret or a ConfigMap as an environment variable into OpenShift Dev Spaces container Prerequisites A running instance of Red Hat OpenShift Dev Spaces. Procedure Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND> The <DEPLOYMENT_NAME> corresponds to the one following deployments: devspaces-dashboard devfile-registry plugin-registry devspaces and <OBJECT_KIND> is either: secret or configmap Example 3.10. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... Configure the annotation values. Annotations must indicate that the given object is mounted as an environment variable: che.eclipse.org/mount-as: env - to indicate that a object is mounted as an environment variable che.eclipse.org/env-name: <FOO_ENV> - to provide an environment variable name, which is required to mount a object key value Example 3.11. 
Example: apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret data: mykey: myvalue or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: myvalue This results in two environment variables: FOO_ENV myvalue being provisioned into the OpenShift Dev Spaces container. If the object provides more than one data item, the environment variable name must be provided for each of the data keys as follows: Example 3.12. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret stringData: mykey: <data_content_here> otherkey: <data_content_here> or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: <data content here> otherkey: <data content here> This results in two environment variables: FOO_ENV OTHER_ENV being provisioned into a OpenShift Dev Spaces container. Note The maximum length of annotation names in a OpenShift object is 63 characters, where 9 characters are reserved for a prefix that ends with / . This acts as a restriction for the maximum length of the key that can be used for the object. Important To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3.2. Advanced configuration options for Dev Spaces server The following section describes advanced deployment and configuration methods for the OpenShift Dev Spaces server component. 3.3.2.1. Understanding OpenShift Dev Spaces server advanced configuration The following section describes the OpenShift Dev Spaces server component advanced configuration method for a deployment. Advanced configuration is necessary to: Add environment variables not automatically generated by the Operator from the standard CheCluster Custom Resource fields. Override the properties automatically generated by the Operator from the standard CheCluster Custom Resource fields. The customCheProperties field, part of the CheCluster Custom Resource server settings, contains a map of additional environment variables to apply to the OpenShift Dev Spaces server component. Example 3.13. Override the default memory limit for workspaces Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . apiVersion: org.eclipse.che/v2 kind: CheCluster spec: components: cheServer: extraProperties: CHE_LOGS_APPENDERS_IMPL: json Note versions of the OpenShift Dev Spaces Operator had a ConfigMap named custom to fulfill this role. 
If the OpenShift Dev Spaces Operator finds a configMap with the name custom , it adds the data it contains into the customCheProperties field, redeploys OpenShift Dev Spaces, and deletes the custom configMap . Additional resources Section 3.1.3, " CheCluster Custom Resource fields reference" . 3.4. Configuring autoscaling Learn about different aspects of autoscaling for Red Hat OpenShift Dev Spaces. Section 3.4.1, "Configuring number of replicas for a Red Hat OpenShift Dev Spaces container" Section 3.4.2, "Configuring machine autoscaling" 3.4.1. Configuring number of replicas for a Red Hat OpenShift Dev Spaces container To configure the number of replicas for OpenShift Dev Spaces operands using Kubernetes HorizontalPodAutoscaler (HPA), you can define an HPA resource for deployment. The HPA dynamically adjusts the number of replicas based on specified metrics. Procedure Create an HPA resource for a deployment, specifying the target metrics and desired replica count. apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: <deployment_name> 1 ... 1 The <deployment_name> corresponds to the one following deployments: devspaces che-gateway devspaces-dashboard plugin-registry devfile-registry Example 3.14. Create a HorizontalPodAutoscaler for devspaces deployment: apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: devspaces-scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: devspaces minReplicas: 2 maxReplicas: 5 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 75 In this example, the HPA is targeting the Deployment named devspaces, with a minimum of 2 replicas, a maximum of 5 replicas and scaling based on CPU utilization. Additional resources Horizontal Pod Autoscaling 3.4.2. Configuring machine autoscaling If you configured the cluster to adjust the number of nodes depending on resource needs, you need additional configuration to maintain the seamless operation of OpenShift Dev Spaces workspaces. Workspaces need special consideration when the autoscaler adds and removes nodes. When a new node is being added by the autoscaler, workspace startup can take longer than usual until the node provisioning is complete. Conversely when a node is being removed, ideally nodes that are running workspace pods should not be evicted by the autoscaler to avoid any interruptions while using the workspace and potentially losing any unsaved data. 3.4.2.1. When the autoscaler adds a new node You need to make additional configurations to the OpenShift Dev Spaces installation to ensure proper workspace startup while a new node is being added. Procedure In the CheCluster Custom Resource, set the following fields to allow proper workspace startup when the autoscaler is provisioning a new node. spec: devEnvironments: startTimeoutSeconds: 600 1 ignoredUnrecoverableEvents: 2 - FailedScheduling 1 Set to at least 600 seconds to allow time for a new node to be provisioned during workspace startup. 2 Ignore the FailedScheduling event to allow workspace startup to continue when a new node is provisioned. 3.4.2.2. When the autoscaler removes a node To prevent workspace pods from being evicted when the autoscaler needs to remove a node, add the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation to every workspace pod. 
Procedure In the CheCluster Custom Resource, add the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation in the spec.devEnvironments.workspacesPodAnnotations field. spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" Verification steps Start a workspace and verify that the workspace pod contains the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation. 3.5. Configuring workspaces globally This section describes how an administrator can configure workspaces globally. Section 3.5.1, "Limiting the number of workspaces that a user can keep" Section 3.5.2, "Enabling users to run multiple workspaces simultaneously" Section 3.5.3, "Git with self-signed certificates" Section 3.5.4, "Configuring workspaces nodeSelector" 3.5.1. Limiting the number of workspaces that a user can keep By default, users can keep an unlimited number of workspaces in the dashboard, but you can limit this number to reduce demand on the cluster. This configuration is part of the CheCluster Custom Resource: spec: devEnvironments: maxNumberOfWorkspacesPerUser: <kept_workspaces_limit> 1 1 Sets the maximum number of workspaces per user. The default value, -1 , allows users to keep an unlimited number of workspaces. Use a positive integer to set the maximum number of workspaces per user. Procedure Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces . USD oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}" Configure the maxNumberOfWorkspacesPerUser : 1 The OpenShift Dev Spaces namespace that you got in step 1. 2 Your choice of the <kept_workspaces_limit> value. Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.5.2. Enabling users to run multiple workspaces simultaneously By default, a user can run only one workspace at a time. You can enable users to run multiple workspaces simultaneously. Note If using the default storage method, users might experience problems when concurrently running workspaces if pods are distributed across nodes in a multi-node cluster. Switching from the per-user common storage strategy to the per-workspace storage strategy or using the ephemeral storage type can avoid or solve those problems. This configuration is part of the CheCluster Custom Resource: spec: devEnvironments: maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit> 1 1 Sets the maximum number of simultaneously running workspaces per user. The -1 value enables users to run an unlimited number of workspaces. The default value is 1 . Procedure Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces . USD oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}" Configure the maxNumberOfRunningWorkspacesPerUser : 1 The OpenShift Dev Spaces namespace that you got in step 1. 2 Your choice of the <running_workspaces_limit> value. Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.5.3. Git with self-signed certificates You can configure OpenShift Dev Spaces to support operations on Git providers that use self-signed certificates. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . Git version 2 or later Procedure Create a new ConfigMap with details about the Git server: 1 Path to the self-signed certificate. 2 Optional parameter to specify the Git server URL e.g. 
https://git.example.com:8443 . When omitted, the self-signed certificate is used for all repositories over HTTPS. Note Certificate files are typically stored as Base64 ASCII files, such as. .pem , .crt , .ca-bundle . All ConfigMaps that hold certificate files should use the Base64 ASCII certificate rather than the binary data certificate. A certificate chain of trust is required. If the ca.crt is signed by a certificate authority (CA), the CA certificate must be included in the ca.crt file. Add the required labels to the ConfigMap: Configure OpenShift Dev Spaces operand to use self-signed certificates for Git repositories. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert Verification steps Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The container's /etc/gitconfig file contains information about the Git server host (its URL) and the path to the certificate in the http section (see Git documentation about git-config ). Example 3.15. Contents of an /etc/gitconfig file Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" Section 3.8.3, "Importing untrusted TLS certificates to Dev Spaces" . 3.5.4. Configuring workspaces nodeSelector This section describes how to configure nodeSelector for Pods of OpenShift Dev Spaces workspaces. Procedure Using NodeSelector OpenShift Dev Spaces uses CheCluster Custom Resource to configure nodeSelector : spec: devEnvironments: nodeSelector: key: value This section must contain a set of key=value pairs for each node label to form the nodeSelector rule. Using Taints and Tolerations This works in the opposite way to nodeSelector . Instead of specifying which nodes the Pod will be scheduled on, you specify which nodes the Pod cannot be scheduled on. For more information, see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration . OpenShift Dev Spaces uses CheCluster Custom Resource to configure tolerations : spec: devEnvironments: tolerations: - effect: NoSchedule key: key value: value operator: Equal Important nodeSelector must be configured during OpenShift Dev Spaces installation. This prevents existing workspaces from failing to run due to volumes affinity conflict caused by existing workspace PVC and Pod being scheduled in different zones. To avoid Pods and PVCs to be scheduled in different zones on large, multizone clusters, create an additional StorageClass object (pay attention to the allowedTopologies field), which will coordinate the PVC creation process. Pass the name of this newly created StorageClass to OpenShift Dev Spaces through the CheCluster Custom Resource. For more information, see: Section 3.9.1, "Configuring storage classes" . Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.5.5. Open VSX registry URL To search and install extensions, the Microsoft Visual Studio Code - Open Source editor uses an embedded Open VSX registry instance. You can also configure OpenShift Dev Spaces to use another Open VSX registry instance rather than the embedded one. 
Procedure Set the URL of your Open VSX registry instance in the CheCluster Custom Resource spec.components.pluginRegistry.openVSXURL field. spec: components: # [...] pluginRegistry: openVSXURL: <your_open_vsx_registy> # [...] Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" Open VSX registry 3.5.6. Configuring a user namespace This procedure walks you through the process of using OpenShift Dev Spaces to replicate ConfigMaps , Secrets and PersistentVolumeClaim from openshift-devspaces namespace to numerous user-specific namespaces. The OpenShift Dev Spaces automates the synchronization of important configuration data such as shared credentials, configuration files, and certificates to user namespaces. If you make changes to a Kubernetes resource in an openshift-devspaces namespace, OpenShift Dev Spaces will immediately replicate the changes across all users namespaces. In reverse, if a Kubernetes resource is modified in a user namespace, OpenShift Dev Spaces will immediately revert the changes. Procedure Create the ConfigMap below to replicate it to every user namespace. To enhance the configurability, you can customize the ConfigMap by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations. kind: ConfigMap apiVersion: v1 metadata: name: user-configmap namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data: ... Example 3.16. Mounting a settings.xml file to a user workspace: kind: ConfigMap apiVersion: v1 metadata: name: user-settings-xml namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 data: settings.xml: | <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository>/home/user/.m2/repository</localRepository> <interactiveMode>true</interactiveMode> <offline>false</offline> </settings> Create the Secret below to replicate it to every user namespace. To enhance the configurability, you can customize the Secret by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations. kind: Secret apiVersion: v1 metadata: name: user-secret namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data: ... Example 3.17. Mounting certificates to a user workspace: kind: Secret apiVersion: v1 metadata: name: user-certificates namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/pki/ca-trust/source/anchors stringData: trusted-certificates.crt: | ... Note Run update-ca-trust command on workspace startup to import certificates. It can be achieved manually or by adding this command to a postStart event in a devfile. See the Adding event bindings in a devfile . Example 3.18. 
Mounting environment variables to a user workspace: kind: Secret apiVersion: v1 metadata: name: user-env namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: env stringData: ENV_VAR_1: value_1 ENV_VAR_2: value_2 Create the PersistentVolumeClaim below to replicate it to every user namespace. To enhance the configurability, you can customize the PersistentVolumeClaim by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations. To modify the 'PersistentVolumeClaim', delete it and create a new one in openshift-devspaces namespace. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config spec: ... Example 3.19. Mounting a PersistentVolumeClaim to a user workspace: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: /home/user/data controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi volumeMode: Filesystem Additional resources https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:mounting-configmaps https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:mounting-secrets https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:requesting-persistent-storage-for-workspaces Automatically mounting volumes, configmaps, and secrets 3.6. Caching images for faster workspace start To improve the start time performance of OpenShift Dev Spaces workspaces, use the Image Puller, a OpenShift Dev Spaces-agnostic component that can be used to pre-pull images for OpenShift clusters. The Image Puller is an additional OpenShift deployment which creates a DaemonSet that can be configured to pre-pull relevant OpenShift Dev Spaces workspace images on each node. These images would already be available when a OpenShift Dev Spaces workspace starts, therefore improving the workspace start time. https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli Section 3.6.2, "Installing Image Puller on OpenShift by using the web console" Section 3.6.1, "Installing Image Puller on OpenShift using CLI" Section 3.6.3, "Configuring Image Puller to pre-pull default Dev Spaces images" Section 3.6.4, "Configuring Image Puller to pre-pull custom images" Section 3.6.5, "Configuring Image Puller to pre-pull additional images" Additional resources Kubernetes Image Puller source code repository 3.6.1. Installing Image Puller on OpenShift using CLI You can install the Kubernetes Image Puller on OpenShift by using OpenShift oc management tool. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . 
Procedure Gather a list of relevant container images to pull by navigating to the links: https:// <openshift_dev_spaces_fqdn> /plugin-registry/v3/external_images.txt https:// <openshift_dev_spaces_fqdn> /devfile-registry/devfiles/external_images.txt Define the memory requests and limits parameters to ensure pulled containers and the platform have enough memory to run. When defining the minimal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT , consider the necessary amount of memory required to run each of the container images to pull. When defining the maximal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT , consider the total memory allocated to the DaemonSet Pods in the cluster: Pulling 5 images on 20 nodes, with a container memory limit of 20Mi requires 2000Mi of memory. Clone the Image Puller repository and get in the directory containing the OpenShift templates: Configure the app.yaml , configmap.yaml and serviceaccount.yaml OpenShift templates using following parameters: Table 3.37. Image Puller OpenShift templates parameters in app.yaml Value Usage Default DEPLOYMENT_NAME The value of DEPLOYMENT_NAME in the ConfigMap kubernetes-image-puller IMAGE Image used for the kubernetes-image-puller deployment registry.redhat.io/devspaces/imagepuller-rhel8 IMAGE_TAG The image tag to pull latest SERVICEACCOUNT_NAME The name of the ServiceAccount created and used by the deployment kubernetes-image-puller Table 3.38. Image Puller OpenShift templates parameters in configmap.yaml Value Usage Default CACHING_CPU_LIMIT The value of CACHING_CPU_LIMIT in the ConfigMap .2 CACHING_CPU_REQUEST The value of CACHING_CPU_REQUEST in the ConfigMap .05 CACHING_INTERVAL_HOURS The value of CACHING_INTERVAL_HOURS in the ConfigMap "1" CACHING_MEMORY_LIMIT The value of CACHING_MEMORY_LIMIT in the ConfigMap "20Mi" CACHING_MEMORY_REQUEST The value of CACHING_MEMORY_REQUEST in the ConfigMap "10Mi" DAEMONSET_NAME The value of DAEMONSET_NAME in the ConfigMap kubernetes-image-puller DEPLOYMENT_NAME The value of DEPLOYMENT_NAME in the ConfigMap kubernetes-image-puller IMAGES The value of IMAGES in the ConfigMap {} NAMESPACE The value of NAMESPACE in the ConfigMap k8s-image-puller NODE_SELECTOR The value of NODE_SELECTOR in the ConfigMap "{}" Table 3.39. Image Puller OpenShift templates parameters in serviceaccount.yaml Value Usage Default SERVICEACCOUNT_NAME The name of the ServiceAccount created and used by the deployment kubernetes-image-puller KIP_IMAGE The image puller image to copy the sleep binary from registry.redhat.io/devspaces/imagepuller-rhel8:latest Create an OpenShift project to host the Image Puller: Process and apply the templates to install the puller: Verification steps Verify the existence of a <kubernetes-image-puller> deployment and a <kubernetes-image-puller> DaemonSet. The DaemonSet needs to have a Pod for each node in the cluster: USD oc get deployment,daemonset,pod --namespace <k8s-image-puller> Verify the values of the <kubernetes-image-puller> ConfigMap . USD oc get configmap <kubernetes-image-puller> --output yaml 3.6.2. Installing Image Puller on OpenShift by using the web console You can install the community supported Kubernetes Image Puller Operator on OpenShift by using the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . Procedure Install the community supported Kubernetes Image Puller Operator. See Installing from OperatorHub using the web console . 
Create a kubernetes-image-puller KubernetesImagePuller operand from the community supported Kubernetes Image Puller Operator. See Creating applications from installed Operators . 3.6.3. Configuring Image Puller to pre-pull default Dev Spaces images You can configure Kubernetes Image Puller to pre-pull default OpenShift Dev Spaces images. Red Hat OpenShift Dev Spaces operator will control the list of images to pre-pull and automatically updates them on OpenShift Dev Spaces upgrade. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster. Image Puller is installed on Kubernetes cluster. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Configure the Image Puller to pre-pull OpenShift Dev Spaces images. oc patch checluster/devspaces \ --namespace openshift-devspaces \ --type='merge' \ --patch '{ "spec": { "components": { "imagePuller": { "enable": true } } } }' 3.6.4. Configuring Image Puller to pre-pull custom images You can configure Kubernetes Image Puller to pre-pull custom images. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster. Image Puller is installed on Kubernetes cluster. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Configure the Image Puller to pre-pull custom images. oc patch checluster/devspaces \ --namespace openshift-devspaces \ --type='merge' \ --patch '{ "spec": { "components": { "imagePuller": { "enable": true, "spec": { "images": " NAME-1 = IMAGE-1 ; NAME-2 = IMAGE-2 " 1 } } } } }' 1 The semicolon separated list of images 3.6.5. Configuring Image Puller to pre-pull additional images You can configure Kubernetes Image Puller to pre-pull additional OpenShift Dev Spaces images. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running on Kubernetes cluster. Image Puller is installed on Kubernetes cluster. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Create k8s-image-puller namespace: oc create namespace k8s-image-puller Create KubernetesImagePuller Custom Resource: oc apply -f - <<EOF apiVersion: che.eclipse.org/v1alpha1 kind: KubernetesImagePuller metadata: name: k8s-image-puller-images namespace: k8s-image-puller spec: images: "__NAME-1__=__IMAGE-1__;__NAME-2__=__IMAGE-2__" 1 EOF 1 The semicolon separated list of images Addition resources Kubernetes Image Puller source code repository community supported Kubernetes Image Puller Operator source code repository 3.7. Configuring observability To configure OpenShift Dev Spaces observability features, see: Section 3.7.2.15, "Configuring server logging" Section 3.7.2.16, "Collecting logs using dsc" Section 3.7.3, "Monitoring the Dev Workspace Operator" Section 3.7.4, "Monitoring Dev Spaces Server" 3.7.1. The Woopra telemetry plugin The Woopra Telemetry Plugin is a plugin built to send telemetry from a Red Hat OpenShift Dev Spaces installation to Segment and Woopra. This plugin is used by Eclipse Che hosted by Red Hat , but any Red Hat OpenShift Dev Spaces deployment can take advantage of this plugin. There are no dependencies other than a valid Woopra domain and Segment Write key. 
The devfile v2 for the plugin, plugin.yaml , has four environment variables that can be passed to the plugin: WOOPRA_DOMAIN - The Woopra domain to send events to. SEGMENT_WRITE_KEY - The write key to send events to Segment and Woopra. WOOPRA_DOMAIN_ENDPOINT - If you prefer not to pass in the Woopra domain directly, the plugin will get it from a supplied HTTP endpoint that returns the Woopra Domain. SEGMENT_WRITE_KEY_ENDPOINT - If you prefer not to pass in the Segment write key directly, the plugin will get it from a supplied HTTP endpoint that returns the Segment write key. To enable the Woopra plugin on the Red Hat OpenShift Dev Spaces installation: Procedure Deploy the plugin.yaml devfile v2 file to an HTTP server with the environment variables set correctly. Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/ 1 plugins: 2 - 'https://your-web-server/plugin.yaml' 1 The editorId to set the telemetry plugin for. 2 The URL to the telemetry plugin's devfile v2 definition. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.7.2. Creating a telemetry plugin This section shows how to create an AnalyticsManager class that extends AbstractAnalyticsManager and implements the following methods: isEnabled() - determines whether the telemetry backend is functioning correctly. This can mean always returning true , or have more complex checks, for example, returning false when a connection property is missing. destroy() - cleanup method that is run before shutting down the telemetry backend. This method sends the WORKSPACE_STOPPED event. onActivity() - notifies that some activity is still happening for a given user. This is mainly used to send WORKSPACE_INACTIVE events. onEvent() - submits telemetry events to the telemetry server, such as WORKSPACE_USED or WORKSPACE_STARTED . increaseDuration() - increases the duration of a current event rather than sending many events in a small frame of time. The following sections cover: Creating a telemetry server to echo events to standard output. Extending the OpenShift Dev Spaces telemetry client and implementing a user's custom backend. Creating a plugin.yaml file representing a Dev Workspace plugin for the custom backend. Specifying of a location of a custom plugin to OpenShift Dev Spaces by setting the workspacesDefaultPlugins attribute from the CheCluster custom resource. 3.7.2.1. Getting started This document describes the steps required to extend the OpenShift Dev Spaces telemetry system to communicate with to a custom backend: Creating a server process that receives events Extending OpenShift Dev Spaces libraries to create a backend that sends events to the server Packaging the telemetry backend in a container and deploying it to an image registry Adding a plugin for your backend and instructing OpenShift Dev Spaces to load the plugin in your Dev Workspaces A finished example of the telemetry backend is available here . 3.7.2.2. Creating a server that receives events For demonstration purposes, this example shows how to create a server that receives events from our telemetry plugin and writes them to standard output. For production use cases, consider integrating with a third-party telemetry system (for example, Segment, Woopra) rather than creating your own telemetry server. 
In this case, use your provider's APIs to send events from your custom backend to their system. The following Go code starts a server on port 8080 and writes events to standard output: Example 3.20. main.go package main import ( "io/ioutil" "net/http" "go.uber.org/zap" ) var logger *zap.SugaredLogger func event(w http.ResponseWriter, req *http.Request) { switch req.Method { case "GET": logger.Info("GET /event") case "POST": logger.Info("POST /event") } body, err := req.GetBody() if err != nil { logger.With("err", err).Info("error getting body") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With("error", err).Info("error reading response body") return } logger.With("body", string(responseBody)).Info("got event") } func activity(w http.ResponseWriter, req *http.Request) { switch req.Method { case "GET": logger.Info("GET /activity, doing nothing") case "POST": logger.Info("POST /activity") body, err := req.GetBody() if err != nil { logger.With("error", err).Info("error getting body") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With("error", err).Info("error reading response body") return } logger.With("body", string(responseBody)).Info("got activity") } } func main() { log, _ := zap.NewProduction() logger = log.Sugar() http.HandleFunc("/event", event) http.HandleFunc("/activity", activity) logger.Info("Added Handlers") logger.Info("Starting to serve") http.ListenAndServe(":8080", nil) } Create a container image based on this code and expose it as a deployment in OpenShift in the openshift-devspaces project. The code for the example telemetry server is available at telemetry-server-example . To deploy the telemetry server, clone the repository and build the container: Both manifest_with_ingress.yaml and manifest_with_route contain definitions for a Deployment and Service. The former also defines a Kubernetes Ingress, while the latter defines an OpenShift Route. In the manifest file, replace the image and host fields to match the image you pushed, and the public hostname of your OpenShift cluster. Then run: 3.7.2.3. Creating the back-end project Note For fast feedback when developing, it is recommended to do development inside a Dev Workspace. This way, you can run the application in a cluster and receive events from the front-end telemetry plugin. Maven Quarkus project scaffolding: Remove the files under src/main/java/mygroup and src/test/java/mygroup . Consult the GitHub packages for the latest version and Maven coordinates of backend-base . Add the following dependencies to your pom.xml : Example 3.21. pom.xml <!-- Required --> <dependency> <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId> <artifactId>backend-base</artifactId> <version>LATEST VERSION FROM STEP</version> </dependency> <!-- Used to make http requests to the telemetry server --> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-jackson</artifactId> </dependency> Create a personal access token with read:packages permissions to download the org.eclipse.che.incubator.workspace-telemetry:backend-base dependency from GitHub packages . Add your GitHub username, personal access token and che-incubator repository details in your ~/.m2/settings.xml file: Example 3.22. 
settings.xml <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <servers> <server> <id>che-incubator</id> <username>YOUR GITHUB USERNAME</username> <password>YOUR GITHUB TOKEN</password> </server> </servers> <profiles> <profile> <id>github</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>central</id> <url>https://repo1.maven.org/maven2</url> <releases><enabled>true</enabled></releases> <snapshots><enabled>false</enabled></snapshots> </repository> <repository> <id>che-incubator</id> <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url> </repository> </repositories> </profile> </profiles> </settings> 3.7.2.4. Creating a concrete implementation of AnalyticsManager and adding specialized logic Create two files in your project under src/main/java/mygroup : MainConfiguration.java - contains configuration provided to AnalyticsManager . AnalyticsManager.java - contains logic specific to the telemetry system. Example 3.23. MainConfiguration.java package org.my.group; import java.util.Optional; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration; import org.eclipse.microprofile.config.inject.ConfigProperty; @Dependent @Alternative public class MainConfiguration extends BaseConfiguration { @ConfigProperty(name = "welcome.message") 1 Optional<String> welcomeMessage; 2 } 1 A MicroProfile configuration annotation is used to inject the welcome.message configuration. For more details on how to set configuration properties specific to your backend, see the Quarkus Configuration Reference Guide . Example 3.24. AnalyticsManager.java package org.my.group; import java.util.HashMap; import java.util.Map; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import javax.inject.Inject; import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager; import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent; import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder; import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder; import org.eclipse.microprofile.rest.client.inject.RestClient; import org.slf4j.Logger; import static org.slf4j.LoggerFactory.getLogger; @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { private static final Logger LOG = getLogger(AbstractAnalyticsManager.class); public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) { super(mainConfiguration, devworkspaceFinder, usernameFinder); mainConfiguration.welcomeMessage.ifPresentOrElse( 1 (str) -> LOG.info("The welcome message is: {}", str), () -> LOG.info("No welcome message provided") ); } @Override public boolean isEnabled() { return true; } @Override public void destroy() {} @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { LOG.info("The received event is: {}", event); 2 } @Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { } @Override public void onActivity() {} } 1 Log the welcome message if it was provided. 
2 Log the event received from the front-end plugin. Since org.my.group.AnalyticsManager and org.my.group.MainConfiguration are alternative beans, specify them using the quarkus.arc.selected-alternatives property in src/main/resources/application.properties . Example 3.25. application.properties 3.7.2.5. Running the application within a Dev Workspace Set the DEVWORKSPACE_TELEMETRY_BACKEND_PORT environment variable in the Dev Workspace. Here, the value is set to 4167 . Restart the Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard. Run the following command within a Dev Workspace's terminal window to start the application. Use the --settings flag to specify path to the location of the settings.xml file that contains the GitHub access token. The application now receives telemetry events through port 4167 from the front-end plugin. Verification steps Verify that the following output is logged: To verify that the onEvent() method of AnalyticsManager receives events from the front-end plugin, press the l key to disable Quarkus live coding and edit any file within the IDE. The following output should be logged: 3.7.2.6. Implementing isEnabled() For the purposes of the example, this method always returns true whenever it is called. Example 3.26. AnalyticsManager.java @Override public boolean isEnabled() { return true; } It is possible to put more complex logic in isEnabled() . For example, the hosted OpenShift Dev Spaces Woopra backend checks that a configuration property exists before determining if the backend is enabled. 3.7.2.7. Implementing onEvent() onEvent() sends the event received by the backend to the telemetry system. For the example application, it sends an HTTP POST payload to the /event endpoint from the telemetry server. 3.7.2.7.1. Sending a POST request to the example telemetry server For the following example, the telemetry server application is deployed to OpenShift at the following URL: http://little-telemetry-server-che.apps-crc.testing , where apps-crc.testing is the ingress domain name of the OpenShift cluster. Set up the RESTEasy REST Client by creating TelemetryService.java Example 3.27. TelemetryService.java package org.my.group; import java.util.Map; import javax.ws.rs.Consumes; import javax.ws.rs.POST; import javax.ws.rs.Path; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient public interface TelemetryService { @POST @Path("/event") 1 @Consumes(MediaType.APPLICATION_JSON) Response sendEvent(Map<String, Object> payload); } 1 The endpoint to make the POST request to. Specify the base URL for TelemetryService in the src/main/resources/application.properties file: Example 3.28. application.properties Inject TelemetryService into AnalyticsManager and send a POST request in onEvent() Example 3.29. AnalyticsManager.java @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { @Inject @RestClient TelemetryService telemetryService; ... @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { Map<String, Object> payload = new HashMap<String, Object>(properties); payload.put("event", event); telemetryService.sendEvent(payload); } This sends an HTTP request to the telemetry server and automatically delays identical events for a small period of time. The default duration is 1500 milliseconds. 3.7.2.8. 
Implementing increaseDuration() Many telemetry systems recognize event duration. The AbstractAnalyticsManager merges similar events that happen in the same frame of time into one event. This implementation of increaseDuration() is a no-op. This method uses the APIs of the user's telemetry provider to alter the event or event properties to reflect the increased duration of an event. Example 3.30. AnalyticsManager.java @Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {} 3.7.2.9. Implementing onActivity() Set an inactive timeout limit, and use onActivity() to send a WORKSPACE_INACTIVE event if the last event time is longer than the timeout. Example 3.31. AnalyticsManager.java public class AnalyticsManager extends AbstractAnalyticsManager { ... private long inactiveTimeLimit = 60000 * 3; ... @Override public void onActivity() { if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) { onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } } 3.7.2.10. Implementing destroy() When destroy() is called, send a WORKSPACE_STOPPED event and shutdown any resources such as connection pools. Example 3.32. AnalyticsManager.java @Override public void destroy() { onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } Running mvn quarkus:dev as described in Section 3.7.2.5, "Running the application within a Dev Workspace" and terminating the application with Ctrl + C sends a WORKSPACE_STOPPED event to the server. 3.7.2.11. Packaging the Quarkus application See the Quarkus documentation for the best instructions to package the application in a container. Build and push the container to a container registry of your choice. 3.7.2.11.1. Sample Dockerfile for building a Quarkus image running with JVM Example 3.33. Dockerfile.jvm FROM registry.access.redhat.com/ubi8/openjdk-11:1.11 ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/ COPY --chown=185 target/quarkus-app/*.jar /deployments/ COPY --chown=185 target/quarkus-app/app/ /deployments/app/ COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/ EXPOSE 8080 USER 185 ENTRYPOINT ["java", "-Dquarkus.http.host=0.0.0.0", "-Djava.util.logging.manager=org.jboss.logmanager.LogManager", "-Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}", "-jar", "/deployments/quarkus-run.jar"] To build the image, run: 3.7.2.11.2. Sample Dockerfile for building a Quarkus native image Example 3.34. Dockerfile.native FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 WORKDIR /work/ RUN chown 1001 /work \ && chmod "g+rwX" /work \ && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=USDDEVWORKSPACE_TELEMETRY_BACKEND_PORT}"] To build the image, run: 3.7.2.12. Creating a plugin.yaml for your plugin Create a plugin.yaml devfile v2 file representing a Dev Workspace plugin that runs your custom backend in a Dev Workspace Pod. For more information about devfile v2, see Devfile v2 documentation Example 3.35. 
plugin.yaml schemaVersion: 2.1.0 metadata: name: devworkspace-telemetry-backend-plugin version: 0.0.1 description: A Demo telemetry backend displayName: Devworkspace Telemetry Backend components: - name: devworkspace-telemetry-backend-plugin attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167' container: image: YOUR IMAGE 1 env: - name: WELCOME_MESSAGE 2 value: 'hello world!' 1 Specify the container image built from Section 3.7.2.11, "Packaging the Quarkus application" . 2 Set the value for the welcome.message optional configuration property from Example 4. Typically, the user deploys this file to a corporate web server. This guide demonstrates how to create an Apache web server on OpenShift and host the plugin there. Create a ConfigMap object that references the new plugin.yaml file. Create a deployment, a service, and a route to expose the web server. The deployment references this ConfigMap object and places it in the /var/www/html directory. Example 3.36. manifest.yaml kind: Deployment apiVersion: apps/v1 metadata: name: apache spec: replicas: 1 selector: matchLabels: app: apache template: metadata: labels: app: apache spec: volumes: - name: plugin-yaml configMap: name: telemetry-plugin-yaml defaultMode: 420 containers: - name: apache image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest' ports: - containerPort: 8080 protocol: TCP resources: {} volumeMounts: - name: plugin-yaml mountPath: /var/www/html strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 25% revisionHistoryLimit: 10 progressDeadlineSeconds: 600 --- kind: Service apiVersion: v1 metadata: name: apache spec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apache type: ClusterIP --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: apache spec: host: apache-che.apps-crc.testing to: kind: Service name: apache weight: 100 port: targetPort: 8080 wildcardPolicy: None Verification steps After the deployment has started, confirm that plugin.yaml is available in the web server: 3.7.2.13. Specifying the telemetry plugin in a Dev Workspace Add the following to the components field of an existing Dev Workspace: Start the Dev Workspace from the OpenShift Dev Spaces dashboard. Verification steps Verify that the telemetry plugin container is running in the Dev Workspace pod. Here, this is verified by checking the Workspace view within the editor. Edit files within the editor and observe their events in the example telemetry server's logs. 3.7.2.14. Applying the telemetry plugin for all Dev Workspaces Set the telemetry plugin as a default plugin. Default plugins are applied on Dev Workspace startup for new and existing Dev Workspaces. Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . 1 The editor identification to set the default plugins for. 2 List of URLs to devfile v2 plugins. Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . Verification steps Start a new or existing Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard. Verify that the telemetry plugin is working by following the verification steps for Section 3.7.2.13, "Specifying the telemetry plugin in a Dev Workspace" . 3.7.2.15. Configuring server logging It is possible to fine-tune the log levels of individual loggers available in the OpenShift Dev Spaces server. 
The log level of the whole OpenShift Dev Spaces server is configured globally using the cheLogLevel configuration property of the Operator. See Section 3.1.3, " CheCluster Custom Resource fields reference" . To set the global log level in installations not managed by the Operator, specify the CHE_LOG_LEVEL environment variable in the che ConfigMap. It is possible to configure the log levels of the individual loggers in the OpenShift Dev Spaces server using the CHE_LOGGER_CONFIG environment variable. 3.7.2.15.1. Configuring log levels Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: " <key1=value1,key2=value2> " 1 1 Comma-separated list of key-value pairs, where keys are the names of the loggers as seen in the OpenShift Dev Spaces server log output and values are the required log levels. Example 3.37. Configuring debug mode for the WorkspaceManager spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG" Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.7.2.15.2. Logger naming The names of the loggers follow the class names of the internal server classes that use those loggers. 3.7.2.15.3. Logging HTTP traffic Procedure To log the HTTP traffic between the OpenShift Dev Spaces server and the API server of the Kubernetes or OpenShift cluster, configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: "che.infra.request-logging=TRACE" Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.7.2.16. Collecting logs using dsc An installation of Red Hat OpenShift Dev Spaces consists of several containers running in the OpenShift cluster. While it is possible to manually collect logs from each running container, dsc provides commands which automate the process. The following commands are available to collect Red Hat OpenShift Dev Spaces logs from the OpenShift cluster using the dsc tool: dsc server:logs Collects existing Red Hat OpenShift Dev Spaces server logs and stores them in a directory on the local machine. By default, logs are downloaded to a temporary directory on the machine. However, this can be overridden by specifying the -d parameter. For example, to download OpenShift Dev Spaces logs to the /home/user/che-logs/ directory, use the command dsc server:logs -d /home/user/che-logs/ When run, dsc server:logs prints a message in the console specifying the directory that will store the log files: If Red Hat OpenShift Dev Spaces is installed in a non-default project, dsc server:logs requires the -n <NAMESPACE> parameter, where <NAMESPACE> is the OpenShift project in which Red Hat OpenShift Dev Spaces was installed. For example, to get logs from OpenShift Dev Spaces in the my-namespace project, use the command dsc server:logs -n my-namespace dsc server:deploy Logs are automatically collected during the OpenShift Dev Spaces installation when installed using dsc . As with dsc server:logs , the directory in which logs are stored can be specified using the -d parameter.
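For reference, the two documented options can be combined in a single invocation; the directory and project name below are only illustrative values:

# Collect OpenShift Dev Spaces server logs from the my-namespace project
# and store them in /home/user/che-logs/ on the local machine.
dsc server:logs -d /home/user/che-logs/ -n my-namespace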
Additional resources " dsc reference documentation " 3.7.3. Monitoring the Dev Workspace Operator You can configure the OpenShift in-cluster monitoring stack to scrape metrics exposed by the Dev Workspace Operator. 3.7.3.1. Collecting Dev Workspace Operator metrics To use the in-cluster Prometheus instance to collect, store, and query metrics about the Dev Workspace Operator: Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The devworkspace-controller-metrics Service is exposing metrics on port 8443 . This is preconfigured by default. Procedure Create the ServiceMonitor for detecting the Dev Workspace Operator metrics Service. Example 3.38. ServiceMonitor apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: devworkspace-controller namespace: openshift-devspaces 1 spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token interval: 10s 2 port: metrics scheme: https tlsConfig: insecureSkipVerify: true namespaceSelector: matchNames: - openshift-operators selector: matchLabels: app.kubernetes.io/name: devworkspace-controller 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The rate at which a target is scraped. Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces . Verification For a fresh installation of OpenShift Dev Spaces, generate metrics by creating a OpenShift Dev Spaces workspace from the Dashboard. In the Administrator view of the OpenShift web console, go to Observe Metrics . Run a PromQL query to confirm that the metrics are available. For example, enter devworkspace_started_total and click Run queries . For more metrics, see Section 3.7.3.2, "Dev Workspace-specific metrics" . Tip To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors: Get the name of the Prometheus pod: USD oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}' Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the step: USD oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring Additional resources Querying Prometheus Prometheus metric types 3.7.3.2. Dev Workspace-specific metrics The following tables describe the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics Service. Table 3.40. Metrics Name Type Description Labels devworkspace_started_total Counter Number of Dev Workspace starting events. source , routingclass devworkspace_started_success_total Counter Number of Dev Workspaces successfully entering the Running phase. source , routingclass devworkspace_fail_total Counter Number of failed Dev Workspaces. source , reason devworkspace_startup_time Histogram Total time taken to start a Dev Workspace, in seconds. source , routingclass Table 3.41. Labels Name Description Values source The controller.devfile.io/devworkspace-source label of the Dev Workspace. string routingclass The spec.routingclass of the Dev Workspace. "basic|cluster|cluster-tls|web-terminal" reason The workspace startup failure reason. "BadRequest|InfrastructureFailure|Unknown" Table 3.42. 
Startup failure reasons Name Description BadRequest Startup failure due to an invalid devfile used to create a Dev Workspace. InfrastructureFailure Startup failure due to the following errors: CreateContainerError , RunContainerError , FailedScheduling , FailedMount . Unknown Unknown failure reason. 3.7.3.3. Viewing Dev Workspace Operator metrics from an OpenShift web console dashboard After configuring the in-cluster Prometheus instance to collect Dev Workspace Operator metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The in-cluster Prometheus instance is collecting metrics. See Section 3.7.3.1, "Collecting Dev Workspace Operator metrics" . Procedure Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label. Note The command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat's QE department, and they have not yet been proven by a wide user group. Please, use this information cautiously. Note The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console. Verification steps In the Administrator view of the OpenShift web console, go to Observe Dashboards . Go to Dashboard Dev Workspace Operator and verify that the dashboard panels contain data. 3.7.3.4. Dashboard for the Dev Workspace Operator The OpenShift web console custom dashboard is based on Grafana 6.x and displays the following metrics from the Dev Workspace Operator. Note Not all features for Grafana 6.x dashboards are supported as an OpenShift web console dashboard. 3.7.3.4.1. Dev Workspace metrics The Dev Workspace-specific metrics are displayed in the Dev Workspace Metrics panel. Figure 3.1. The Dev Workspace Metrics panel Average workspace start time The average workspace startup duration. Workspace starts The number of successful and failed workspace startups. Dev Workspace successes and failures A comparison between successful and failed Dev Workspace startups. Dev Workspace failure rate The ratio between the number of failed workspace startups and the number of total workspace startups. Dev Workspace startup failure reasons A pie chart that displays the distribution of workspace startup failures: BadRequest InfrastructureFailure Unknown 3.7.3.4.2. Operator metrics The Operator-specific metrics are displayed in the Operator Metrics panel. Figure 3.2. The Operator Metrics panel Webhooks in flight A comparison between the number of different webhook requests. Work queue depth The number of reconcile requests that are in the work queue. Memory Memory usage for the Dev Workspace controller and the Dev Workspace webhook server. Average reconcile counts per second (DWO) The average per-second number of reconcile counts for the Dev Workspace controller. 3.7.4. Monitoring Dev Spaces Server You can configure OpenShift Dev Spaces to expose JVM metrics such as JVM memory and class loading for OpenShift Dev Spaces Server. 3.7.4.1.
Enabling and exposing OpenShift Dev Spaces Server metrics OpenShift Dev Spaces exposes the JVM metrics on port 8087 of the che-host Service. You can configure this behaviour. Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: components: metrics: enable: <boolean> 1 1 true to enable, false to disable. 3.7.4.2. Collecting OpenShift Dev Spaces Server metrics with Prometheus To use the in-cluster Prometheus instance to collect, store, and query JVM metrics for OpenShift Dev Spaces Server: Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . OpenShift Dev Spaces is exposing metrics on port 8087 . See Enabling and exposing OpenShift Dev Spaces server JVM metrics . Procedure Create the ServiceMonitor for detecting the OpenShift Dev Spaces JVM metrics Service. Example 3.39. ServiceMonitor apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: che-host namespace: openshift-devspaces 1 spec: endpoints: - interval: 10s 2 port: metrics scheme: http namespaceSelector: matchNames: - openshift-devspaces 3 selector: matchLabels: app.kubernetes.io/name: devspaces 1 3 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The rate at which a target is scraped. Create a Role and RoleBinding to allow Prometheus to view the metrics. Example 3.40. Role kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: prometheus-k8s namespace: openshift-devspaces 1 rules: - verbs: - get - list - watch apiGroups: - '' resources: - services - endpoints - pods 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . Example 3.41. RoleBinding kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: view-devspaces-openshift-monitoring-prometheus-k8s namespace: openshift-devspaces 1 subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces . USD oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true Verification In the Administrator view of the OpenShift web console, go to Observe Metrics . Run a PromQL query to confirm that the metrics are available. For example, enter process_uptime_seconds{job="che-host"} and click Run queries . Tip To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors: Get the name of the Prometheus pod: USD oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}' Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the step: USD oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring Additional resources Querying Prometheus Prometheus metric types 3.7.4.3. 
Viewing OpenShift Dev Spaces Server from an OpenShift web console dashboard After configuring the in-cluster Prometheus instance to collect OpenShift Dev Spaces Server JVM metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The in-cluster Prometheus instance is collecting metrics. See Section 3.7.4.2, "Collecting OpenShift Dev Spaces Server metrics with Prometheus" . Procedure Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label. Note The command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat's QE department, and they have not yet been proven by a wide user group. Please, use this information cautiously. Note The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console. Verification steps In the Administrator view of the OpenShift web console, go to Observe Dashboards . Go to Dashboard Dev Workspace Operator and verify that the dashboard panels contain data. Figure 3.3. Quick Facts Figure 3.4. JVM Memory Figure 3.5. JVM Misc Figure 3.6. JVM Memory Pools (heap) Figure 3.7. JVM Memory Pools (Non-Heap) Figure 3.8. Garbage Collection Figure 3.9. Class loading Figure 3.10. Buffer Pools 3.8. Configuring networking Section 3.8.1, "Configuring network policies" Section 3.8.2, "Configuring Dev Spaces hostname" Section 3.8.3, "Importing untrusted TLS certificates to Dev Spaces" Section 3.8.4, "Adding labels and annotations" 3.8.1. Configuring network policies By default, all Pods in a OpenShift cluster can communicate with each other even if they are in different namespaces. In the context of OpenShift Dev Spaces, this makes it possible for a workspace Pod in one user project to send traffic to another workspace Pod in a different user project. For security, multitenant isolation could be configured by using NetworkPolicy objects to restrict all incoming communication to Pods in a user project. However, Pods in the OpenShift Dev Spaces project must be able to communicate with Pods in user projects. Prerequisites The OpenShift cluster has network restrictions such as multitenant isolation. Procedure Apply the allow-from-openshift-devspaces NetworkPolicy to each user project. The allow-from-openshift-devspaces NetworkPolicy allows incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user project. Example 3.42. allow-from-openshift-devspaces.yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-devspaces spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-devspaces 1 podSelector: {} 2 policyTypes: - Ingress 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The empty podSelector selects all Pods in the project. OPTIONAL: In case you applied Configuring multitenant isolation with network policy , you also must apply allow-from-openshift-apiserver and allow-from-workspaces-namespaces NetworkPolicies to openshift-devspaces . 
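For clusters that already contain user projects, the allow-from-openshift-devspaces policy from Example 3.42 can be rolled out with a short loop. The following is a minimal sketch rather than part of the official procedure: it assumes the policy is saved as allow-from-openshift-devspaces.yaml and that user projects carry the app.kubernetes.io/component=workspaces-namespace label shown in Example 3.44; projects created later still need the policy applied to them. # Minimal sketch: apply Example 3.42 to every existing user project. # The label selector is an assumption based on the workspaces-namespace # label used in Example 3.44; adjust it if your user projects are # labeled differently. for project in $(oc get namespaces \ --selector=app.kubernetes.io/component=workspaces-namespace \ --output=jsonpath='{.items[*].metadata.name}'); do oc apply --namespace="$project" --filename=allow-from-openshift-devspaces.yaml done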
The allow-from-openshift-apiserver NetworkPolicy allows incoming traffic from openshift-apiserver namespace to the devworkspace-webhook-server enabling webhooks. The allow-from-workspaces-namespaces NetworkPolicy allows incoming traffic from each user project to che-gateway pod. Example 3.43. allow-from-openshift-apiserver.yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-apiserver namespace: openshift-devspaces 1 spec: podSelector: matchLabels: app.kubernetes.io/name: devworkspace-webhook-server 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-apiserver policyTypes: - Ingress 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The podSelector only selects devworkspace-webhook-server pods Example 3.44. allow-from-workspaces-namespaces.yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-workspaces-namespaces namespace: openshift-devspaces 1 spec: podSelector: {} 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: app.kubernetes.io/component: workspaces-namespace policyTypes: - Ingress 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The empty podSelector selects all pods in the OpenShift Dev Spaces namespace. Section 3.2, "Configuring projects" Network isolation Configuring multitenant isolation with network policy 3.8.2. Configuring Dev Spaces hostname This procedure describes how to configure OpenShift Dev Spaces to use custom hostname. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The certificate and the private key files are generated. Important To generate the pair of a private key and certificate, the same certification authority (CA) must be used as for other OpenShift Dev Spaces hosts. Important Ask a DNS provider to point the custom hostname to the cluster ingress. Procedure Pre-create a project for OpenShift Dev Spaces: Create a TLS secret: 1 The TLS secret name 2 A file with the private key 3 A file with the certificate Add the required labels to the secret: 1 The TLS secret name Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . 1 Custom Red Hat OpenShift Dev Spaces server hostname 2 The TLS secret name If OpenShift Dev Spaces has been already deployed, wait until the rollout of all OpenShift Dev Spaces components finishes. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.8.3. Importing untrusted TLS certificates to Dev Spaces OpenShift Dev Spaces components communications with external services are encrypted with TLS. They require TLS certificates signed by trusted Certificate Authorities (CA). Therefore, you must import into OpenShift Dev Spaces all untrusted CA chains in use by an external service such as: A proxy An identity provider (OIDC) A source code repositories provider (Git) OpenShift Dev Spaces uses labeled config maps in OpenShift Dev Spaces project as sources for TLS certificates. The config maps can have an arbitrary amount of keys with a random amount of certificates each. 
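As an illustration of what such a labeled source config map looks like, the following minimal sketch creates one from a concatenated PEM bundle and applies the labels that the verification step later in this section selects on. The bundle file name custom-ca-certificates.pem matches the procedure below; everything else is only an example. # Minimal sketch, assuming the concatenated CA bundle is saved as # custom-ca-certificates.pem. The labels mirror the selector used in # the verification step (app.kubernetes.io/component=ca-bundle, # app.kubernetes.io/part-of=che.eclipse.org). oc create configmap custom-ca-certificates \ --from-file=custom-ca-certificates.pem \ --namespace=openshift-devspaces oc label configmap custom-ca-certificates \ app.kubernetes.io/component=ca-bundle \ app.kubernetes.io/part-of=che.eclipse.org \ --namespace=openshift-devspaces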
Note When an OpenShift cluster contains cluster-wide trusted CA certificates added through the cluster-wide-proxy configuration , OpenShift Dev Spaces Operator detects them and automatically injects them into a config map with the config.openshift.io/inject-trusted-cabundle="true" label. Based on this annotation, OpenShift automatically injects the cluster-wide trusted CA certificates inside the ca-bundle.crt key of the config map. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The openshift-devspaces project exists. For each CA chain to import: the root CA and intermediate certificates, in PEM format, in a ca-cert-for-devspaces- <count> .pem file. Procedure Concatenate all CA chains PEM files to import, into the custom-ca-certificates.pem file, and remove the return character that is incompatible with the Java truststore. Create the custom-ca-certificates config map with the required TLS certificates: Label the custom-ca-certificates config map: Deploy OpenShift Dev Spaces if it hasn't been deployed before. Otherwise, wait until the rollout of OpenShift Dev Spaces components finishes. Restart running workspaces for the changes to take effect. Verification steps Verify that the config map contains your custom CA certificates. This command returns your custom CA certificates in PEM format: USD oc get configmap \ --namespace=openshift-devspaces \ --output='jsonpath={.items[0:].data.custom-ca-certificates\.pem}' \ --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org Verify OpenShift Dev Spaces pod contains a volume mounting the ca-certs-merged config map: USD oc get pod \ --selector=app.kubernetes.io/component=devspaces \ --output='jsonpath={.items[0].spec.volumes[0:].configMap.name}' \ --namespace=openshift-devspaces \ | grep ca-certs-merged Verify the OpenShift Dev Spaces server container has your custom CA certificates. This command returns your custom CA certificates in PEM format: USD oc exec -t deploy/devspaces \ --namespace=openshift-devspaces \ -- cat /public-certs/custom-ca-certificates.pem Verify in the OpenShift Dev Spaces server logs that the imported certificates count is not null: USD oc logs deploy/devspaces --namespace=openshift-devspaces \ | grep custom-ca-certificates.pem List the SHA256 fingerprints of your certificates: USD for certificate in ca-cert*.pem ; do openssl x509 -in USDcertificate -digest -sha256 -fingerprint -noout | cut -d= -f2; done Verify that OpenShift Dev Spaces server Java truststore contains certificates with the same fingerprint: USD oc exec -t deploy/devspaces --namespace=openshift-devspaces -- \ keytool -list -keystore /home/user/cacerts \ | grep --after-context=1 custom-ca-certificates.pem Start a workspace, get the project name in which it has been created: <workspace_namespace> , and wait for the workspace to be started. Verify that the che-trusted-ca-certs config map contains your custom CA certificates. 
This command returns your custom CA certificates in PEM format: USD oc get configmap che-trusted-ca-certs \ --namespace= <workspace_namespace> \ --output='jsonpath={.data.custom-ca-certificates\.custom-ca-certificates\.pem}' Verify that the workspace pod mounts the che-trusted-ca-certs config map: USD oc get pod \ --namespace= <workspace_namespace> \ --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' \ --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' \ | grep che-trusted-ca-certs Verify that the universal-developer-image container (or the container defined in the workspace devfile) mounts the che-trusted-ca-certs volume: USD oc get pod \ --namespace= <workspace_namespace> \ --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' \ --output='jsonpath={.items[0:].spec.containers[0:]}' \ | jq 'select (.volumeMounts[].name == "che-trusted-ca-certs") | .name' Get the workspace pod name <workspace_pod_name> : USD oc get pod \ --namespace= <workspace_namespace> \ --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' \ --output='jsonpath={.items[0:].metadata.name}' \ Verify that the workspace container has your custom CA certificates. This command returns your custom CA certificates in PEM format: USD oc exec <workspace_pod_name> \ --namespace= <workspace_namespace> \ -- cat /public-certs/custom-ca-certificates.custom-ca-certificates.pem Additional resources Section 3.5.3, "Git with self-signed certificates" . 3.8.4. Adding labels and annotations 3.8.4.1. Configuring OpenShift Route to work with Router Sharding You can configure labels, annotations, and domains for OpenShift Route to work with Router Sharding . Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . dsc . See: Section 1.2, "Installing the dsc management tool" . Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: networking: labels: <labels> 1 domain: <domain> 2 annotations: <annotations> 3 1 An unstructured key value map of labels that the target ingress controller uses to filter the set of Routes to service. 2 The DNS name serviced by the target ingress controller. 3 An unstructured key value map stored with a resource. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.9. Configuring storage Warning OpenShift Dev Spaces does not support the Network File System (NFS) protocol. Section 3.9.1, "Configuring storage classes" Section 3.9.2, "Configuring the storage strategy" Section 3.9.3, "Configuring storage sizes" 3.9.1. Configuring storage classes To configure OpenShift Dev Spaces to use a configured infrastructure storage, install OpenShift Dev Spaces using storage classes. This is especially useful when you want to bind a persistent volume provided by a non-default provisioner. OpenShift Dev Spaces has one component that requires persistent volumes to store data: A OpenShift Dev Spaces workspace. OpenShift Dev Spaces workspaces store source code using volumes, for example /projects volume. Note OpenShift Dev Spaces workspaces source code is stored in the persistent volume only if a workspace is not ephemeral. Persistent volume claims facts: OpenShift Dev Spaces does not create persistent volumes in the infrastructure. 
OpenShift Dev Spaces uses persistent volume claims (PVC) to mount persistent volumes. The Section 1.3.1.2, "Dev Workspace operator" creates persistent volume claims. Define a storage class name in the OpenShift Dev Spaces configuration to use the storage classes feature in the OpenShift Dev Spaces PVC. Procedure Use CheCluster Custom Resource definition to define storage classes: Define storage class names: configure the CheCluster Custom Resource, and install OpenShift Dev Spaces. See Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" . spec: devEnvironments: storage: perUserStrategyPvcConfig: claimSize: <claim_size> 1 storageClass: <storage_class_name> 2 perWorkspaceStrategyPvcConfig: claimSize: <claim_size> 3 storageClass: <storage_class_name> 4 pvcStrategy: <pvc_strategy> 5 1 3 Persistent Volume Claim size. 2 4 Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. 5 Persistent volume claim strategy. The supported strategies are: per-user (all workspaces Persistent Volume Claims in one volume), per-workspace (each workspace is given its own individual Persistent Volume Claim) and ephemeral (non-persistent storage where local changes will be lost when the workspace is stopped.) Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.9.2. Configuring the storage strategy OpenShift Dev Spaces can be configured to provide persistent or non-persistent storage to workspaces by selecting a storage strategy. The selected storage strategy will be applied to all newly created workspaces by default. Users can opt for a non-default storage strategy for their workspace in their devfile or through the URL parameter . Available storage strategies: per-user : Use a single PVC for all workspaces created by a user. per-workspace : Each workspace is given its own PVC. ephemeral : Non-persistent storage; any local changes will be lost when the workspace is stopped. The default storage strategy used in OpenShift Dev Spaces is per-user . Procedure Set the pvcStrategy field in the Che Cluster Custom Resource to per-user , per-workspace or ephemeral . Note You can set this field at installation. See Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" . You can update this field on the command line. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: devEnvironments: storage: pvc: pvcStrategy: 'per-user' 1 1 The available storage strategies are per-user , per-workspace and ephemeral . 3.9.3. Configuring storage sizes You can configure the persistent volume claim (PVC) size using the per-user or per-workspace storage strategies. You must specify the PVC sizes in the CheCluster Custom Resource in the format of a Kubernetes resource quantity . For more details on the available storage strategies, see this page . Default persistent volume claim sizes: per-user: 10Gi per-workspace: 5Gi Procedure Set the appropriate claimSize field for the desired storage strategy in the Che Cluster Custom Resource. Note You can set this field at installation. See Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" . You can update this field on the command line. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . 
spec: devEnvironments: storage: pvc: pvcStrategy: ' <strategy_name> ' 1 perUserStrategyPvcConfig: 2 claimSize: <resource_quantity> 3 perWorkspaceStrategyPvcConfig: 4 claimSize: <resource_quantity> 5 1 Select the storage strategy: per-user or per-workspace or ephemeral . Note: the ephemeral storage strategy does not use persistent storage, therefore you cannot configure its storage size or other PVC-related attributes. 2 4 Specify a claim size on the line or omit the line to set the default claim size value. The specified claim size is only used when you select this storage strategy. 3 5 The claim size must be specified as a Kubernetes resource quantity . The available quantity units include: Ei , Pi , Ti , Gi , Mi and Ki . 3.10. Configuring dashboard Section 3.10.1, "Configuring getting started samples" Section 3.10.2, "Configuring editors definitions" Section 3.10.3, "Customizing OpenShift Eclipse Che ConsoleLink icon" 3.10.1. Configuring getting started samples This procedure describes how to configure OpenShift Dev Spaces Dashboard to display custom samples. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . Procedure Create a JSON file with the samples configuration. The file must contain an array of objects, where each object represents a sample. cat > my-samples.json <<EOF [ { "displayName": " <display_name> ", 1 "description": " <description> ", 2 "tags": <tags> , 3 "url": " <url> ", 4 "icon": { "base64data": " <base64data> ", 5 "mediatype": " <mediatype> " 6 } } ] EOF 1 The display name of the sample. 2 The description of the sample. 3 The JSON array of tags, for example, ["java", "spring"] . 4 The URL to the repository containing the devfile. 5 The base64-encoded data of the icon. 6 The media type of the icon. For example, image/png . Create a ConfigMap with the samples configuration: oc create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces Add the required labels to the ConfigMap: oc label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces Refresh the OpenShift Dev Spaces Dashboard page to see the new samples. 3.10.2. Configuring editors definitions Learn how to configure OpenShift Dev Spaces editor definitions. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . Procedure Create the my-editor-definition-devfile.yaml YAML file with the editor definition configuration. Important Make sure you provide the actual values for publisher and version under metadata.attributes . They are used to construct the editor id along with editor name in the following format publisher/name/version . Below you can find the supported values, including optional ones: # Version of the devile schema schemaVersion: 2.2.2 # Meta information of the editor metadata: # (MANDATORY) The editor name # Must consist of lower case alphanumeric characters, '-' or '.' name: editor-name displayName: Display Name description: Run Editor Foo on top of Eclipse Che # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version. 
tags: - Tech-Preview # Additional attributes attributes: title: This is my editor # (MANDATORY) The publisher name publisher: publisher # (MANDATORY) The editor version version: version repository: https://github.com/editor/repository/ firstPublicationDate: '2024-01-01' iconMediatype: image/svg+xml iconData: | <icon-content> # List of editor components components: # Name of the component - name: che-code-injector # Configuration of devworkspace-related container container: # Image of the container image: 'quay.io/che-incubator/che-code:insiders' # The command to run in the dockerimage component instead of the default one provided in the image command: - /entrypoint-init-container.sh # (OPTIONAL) List of volumes mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # The path of the mount path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 256Mi # (OPTIONAL) The memory request of the container memoryRequest: 32Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # Name of the component - name: che-code-runtime-description # (OPTIONAL) Map of implementation-dependant free-form YAML attributes attributes: # The component within the architecture app.kubernetes.io/component: che-code-runtime # The name of a higher level application this one is part of app.kubernetes.io/part-of: che-code.eclipse.org # Defines a container component as a "container contribution". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component controller.devfile.io/container-contribution: true container: # Can be a dummy image because the component is expected to be injected into workspace dev component image: quay.io/devfile/universal-developer-image:latest # (OPTIONAL) List of volume mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is the is /<name> path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 1024Mi # (OPTIONAL) The memory request of the container memoryRequest: 256Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # (OPTIONAL) Environment variables used in this container env: - name: ENV_NAME value: value # Component endpoints endpoints: # Name of the editor - name: che-code # (OPTIONAL) Map of implementation-dependant string-based free-form attributes attributes: # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. it should be the URL used to access the editor in this context) type: main # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false. cookiesAuthEnabled: true # Defines an endpoint as "discoverable", meaning that a service should be created using the endpoint name (i.e. 
instead of generating a service name for all endpoints, this endpoint should be statically accessible) discoverable: false # Used to secure the endpoint with authorization on OpenShift, so that not anyone on the cluster can access the endpoint, the attribute enables authentication. urlRewriteSupported: true # Port number to be used within the container component targetPort: 3100 # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none) exposure: public # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process secure: true # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint protocol: https # Mandatory name that allows referencing the component from other elements - name: checode # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false volume: {ephemeral: true} # (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name events: # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container preStart: - init-container-command # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser postStart: - init-che-code-command # (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands commands: # Mandatory identifier that allows referencing this command - id: init-container-command apply: # Describes the component for the apply command component: che-code-injector # Mandatory identifier that allows referencing this command - id: init-che-code-command # CLI Command executed in an existing component container exec: # Describes component for the exec command component: che-code-runtime-description # The actual command-line string commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt 2>&1 &' Create a ConfigMap with the editor definition content: oc create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces Add the required labels to the ConfigMap: oc label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces Refresh the OpenShift Dev Spaces Dashboard page to see new available editor. 3.10.2.1. 
Retrieving the editor definition The editor definition is also served by the OpenShift Dev Spaces dashboard API from the following URL: https:// <openshift_dev_spaces_fqdn> /dashboard/api/editors/devfile?che-editor= <editor id> For the example from Section 3.10.2, "Configuring editors definitions" , the editor definition can be retrieved by accessing the following URL: https:// <openshift_dev_spaces_fqdn> /dashboard/api/editors/devfile?che-editor=publisher/editor-name/version Tip When retrieving the editor definition from within the OpenShift cluster, the OpenShift Dev Spaces dashboard API can be accessed via the dashboard service: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors/devfile?che-editor= <editor id> Additional resources Devfile documentation {editor-definition-samples-link} 3.10.3. Customizing OpenShift Eclipse Che ConsoleLink icon This procedure describes how to customize Red Hat OpenShift Dev Spaces ConsoleLink icon. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . Procedure Create a Secret: oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: devspaces-dashboard-customization namespace: openshift-devspaces annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /public/dashboard/assets/branding labels: app.kubernetes.io/component: devspaces-dashboard-secret app.kubernetes.io/part-of: che.eclipse.org data: loader.svg: <Base64_encoded_content_of_the_image> 1 type: Opaque EOF 1 Base64 encoding with disabled line wrapping. Wait until the rollout of devspaces-dashboard finishes. Additional resources Creating custom links in the web console 3.11. Managing identities and authorizations This section describes different aspects of managing identities and authorizations of Red Hat OpenShift Dev Spaces. 3.11.1. Configuring OAuth for Git providers Note To enable the experimental feature that forces a refresh of the personal access token on workspace startup in Red Hat OpenShift Dev Spaces, modify the Custom Resource configuration as follows: spec: components: cheServer: extraProperties: CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: "true" You can configure OAuth between OpenShift Dev Spaces and Git providers, enabling users to work with remote Git repositories: Section 3.11.1.1, "Configuring OAuth 2.0 for GitHub" Section 3.11.1.2, "Configuring OAuth 2.0 for GitLab" Configuring OAuth 2.0 for a Bitbucket Server or OAuth 2.0 for the Bitbucket Cloud Configuring OAuth 1.0 for a Bitbucket Server Section 3.11.1.6, "Configuring OAuth 2.0 for Microsoft Azure DevOps Services" 3.11.1.1. Configuring OAuth 2.0 for GitHub To enable users to work with a remote Git repository that is hosted on GitHub: Set up the GitHub OAuth App (OAuth 2.0). Apply the GitHub OAuth App Secret. 3.11.1.1.1. Setting up the GitHub OAuth App Set up a GitHub OAuth App using OAuth 2.0. Prerequisites You are logged in to GitHub. Procedure Go to https://github.com/settings/applications/new . Enter the following values: Application name : < application name > Homepage URL : https:// <openshift_dev_spaces_fqdn> / Authorization callback URL : https:// <openshift_dev_spaces_fqdn> /api/oauth/callback Click Register application . Click Generate new client secret . Copy and save the GitHub OAuth Client ID for use when applying the GitHub OAuth App Secret. Copy and save the GitHub OAuth Client Secret for use when applying the GitHub OAuth App Secret. 
Additional resources GitHub Docs: Creating an OAuth App 3.11.1.1.2. Applying the GitHub OAuth App Secret Prepare and apply the GitHub OAuth App Secret. Prerequisites Setting up the GitHub OAuth App is completed. The following values, which were generated when setting up the GitHub OAuth App, are prepared: GitHub OAuth Client ID GitHub OAuth Client Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: github-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: github che.eclipse.org/scm-server-endpoint: <github_server_url> 2 che.eclipse.org/scm-github-disable-subdomain-isolation: 'false' 3 type: Opaque stringData: id: <GitHub_OAuth_Client_ID> 4 secret: <GitHub_OAuth_Client_Secret> 5 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 This depends on the GitHub product your organization is using: When hosting repositories on GitHub.com or GitHub Enterprise Cloud, omit this line or enter the default https://github.com . When hosting repositories on GitHub Enterprise Server, enter the GitHub Enterprise Server URL. 3 If you are using GitHub Enterprise Server with a disabled subdomain isolation option, you must set the annotation to true , otherwise you can either omit the annotation or set it to false . 4 The GitHub OAuth Client ID . 5 The GitHub OAuth Client Secret . Apply the Secret: Verify in the output that the Secret is created. To configure OAuth 2.0 for another GitHub provider, you have to repeat the steps above and create a second GitHub OAuth Secret with a different name. 3.11.1.2. Configuring OAuth 2.0 for GitLab To enable users to work with a remote Git repository that is hosted using a GitLab instance: Set up the GitLab authorized application (OAuth 2.0). Apply the GitLab authorized application Secret. 3.11.1.2.1. Setting up the GitLab authorized application Set up a GitLab authorized application using OAuth 2.0. Prerequisites You are logged in to GitLab. Procedure Click your avatar and go to Edit profile Applications . Enter OpenShift Dev Spaces as the Name . Enter https:// <openshift_dev_spaces_fqdn> /api/oauth/callback as the Redirect URI . Check the Confidential and Expire access tokens checkboxes. Under Scopes , check the api , write_repository , and openid checkboxes. Click Save application . Copy and save the GitLab Application ID for use when applying the GitLab-authorized application Secret. Copy and save the GitLab Client Secret for use when applying the GitLab-authorized application Secret. Additional resources GitLab Docs: Authorized applications 3.11.1.2.2. Applying the GitLab-authorized application Secret Prepare and apply the GitLab-authorized application Secret. Prerequisites Setting up the GitLab authorized application is completed. The following values, which were generated when setting up the GitLab authorized application, are prepared: GitLab Application ID GitLab Client Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . 
Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: gitlab-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: gitlab che.eclipse.org/scm-server-endpoint: <gitlab_server_url> 2 type: Opaque stringData: id: <GitLab_Application_ID> 3 secret: <GitLab_Client_Secret> 4 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The GitLab server URL . Use https://gitlab.com for the SAAS version. 3 The GitLab Application ID . 4 The GitLab Client Secret . Apply the Secret: Verify in the output that the Secret is created. 3.11.1.3. Configuring OAuth 2.0 for a Bitbucket Server You can use OAuth 2.0 to enable users to work with a remote Git repository that is hosted on a Bitbucket Server: Set up an OAuth 2.0 application link on the Bitbucket Server. Apply an application link Secret for the Bitbucket Server. 3.11.1.3.1. Setting up an OAuth 2.0 application link on the Bitbucket Server Set up an OAuth 2.0 application link on the Bitbucket Server. Prerequisites You are logged in to the Bitbucket Server. Procedure Go to Administration > Applications > Application links . Select Create link . Select External application and Incoming . Enter https:// <openshift_dev_spaces_fqdn> /api/oauth/callback to the Redirect URL field. Select the Admin - Write checkbox in Application permissions . Click Save . Copy and save the Client ID for use when applying the Bitbucket application link Secret. Copy and save the Client secret for use when applying the Bitbucket application link Secret. Additional resources Atlassian Documentation: Configure an incoming link 3.11.1.3.2. Applying an OAuth 2.0 application link Secret for the Bitbucket Server Prepare and apply the OAuth 2.0 application link Secret for the Bitbucket Server. Prerequisites The application link is set up on the Bitbucket Server. The following values, which were generated when setting up the Bitbucket application link, are prepared: Bitbucket Client ID Bitbucket Client secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: id: <Bitbucket_Client_ID> 3 secret: <Bitbucket_Client_Secret> 4 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The URL of the Bitbucket Server. 3 The Bitbucket Client ID . 4 The Bitbucket Client secret . Apply the Secret: Verify in the output that the Secret is created. 3.11.1.4. Configuring OAuth 2.0 for the Bitbucket Cloud You can enable users to work with a remote Git repository that is hosted in the Bitbucket Cloud: Set up an OAuth consumer (OAuth 2.0) in the Bitbucket Cloud. Apply an OAuth consumer Secret for the Bitbucket Cloud. 3.11.1.4.1. Setting up an OAuth consumer in the Bitbucket Cloud Set up an OAuth consumer for OAuth 2.0 in the Bitbucket Cloud. Prerequisites You are logged in to the Bitbucket Cloud. Procedure Click your avatar and go to the All workspaces page. Select a workspace and click it. Go to Settings OAuth consumers Add consumer . 
Enter OpenShift Dev Spaces as the Name . Enter https:// <openshift_dev_spaces_fqdn> /api/oauth/callback as the Callback URL . Under Permissions , check all of the Account and Repositories checkboxes, and click Save . Expand the added consumer and then copy and save the Key value for use when applying the Bitbucket OAuth consumer Secret: Copy and save the Secret value for use when applying the Bitbucket OAuth consumer Secret. Additional resources Bitbucket Docs: Use OAuth on Bitbucket Cloud 3.11.1.4.2. Applying an OAuth consumer Secret for the Bitbucket Cloud Prepare and apply an OAuth consumer Secret for the Bitbucket Cloud. Prerequisites The OAuth consumer is set up in the Bitbucket Cloud. The following values, which were generated when setting up the Bitbucket OAuth consumer, are prepared: Bitbucket OAuth consumer Key Bitbucket OAuth consumer Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket type: Opaque stringData: id: <Bitbucket_Oauth_Consumer_Key> 2 secret: <Bitbucket_Oauth_Consumer_Secret> 3 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The Bitbucket OAuth consumer Key . 3 The Bitbucket OAuth consumer Secret . Apply the Secret: Verify in the output that the Secret is created. 3.11.1.5. Configuring OAuth 1.0 for a Bitbucket Server To enable users to work with a remote Git repository that is hosted on a Bitbucket Server: Set up an application link (OAuth 1.0) on the Bitbucket Server. Apply an application link Secret for the Bitbucket Server. 3.11.1.5.1. Setting up an application link on the Bitbucket Server Set up an application link for OAuth 1.0 on the Bitbucket Server. Prerequisites You are logged in to the Bitbucket Server. openssl is installed in the operating system you are using. Procedure On a command line, run the commands to create the necessary files for the steps and for use when applying the application link Secret: Go to Administration Application Links . Enter https:// <openshift_dev_spaces_fqdn> / into the URL field and click Create new link . Under The supplied Application URL has redirected once , check the Use this URL checkbox and click Continue . Enter OpenShift Dev Spaces as the Application Name . Select Generic Application as the Application Type . Enter OpenShift Dev Spaces as the Service Provider Name . Paste the content of the bitbucket-consumer-key file as the Consumer key . Paste the content of the bitbucket-shared-secret file as the Shared secret . Enter <bitbucket_server_url> /plugins/servlet/oauth/request-token as the Request Token URL . Enter <bitbucket_server_url> /plugins/servlet/oauth/access-token as the Access token URL . Enter <bitbucket_server_url> /plugins/servlet/oauth/authorize as the Authorize URL . Check the Create incoming link checkbox and click Continue . Paste the content of the bitbucket-consumer-key file as the Consumer Key . Enter OpenShift Dev Spaces as the Consumer name . Paste the content of the public-stripped.pub file as the Public Key and click Continue . Additional resources Atlassian Documentation: Link to other applications 3.11.1.5.2. 
Applying an application link Secret for the Bitbucket Server Prepare and apply the application link Secret for the Bitbucket Server. Prerequisites The application link is set up on the Bitbucket Server. The following files, which were created when setting up the application link, are prepared: privatepkcs8-stripped.pem bitbucket-consumer-key bitbucket-shared-secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/component: oauth-scm-configuration app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: private.key: <Content_of_privatepkcs8-stripped.pem> 3 consumer.key: <Content_of_bitbucket-consumer-key> 4 shared_secret: <Content_of_bitbucket-shared-secret> 5 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The URL of the Bitbucket Server. 3 The content of the privatepkcs8-stripped.pem file. 4 The content of the bitbucket-consumer-key file. 5 The content of the bitbucket-shared-secret file. Apply the Secret: Verify in the output that the Secret is created. 3.11.1.6. Configuring OAuth 2.0 for Microsoft Azure DevOps Services To enable users to work with a remote Git repository that is hosted on Microsoft Azure Repos: Set up the Microsoft Azure DevOps Services OAuth App (OAuth 2.0). Apply the Microsoft Azure DevOps Services OAuth App Secret. 3.11.1.6.1. Setting up the Microsoft Azure DevOps Services OAuth App Set up a Microsoft Azure DevOps Services OAuth App using OAuth 2.0. Prerequisites You are logged in to Microsoft Azure DevOps Services . Important Third-party application access via OAuth is enabled for your organization. See Change application connection & security policies for your organization . Procedure Visit https://app.vsaex.visualstudio.com/app/register/ . Enter the following values: Company name : OpenShift Dev Spaces Application name : OpenShift Dev Spaces Application website : https:// <openshift_dev_spaces_fqdn> / Authorization callback URL : https:// <openshift_dev_spaces_fqdn> /api/oauth/callback In Select Authorized scopes , select Code (read and write) . Click Create application . Copy and save the App ID for use when applying the Microsoft Azure DevOps Services OAuth App Secret. Click Show to display the Client Secret . Copy and save the Client Secret for use when applying the Microsoft Azure DevOps Services OAuth App Secret. Additional resources Authorize access to REST APIs with OAuth 2.0 Change application connection & security policies for your organization 3.11.1.6.2. Applying the Microsoft Azure DevOps Services OAuth App Secret Prepare and apply the Microsoft Azure DevOps Services Secret. Prerequisites Setting up the Microsoft Azure DevOps Services OAuth App is completed. The following values, which were generated when setting up the Microsoft Azure DevOps Services OAuth App, are prepared: App ID Client Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . 
Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: azure-devops-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: azure-devops type: Opaque stringData: id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID> 2 secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret> 3 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The Microsoft Azure DevOps Services OAuth App ID . 3 The Microsoft Azure DevOps Services OAuth Client Secret . Apply the Secret: Verify in the output that the Secret is created. Wait for the rollout of the OpenShift Dev Spaces server components to be completed. 3.11.2. Configuring cluster roles for Dev Spaces users You can grant OpenShift Dev Spaces users more cluster permissions by adding cluster roles to those users. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Define the user roles name: 1 Unique resource name. Find out the namespace where the OpenShift Dev Spaces Operator is deployed: Create needed roles: 1 As <verbs> , list all Verbs that apply to all ResourceKinds and AttributeRestrictions contained in this rule. You can use * to represent all verbs. 2 As <apiGroups> , name the APIGroups that contain the resources. 3 As <resources> , list all resources that this rule applies to. You can use * to represent all verbs. Delegate the roles to the OpenShift Dev Spaces Operator: Configure the OpenShift Dev Spaces Operator to delegate the roles to the che service account: Configure the OpenShift Dev Spaces server to delegate the roles to a user: Wait for the rollout of the OpenShift Dev Spaces server components to be completed. Ask the user to log out and log in to have the new roles applied. 3.11.3. Configuring advanced authorization You can determine which users and groups are allowed to access OpenShift Dev Spaces. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: networking: auth: advancedAuthorization: allowUsers: - <allow_users> 1 allowGroups: - <allow_groups> 2 denyUsers: - <deny_users> 3 denyGroups: - <deny_groups> 4 1 List of users allowed to access Red Hat OpenShift Dev Spaces. 2 List of groups of users allowed to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only). 3 List of users denied access to Red Hat OpenShift Dev Spaces. 4 List of groups of users denied to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only). Wait for the rollout of the OpenShift Dev Spaces server components to be completed. Note To allow a user to access OpenShift Dev Spaces, add them to the allowUsers list. Alternatively, choose a group the user is a member of and add the group to the allowGroups list. To deny a user access to OpenShift Dev Spaces, add them to the denyUsers list. Alternatively, choose a group the user is a member of and add the group to the denyGroups list. If the user is on both allow and deny lists, they are denied access to OpenShift Dev Spaces. If allowUsers and allowGroups are empty, all users are allowed to access OpenShift Dev Spaces except the ones on the deny lists. 
If denyUsers and denyGroups are empty, only the users from allow lists are allowed to access OpenShift Dev Spaces. If both allow and deny lists are empty, all users are allowed to access OpenShift Dev Spaces. 3.11.4. Removing user data in compliance with the GDPR You can remove a user's data on OpenShift Container Platform in compliance with the General Data Protection Regulation (GDPR) that enforces the right of individuals to have their personal data erased. The process for other Kubernetes infrastructures might vary. Follow the user management best practices of the provider you are using for the Red Hat OpenShift Dev Spaces installation. Warning Removing user data as follows is irreversible! All removed data is deleted and unrecoverable! Prerequisites An active oc session with administrative permissions for the OpenShift Container Platform cluster. See Getting started with the OpenShift CLI . Procedure List all the users in the OpenShift cluster using the following command: USD oc get users Delete the user entry: Important If the user has any associated resources (such as projects, roles, or service accounts), you need to delete those first before deleting the user. USD oc delete user <username> Additional resources Chapter 6, Using the Dev Spaces server API Section 3.2.1, "Configuring project name" Chapter 8, Uninstalling Dev Spaces 3.12. Configuring fuse-overlayfs By default, the Universal Developer Image (UDI) contains Podman and Buildah which you can use to build and push container images within a workspace. However, Podman and Buildah in the UDI are configured to use the vfs storage driver which does not provide copy-on-write support. For more efficient image management, use the fuse-overlayfs storage driver which supports copy-on-write in rootless environments. To enable fuse-overlayfs for workspaces for OpenShift versions older than 4.15, the administrator must first enable /dev/fuse access on the cluster by following Section 3.12.1, "Enabling access to for OpenShift version older than 4.15" . This is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default. See Release Notes . After enabling /dev/fuse access, fuse-overlayfs can be enabled in two ways: For all user workspaces within the cluster. See Section 3.12.2, "Enabling fuse-overlayfs for all workspaces" . For workspaces belonging to certain users. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver . 3.12.1. Enabling access to for OpenShift version older than 4.15 To use fuse-overlayfs, you must make /dev/fuse accessible to workspace containers first. Note This procedure is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default. See Release Notes . Warning Creating MachineConfig resources on an OpenShift cluster is a potentially dangerous task, as you are making advanced, system-level changes to the cluster. View the MachineConfig documentation for more details and possible risks. Prerequisites The Butane tool ( butane ) is installed in the operating system you are using. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Set the environment variable based on the type of your OpenShift cluster: a single node cluster, or a multi node cluster with separate control plane and worker nodes. 
For a single node cluster, set: For a multi node cluster, set: Set the environment variable for the OpenShift Butane config version. This variable is the major and minor version of the OpenShift cluster. For example, 4.12.0 , 4.13.0 , or 4.14.0 . Create a MachineConfig resource that creates a drop-in CRI-O configuration file named 99-podman-fuse in the NODE_ROLE nodes. This configuration file makes access to the /dev/fuse device possible for certain pods. 1 The absolute file path to the new drop-in configuration file for CRI-O. 2 The content of the new drop-in configuration file. 3 Define a podman-fuse workload. 4 The pod annotation that activates the podman-fuse workload settings. 5 List of annotations the podman-fuse workload is allowed to process. 6 List of devices on the host that a user can specify with the io.kubernetes.cri-o.Devices annotation. After applying the MachineConfig resource, scheduling will be temporarily disabled for each node with the worker role as changes are applied. View the nodes' statuses. Example output: Once all nodes with the worker role have a status Ready , /dev/fuse will be available to any pod with the following annotations. io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse Verification steps Get the name of a node with a worker role: Open an oc debug session to a worker node. Verify that a new CRI-O config file named 99-podman-fuse exists. 3.12.1.1. Using fuse-overlayfs for Podman and Buildah within a workspace Users can follow https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.16/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver to update existing workspaces to use the fuse-overlayfs storage driver for Podman and Buildah. 3.12.2. Enabling fuse-overlayfs for all workspaces Prerequisites The Section 3.12.1, "Enabling access to for OpenShift version older than 4.15" section has been completed. This is not required for OpenShift versions 4.15 and later. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Create a ConfigMap that mounts the storage.conf file for all user workspaces. kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.config/containers/ data: storage.conf: | [storage] driver = "overlay" [storage.options.overlay] mount_program="/usr/bin/fuse-overlayfs" Warning Creating this ConfigMap will cause all running workspaces to restart. Set the necessary annotation in the spec.devEnvironments.workspacesPodAnnotations field of the CheCluster custom resource. kind: CheCluster apiVersion: org.eclipse.che/v2 spec: devEnvironments: workspacesPodAnnotations: io.kubernetes.cri-o.Devices: /dev/fuse Note For OpenShift versions before 4.15, the io.openshift.podman-fuse: "" annotation is also required. Verification steps Start a workspace and verify that the storage driver is overlay . Example output: Note The following error might occur for existing workspaces: In this case, delete the libpod local files as mentioned in the error message.
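To check the result of the verification step above from inside a running workspace, the following sketch queries Podman for the active storage driver and shows the mounted storage.conf. Treat it as an illustration rather than the canonical example output referred to in the procedure; the exact output depends on the Podman version shipped in the image. # Run in a terminal inside the workspace container. The procedure only # requires that the reported driver is "overlay"; the storage.conf path # is the mount path configured in the fuse-overlay ConfigMap above. podman info --format '{{ .Store.GraphDriverName }}' grep --after-context=2 '\[storage\]' /home/user/.config/containers/storage.conf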
[ "spec: <component> : <property_to_configure> : <value>", "dsc server:deploy --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml --platform <chosen_platform>", "oc get configmap che -o jsonpath='{.data. <configured_property> }' -n openshift-devspaces", "oc edit checluster/devspaces -n openshift-devspaces", "oc get configmap che -o jsonpath='{.data. <configured_property> }' -n openshift-devspaces", "apiVersion: org.eclipse.che/v2 kind: CheCluster metadata: name: devspaces namespace: openshift-devspaces spec: components: {} devEnvironments: {} networking: {}", "spec: components: devEnvironments: defaultNamespace: template: <workspace_namespace_template_>", "devEnvironments: defaultNamespace: autoProvision: false", "kind: Namespace apiVersion: v1 metadata: name: <project_name> 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-namespace annotations: che.eclipse.org/username: <username>", "apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here>", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <data content here>", "apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here>", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap 
annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <data content here>", "apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret data: mykey: myvalue", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: myvalue", "apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret stringData: mykey: <data_content_here> otherkey: <data_content_here>", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: <data content here> otherkey: <data content here>", "apiVersion: org.eclipse.che/v2 kind: CheCluster spec: components: cheServer: extraProperties: CHE_LOGS_APPENDERS_IMPL: json", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: <deployment_name> 1", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: devspaces-scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: devspaces minReplicas: 2 maxReplicas: 5 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 75", "spec: devEnvironments: startTimeoutSeconds: 600 1 ignoredUnrecoverableEvents: 2 - FailedScheduling", "spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: \"false\"", "oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\\.kubernetes\\.io/safe-to-evict}' false", "spec: devEnvironments: maxNumberOfWorkspacesPerUser: <kept_workspaces_limit> 1", "oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"", "oc patch checluster/devspaces -n openshift-devspaces \\ 1 --type='merge' -p '{\"spec\":{\"devEnvironments\":{\"maxNumberOfWorkspacesPerUser\": <kept_workspaces_limit> }}}' 2", "spec: devEnvironments: maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit> 1", "oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"", "oc patch checluster/devspaces -n openshift-devspaces \\ 1 --type='merge' -p '{\"spec\":{\"devEnvironments\":{\"maxNumberOfRunningWorkspacesPerUser\": <running_workspaces_limit> }}}' 2", "oc create configmap che-git-self-signed-cert --from-file=ca.crt= <path_to_certificate> \\ 1 --from-literal=githost= <git_server_url> -n openshift-devspaces 2", "oc label 
configmap che-git-self-signed-cert app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces", "spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert", "[http \"https://10.33.177.118:3000\"] sslCAInfo = /etc/config/che-git-tls-creds/certificate", "spec: devEnvironments: nodeSelector: key: value", "spec: devEnvironments: tolerations: - effect: NoSchedule key: key value: value operator: Equal", "spec: components: [...] pluginRegistry: openVSXURL: <your_open_vsx_registy> [...]", "kind: ConfigMap apiVersion: v1 metadata: name: user-configmap namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data:", "kind: ConfigMap apiVersion: v1 metadata: name: user-settings-xml namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 data: settings.xml: | <settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd\"> <localRepository>/home/user/.m2/repository</localRepository> <interactiveMode>true</interactiveMode> <offline>false</offline> </settings>", "kind: Secret apiVersion: v1 metadata: name: user-secret namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data:", "kind: Secret apiVersion: v1 metadata: name: user-certificates namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/pki/ca-trust/source/anchors stringData: trusted-certificates.crt: |", "kind: Secret apiVersion: v1 metadata: name: user-env namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: env stringData: ENV_VAR_1: value_1 ENV_VAR_2: value_2", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config spec:", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: /home/user/data controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi volumeMode: Filesystem", "(memory limit) * (number of images) * (number of nodes in the cluster)", "git clone https://github.com/che-incubator/kubernetes-image-puller cd kubernetes-image-puller/deploy/openshift", "oc new-project <k8s-image-puller>", "oc process -f serviceaccount.yaml | oc apply -f - oc process -f configmap.yaml | oc apply -f - oc process -f app.yaml | oc apply -f -", "oc get deployment,daemonset,pod --namespace <k8s-image-puller>", "oc get configmap <kubernetes-image-puller> --output yaml", "patch checluster/devspaces --namespace openshift-devspaces --type='merge' --patch '{ \"spec\": { \"components\": { \"imagePuller\": { \"enable\": true } } } }'", "patch 
checluster/devspaces --namespace openshift-devspaces --type='merge' --patch '{ \"spec\": { \"components\": { \"imagePuller\": { \"enable\": true, \"spec\": { \"images\": \" NAME-1 = IMAGE-1 ; NAME-2 = IMAGE-2 \" 1 } } } } }'", "create namespace k8s-image-puller", "apply -f - <<EOF apiVersion: che.eclipse.org/v1alpha1 kind: KubernetesImagePuller metadata: name: k8s-image-puller-images namespace: k8s-image-puller spec: images: \"__NAME-1__=__IMAGE-1__;__NAME-2__=__IMAGE-2__\" 1 EOF", "spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/next 1 plugins: 2 - 'https://your-web-server/plugin.yaml'", "package main import ( \"io/ioutil\" \"net/http\" \"go.uber.org/zap\" ) var logger *zap.SugaredLogger func event(w http.ResponseWriter, req *http.Request) { switch req.Method { case \"GET\": logger.Info(\"GET /event\") case \"POST\": logger.Info(\"POST /event\") } body, err := req.GetBody() if err != nil { logger.With(\"err\", err).Info(\"error getting body\") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With(\"error\", err).Info(\"error reading response body\") return } logger.With(\"body\", string(responseBody)).Info(\"got event\") } func activity(w http.ResponseWriter, req *http.Request) { switch req.Method { case \"GET\": logger.Info(\"GET /activity, doing nothing\") case \"POST\": logger.Info(\"POST /activity\") body, err := req.GetBody() if err != nil { logger.With(\"error\", err).Info(\"error getting body\") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With(\"error\", err).Info(\"error reading response body\") return } logger.With(\"body\", string(responseBody)).Info(\"got activity\") } } func main() { log, _ := zap.NewProduction() logger = log.Sugar() http.HandleFunc(\"/event\", event) http.HandleFunc(\"/activity\", activity) logger.Info(\"Added Handlers\") logger.Info(\"Starting to serve\") http.ListenAndServe(\":8080\", nil) }", "git clone https://github.com/che-incubator/telemetry-server-example cd telemetry-server-example podman build -t registry/organization/telemetry-server-example:latest . 
podman push registry/organization/telemetry-server-example:latest", "kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces", "mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin -DprojectVersion=1.0.0-SNAPSHOT", "<!-- Required --> <dependency> <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId> <artifactId>backend-base</artifactId> <version>LATEST VERSION FROM PREVIOUS STEP</version> </dependency> <!-- Used to make http requests to the telemetry server --> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-jackson</artifactId> </dependency>", "<settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd\"> <servers> <server> <id>che-incubator</id> <username>YOUR GITHUB USERNAME</username> <password>YOUR GITHUB TOKEN</password> </server> </servers> <profiles> <profile> <id>github</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>central</id> <url>https://repo1.maven.org/maven2</url> <releases><enabled>true</enabled></releases> <snapshots><enabled>false</enabled></snapshots> </repository> <repository> <id>che-incubator</id> <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url> </repository> </repositories> </profile> </profiles> </settings>", "package org.my.group; import java.util.Optional; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration; import org.eclipse.microprofile.config.inject.ConfigProperty; @Dependent @Alternative public class MainConfiguration extends BaseConfiguration { @ConfigProperty(name = \"welcome.message\") 1 Optional<String> welcomeMessage; 2 }", "package org.my.group; import java.util.HashMap; import java.util.Map; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import javax.inject.Inject; import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager; import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent; import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder; import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder; import org.eclipse.microprofile.rest.client.inject.RestClient; import org.slf4j.Logger; import static org.slf4j.LoggerFactory.getLogger; @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { private static final Logger LOG = getLogger(AbstractAnalyticsManager.class); public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) { super(mainConfiguration, devworkspaceFinder, usernameFinder); mainConfiguration.welcomeMessage.ifPresentOrElse( 1 (str) -> LOG.info(\"The welcome message is: {}\", str), () -> LOG.info(\"No welcome message provided\") ); } @Override public boolean isEnabled() { return true; } @Override public void destroy() {} @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { LOG.info(\"The received event is: {}\", event); 2 } @Override 
public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { } @Override public void onActivity() {} }", "quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager", "spec: template: attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167'", "mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}", "INFO [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided INFO [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]", "INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled INFO [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che", "@Override public boolean isEnabled() { return true; }", "package org.my.group; import java.util.Map; import javax.ws.rs.Consumes; import javax.ws.rs.POST; import javax.ws.rs.Path; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient public interface TelemetryService { @POST @Path(\"/event\") 1 @Consumes(MediaType.APPLICATION_JSON) Response sendEvent(Map<String, Object> payload); }", "org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing", "@Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { @Inject @RestClient TelemetryService telemetryService; @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { Map<String, Object> payload = new HashMap<String, Object>(properties); payload.put(\"event\", event); telemetryService.sendEvent(payload); }", "@Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}", "public class AnalyticsManager extends AbstractAnalyticsManager { private long inactiveTimeLimit = 60000 * 3; @Override public void onActivity() { if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) { onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } }", "@Override public void destroy() { onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); }", "FROM registry.access.redhat.com/ubi8/openjdk-11:1.11 ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/ COPY --chown=185 target/quarkus-app/*.jar /deployments/ COPY --chown=185 target/quarkus-app/app/ /deployments/app/ COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/ EXPOSE 8080 USER 185 ENTRYPOINT [\"java\", \"-Dquarkus.http.host=0.0.0.0\", \"-Djava.util.logging.manager=org.jboss.logmanager.LogManager\", \"-Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}\", \"-jar\", \"/deployments/quarkus-run.jar\"]", "mvn package && build -f src/main/docker/Dockerfile.jvm -t image:tag .", "FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" 
/work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 CMD [\"./application\", \"-Dquarkus.http.host=0.0.0.0\", \"-Dquarkus.http.port=USDDEVWORKSPACE_TELEMETRY_BACKEND_PORT}\"]", "mvn package -Pnative -Dquarkus.native.container-build=true && build -f src/main/docker/Dockerfile.native -t image:tag .", "schemaVersion: 2.1.0 metadata: name: devworkspace-telemetry-backend-plugin version: 0.0.1 description: A Demo telemetry backend displayName: Devworkspace Telemetry Backend components: - name: devworkspace-telemetry-backend-plugin attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167' container: image: YOUR IMAGE 1 env: - name: WELCOME_MESSAGE 2 value: 'hello world!'", "oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml", "kind: Deployment apiVersion: apps/v1 metadata: name: apache spec: replicas: 1 selector: matchLabels: app: apache template: metadata: labels: app: apache spec: volumes: - name: plugin-yaml configMap: name: telemetry-plugin-yaml defaultMode: 420 containers: - name: apache image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest' ports: - containerPort: 8080 protocol: TCP resources: {} volumeMounts: - name: plugin-yaml mountPath: /var/www/html strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 25% revisionHistoryLimit: 10 progressDeadlineSeconds: 600 --- kind: Service apiVersion: v1 metadata: name: apache spec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apache type: ClusterIP --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: apache spec: host: apache-che.apps-crc.testing to: kind: Service name: apache weight: 100 port: targetPort: 8080 wildcardPolicy: None", "oc apply -f manifest.yaml", "curl apache-che.apps-crc.testing/plugin.yaml", "components: - name: telemetry-plugin plugin: uri: http://apache-che.apps-crc.testing/plugin.yaml", "spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/next 1 plugins: 2 - 'http://apache-che.apps-crc.testing/plugin.yaml'", "spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \" <key1=value1,key2=value2> \" 1", "spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \"org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG\"", "spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \"che.infra.request-logging=TRACE\"", "dsc server:logs -d /home/user/che-logs/", "Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'", "dsc server:logs -n my-namespace", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: devworkspace-controller namespace: openshift-devspaces 1 spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token interval: 10s 2 port: metrics scheme: https tlsConfig: insecureSkipVerify: true namespaceSelector: matchNames: - openshift-operators selector: matchLabels: app.kubernetes.io/name: devworkspace-controller", "oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true", "oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'", "oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring", "oc create configmap grafana-dashboard-dwo --from-literal=dwo-dashboard.json=\"USD(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)\" -n openshift-config-managed", "oc label 
configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed", "spec: components: metrics: enable: <boolean> 1", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: che-host namespace: openshift-devspaces 1 spec: endpoints: - interval: 10s 2 port: metrics scheme: http namespaceSelector: matchNames: - openshift-devspaces 3 selector: matchLabels: app.kubernetes.io/name: devspaces", "kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: prometheus-k8s namespace: openshift-devspaces 1 rules: - verbs: - get - list - watch apiGroups: - '' resources: - services - endpoints - pods", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: view-devspaces-openshift-monitoring-prometheus-k8s namespace: openshift-devspaces 1 subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s", "oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true", "oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'", "oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring", "oc create configmap grafana-dashboard-devspaces-server --from-literal=devspaces-server-dashboard.json=\"USD(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)\" -n openshift-config-managed", "oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-devspaces spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-devspaces 1 podSelector: {} 2 policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-apiserver namespace: openshift-devspaces 1 spec: podSelector: matchLabels: app.kubernetes.io/name: devworkspace-webhook-server 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-apiserver policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-workspaces-namespaces namespace: openshift-devspaces 1 spec: podSelector: {} 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: app.kubernetes.io/component: workspaces-namespace policyTypes: - Ingress", "oc create project openshift-devspaces", "oc create secret TLS <tls_secret_name> \\ 1 --key <key_file> \\ 2 --cert <cert_file> \\ 3 -n openshift-devspaces", "oc label secret <tls_secret_name> \\ 1 app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces", "spec: networking: hostname: <hostname> 1 tlsSecretName: <secret> 2", "cat ca-cert-for-devspaces-*.pem | tr -d '\\r' > custom-ca-certificates.pem", "oc create configmap custom-ca-certificates --from-file=custom-ca-certificates.pem --namespace=openshift-devspaces", "oc label configmap custom-ca-certificates app.kubernetes.io/component=ca-bundle app.kubernetes.io/part-of=che.eclipse.org --namespace=openshift-devspaces", "oc get configmap --namespace=openshift-devspaces --output='jsonpath={.items[0:].data.custom-ca-certificates\\.pem}' --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org", "oc get pod --selector=app.kubernetes.io/component=devspaces 
--output='jsonpath={.items[0].spec.volumes[0:].configMap.name}' --namespace=openshift-devspaces | grep ca-certs-merged", "oc exec -t deploy/devspaces --namespace=openshift-devspaces -- cat /public-certs/custom-ca-certificates.pem", "oc logs deploy/devspaces --namespace=openshift-devspaces | grep custom-ca-certificates.pem", "for certificate in ca-cert*.pem ; do openssl x509 -in USDcertificate -digest -sha256 -fingerprint -noout | cut -d= -f2; done", "oc exec -t deploy/devspaces --namespace=openshift-devspaces -- keytool -list -keystore /home/user/cacerts | grep --after-context=1 custom-ca-certificates.pem", "oc get configmap che-trusted-ca-certs --namespace= <workspace_namespace> --output='jsonpath={.data.custom-ca-certificates\\.custom-ca-certificates\\.pem}'", "oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' | grep che-trusted-ca-certs", "oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].spec.containers[0:]}' | jq 'select (.volumeMounts[].name == \"che-trusted-ca-certs\") | .name'", "oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].metadata.name}' \\", "oc exec <workspace_pod_name> --namespace= <workspace_namespace> -- cat /public-certs/custom-ca-certificates.custom-ca-certificates.pem", "spec: networking: labels: <labels> 1 domain: <domain> 2 annotations: <annotations> 3", "spec: devEnvironments: storage: perUserStrategyPvcConfig: claimSize: <claim_size> 1 storageClass: <storage_class_name> 2 perWorkspaceStrategyPvcConfig: claimSize: <claim_size> 3 storageClass: <storage_class_name> 4 pvcStrategy: <pvc_strategy> 5", "spec: devEnvironments: storage: pvc: pvcStrategy: 'per-user' 1", "per-user: 10Gi", "per-workspace: 5Gi", "spec: devEnvironments: storage: pvc: pvcStrategy: ' <strategy_name> ' 1 perUserStrategyPvcConfig: 2 claimSize: <resource_quantity> 3 perWorkspaceStrategyPvcConfig: 4 claimSize: <resource_quantity> 5", "cat > my-samples.json <<EOF [ { \"displayName\": \" <display_name> \", 1 \"description\": \" <description> \", 2 \"tags\": <tags> , 3 \"url\": \" <url> \", 4 \"icon\": { \"base64data\": \" <base64data> \", 5 \"mediatype\": \" <mediatype> \" 6 } } ] EOF", "create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces", "label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces", "Version of the devile schema schemaVersion: 2.2.2 Meta information of the editor metadata: # (MANDATORY) The editor name # Must consist of lower case alphanumeric characters, '-' or '.' name: editor-name displayName: Display Name description: Run Editor Foo on top of Eclipse Che # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version. 
tags: - Tech-Preview # Additional attributes attributes: title: This is my editor # (MANDATORY) The publisher name publisher: publisher # (MANDATORY) The editor version version: version repository: https://github.com/editor/repository/ firstPublicationDate: '2024-01-01' iconMediatype: image/svg+xml iconData: | <icon-content> List of editor components components: # Name of the component - name: che-code-injector # Configuration of devworkspace-related container container: # Image of the container image: 'quay.io/che-incubator/che-code:insiders' # The command to run in the dockerimage component instead of the default one provided in the image command: - /entrypoint-init-container.sh # (OPTIONAL) List of volumes mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # The path of the mount path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 256Mi # (OPTIONAL) The memory request of the container memoryRequest: 32Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # Name of the component - name: che-code-runtime-description # (OPTIONAL) Map of implementation-dependant free-form YAML attributes attributes: # The component within the architecture app.kubernetes.io/component: che-code-runtime # The name of a higher level application this one is part of app.kubernetes.io/part-of: che-code.eclipse.org # Defines a container component as a \"container contribution\". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component controller.devfile.io/container-contribution: true container: # Can be a dummy image because the component is expected to be injected into workspace dev component image: quay.io/devfile/universal-developer-image:latest # (OPTIONAL) List of volume mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is the is /<name> path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 1024Mi # (OPTIONAL) The memory request of the container memoryRequest: 256Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # (OPTIONAL) Environment variables used in this container env: - name: ENV_NAME value: value # Component endpoints endpoints: # Name of the editor - name: che-code # (OPTIONAL) Map of implementation-dependant string-based free-form attributes attributes: # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. it should be the URL used to access the editor in this context) type: main # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false. cookiesAuthEnabled: true # Defines an endpoint as \"discoverable\", meaning that a service should be created using the endpoint name (i.e. 
instead of generating a service name for all endpoints, this endpoint should be statically accessible) discoverable: false # Used to secure the endpoint with authorization on OpenShift, so that not anyone on the cluster can access the endpoint, the attribute enables authentication. urlRewriteSupported: true # Port number to be used within the container component targetPort: 3100 # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none) exposure: public # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process secure: true # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint protocol: https # Mandatory name that allows referencing the component from other elements - name: checode # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false volume: {ephemeral: true} (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name events: # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container preStart: - init-container-command # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser postStart: - init-che-code-command (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands commands: # Mandatory identifier that allows referencing this command - id: init-container-command apply: # Describes the component for the apply command component: che-code-injector # Mandatory identifier that allows referencing this command - id: init-che-code-command # CLI Command executed in an existing component container exec: # Describes component for the exec command component: che-code-runtime-description # The actual command-line string commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt 2>&1 &'", "create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces", "label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces", "apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: devspaces-dashboard-customization namespace: openshift-devspaces annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /public/dashboard/assets/branding labels: app.kubernetes.io/component: devspaces-dashboard-secret app.kubernetes.io/part-of: che.eclipse.org data: loader.svg: <Base64_encoded_content_of_the_image> 1 type: Opaque EOF", "spec: components: cheServer: extraProperties: CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: \"true\"", "kind: Secret apiVersion: v1 metadata: name: github-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: github che.eclipse.org/scm-server-endpoint: <github_server_url> 2 che.eclipse.org/scm-github-disable-subdomain-isolation: 'false' 3 type: Opaque stringData: id: <GitHub_OAuth_Client_ID> 4 secret: <GitHub_OAuth_Client_Secret> 5", "oc apply -f - <<EOF 
<Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: gitlab-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: gitlab che.eclipse.org/scm-server-endpoint: <gitlab_server_url> 2 type: Opaque stringData: id: <GitLab_Application_ID> 3 secret: <GitLab_Client_Secret> 4", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: id: <Bitbucket_Client_ID> 3 secret: <Bitbucket_Client_Secret> 4", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket type: Opaque stringData: id: <Bitbucket_Oauth_Consumer_Key> 2 secret: <Bitbucket_Oauth_Consumer_Secret> 3", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "openssl genrsa -out private.pem 2048 && openssl pkcs8 -topk8 -inform pem -outform pem -nocrypt -in private.pem -out privatepkcs8.pem && cat privatepkcs8.pem | sed 's/-----BEGIN PRIVATE KEY-----//g' | sed 's/-----END PRIVATE KEY-----//g' | tr -d '\\n' > privatepkcs8-stripped.pem && openssl rsa -in private.pem -pubout > public.pub && cat public.pub | sed 's/-----BEGIN PUBLIC KEY-----//g' | sed 's/-----END PUBLIC KEY-----//g' | tr -d '\\n' > public-stripped.pub && openssl rand -base64 24 > bitbucket-consumer-key && openssl rand -base64 24 > bitbucket-shared-secret", "kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/component: oauth-scm-configuration app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: private.key: <Content_of_privatepkcs8-stripped.pem> 3 consumer.key: <Content_of_bitbucket-consumer-key> 4 shared_secret: <Content_of_bitbucket-shared-secret> 5", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: azure-devops-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: azure-devops type: Opaque stringData: id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID> 2 secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret> 3", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "USER_ROLES= <name> 1", "OPERATOR_NAMESPACE=USD(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={\".items[0].metadata.namespace\"} --all-namespaces)", "kubectl apply -f - <<EOF kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: USD{USER_ROLES} labels: app.kubernetes.io/part-of: che.eclipse.org rules: - verbs: - <verbs> 1 apiGroups: - <apiGroups> 2 resources: - <resources> 3 EOF", "kubectl apply -f - <<EOF kind: ClusterRoleBinding apiVersion: 
rbac.authorization.k8s.io/v1 metadata: name: USD{USER_ROLES} labels: app.kubernetes.io/part-of: che.eclipse.org subjects: - kind: ServiceAccount name: devspaces-operator namespace: USD{OPERATOR_NAMESPACE} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: USD{USER_ROLES} EOF", "kubectl patch checluster devspaces --patch '{\"spec\": {\"components\": {\"cheServer\": {\"clusterRoles\": [\"'USD{USER_ROLES}'\"]}}}}' --type=merge -n openshift-devspaces", "kubectl patch checluster devspaces --patch '{\"spec\": {\"devEnvironments\": {\"user\": {\"clusterRoles\": [\"'USD{USER_ROLES}'\"]}}}}' --type=merge -n openshift-devspaces", "spec: networking: auth: advancedAuthorization: allowUsers: - <allow_users> 1 allowGroups: - <allow_groups> 2 denyUsers: - <deny_users> 3 denyGroups: - <deny_groups> 4", "oc get users", "oc delete user <username>", "NODE_ROLE=master", "NODE_ROLE=worker", "VERSION=4.12.0", "cat << EOF | butane | oc apply -f - variant: openshift version: USD{VERSION} metadata: labels: machineconfiguration.openshift.io/role: USD{NODE_ROLE} name: 99-podman-dev-fuse-USD{NODE_ROLE} storage: files: - path: /etc/crio/crio.conf.d/99-podman-fuse 1 mode: 0644 overwrite: true contents: 2 inline: | [crio.runtime.workloads.podman-fuse] 3 activation_annotation = \"io.openshift.podman-fuse\" 4 allowed_annotations = [ \"io.kubernetes.cri-o.Devices\" 5 ] [crio.runtime] allowed_devices = [\"/dev/fuse\"] 6 EOF", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.27.9 ip-10-0-136-243.ec2.internal Ready master 34m v1.27.9 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.27.9 ip-10-0-142-249.ec2.internal Ready master 34m v1.27.9 ip-10-0-153-11.ec2.internal Ready worker 28m v1.27.9 ip-10-0-153-150.ec2.internal Ready master 34m v1.27.9", "io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse", "oc get nodes", "oc debug node/ <nodename>", "sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse", "kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.config/containers/ data: storage.conf: | [storage] driver = \"overlay\" [storage.options.overlay] mount_program=\"/usr/bin/fuse-overlayfs\"", "kind: CheCluster apiVersion: org.eclipse.che/v2 spec: devEnvironments: workspacesPodAnnotations: io.kubernetes.cri-o.Devices: /dev/fuse", "podman info | grep overlay", "graphDriverName: overlay overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64 fuse-overlayfs: version 1.12 Backing Filesystem: overlayfs", "ERRO[0000] User-selected graph driver \"overlay\" overwritten by graph driver \"vfs\" from database - delete libpod local files (\"/home/user/.local/share/containers/storage\") to resolve. May prevent use of images created by other tools" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/administration_guide/configuring-devspaces
Chapter 17. Hardening the Shared File System (Manila)
Chapter 17. Hardening the Shared File System (Manila)
The Shared File Systems service (manila) provides a set of services for managing shared file systems in a multi-project cloud environment. With manila, you can create a shared file system and manage its properties, such as visibility, accessibility, and quotas. For more information on manila, see the Storage Guide: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html-single/storage_guide/
17.1. Security considerations for manila
Manila is registered with keystone, allowing you to locate the API using the manila endpoints command. For example:
By default, the manila API service only listens on port 8786 with tcp6, which supports both IPv4 and IPv6.
Manila uses multiple configuration files; these are stored in /var/lib/config-data/puppet-generated/manila/ :
It is recommended that you configure manila to run under a non-root service account, and change file permissions so that only the system administrator can modify them. Manila expects that only administrators can write to configuration files, and services can only read them through their group membership in the manila group. Other users must not be able to read these files, as they contain service account passwords.
Note
Only the root user should be able to write to the configuration for manila-rootwrap in rootwrap.conf, and to the manila-rootwrap command filters for share nodes in rootwrap.d/share.filters.
17.2. Network and security models for manila
A share driver in manila is a Python class that can be set for the back end to manage share operations, some of which are vendor-specific. The back end is an instance of the manila-share service. Manila has share drivers for many different storage systems, supporting both commercial vendors and open source solutions.
Each share driver supports one or more back end modes: share servers and no share servers. An administrator selects a mode by specifying it in manila.conf, using driver_handles_share_servers.
A share server is a logical Network Attached Storage (NAS) server that exports shared file systems. Back-end storage systems today are sophisticated and can isolate data paths and network paths between different OpenStack projects. A share server provisioned by a manila share driver would be created on an isolated network that belongs to the project user creating it. The share servers mode can be configured with either a flat network or a segmented network, depending on the network provider.
It is possible for separate drivers for different modes to use the same hardware. Depending on the chosen mode, you might need to provide more configuration details through the configuration file.
17.3. Share backend modes
Each share driver supports at least one of the available driver modes:
Share servers - driver_handles_share_servers = True - The share driver creates share servers and manages the share server life cycle.
No share servers - driver_handles_share_servers = False - An administrator (rather than a share driver) manages the bare metal storage with a network interface, instead of relying on the presence of the share servers.
No share servers mode - In this mode, drivers do not set up share servers, and consequently do not need to set up any new network interfaces. It is assumed that the storage controller being managed by the driver has all of the network interfaces it is going to need. Drivers create shares directly without previously creating a share server.
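As an illustration, the mode is selected per back end in manila.conf. The following is only a minimal sketch: the back-end section name and the LVM driver are examples for orientation, not a recommendation, and a real deployment requires additional driver-specific options.
[DEFAULT]
enabled_share_backends = lvm

[lvm]
share_backend_name = LVM
share_driver = manila.share.drivers.lvm.LVMShareDriver
driver_handles_share_servers = False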
To create shares using drivers operating in this mode, manila does not require users to create any private share networks either.
Note
In no share servers mode, manila assumes that the network interfaces through which any shares are exported are already reachable by all projects.
In the no share servers mode a share driver does not handle the share server life cycle. An administrator is expected to handle the storage, networking, and other host-side configuration that might be necessary to provide project isolation. In this mode an administrator can set up storage as a host that exports shares. All projects within the OpenStack cloud share a common network pipe. Lack of isolation can impact security and quality of service. When using share drivers that do not handle share servers, cloud users cannot be sure that their shares cannot be accessed by untrusted users through a tree walk over the top directory of their file systems. In public clouds it is possible for one client to consume all of the network bandwidth, so an administrator should take care to prevent this. Network balancing can be done by any means, and not necessarily just with OpenStack tools.
Share servers mode - In this mode, a driver is able to create share servers and plug them into existing OpenStack networks. Manila determines whether a new share server is required, and provides all the networking information necessary for the share drivers to create the requisite share server.
When creating shares in the driver mode that handles share servers, users must provide a share network on which they expect their shares to be exported. Manila uses this network to create network ports for the share server on this network.
Users can configure security services in both the share servers and no share servers back end modes. With the no share servers back end mode, however, an administrator must set up the required authentication services manually on the host, while in share servers mode manila can configure the security services identified by the users on the share servers it spawns.
17.4. Networking requirements for manila
Manila can integrate with different network types: flat, GRE, VLAN, and VXLAN.
Note
Manila only stores the network information in the database, with the real networks being supplied by the network provider. Manila supports using the OpenStack Networking service (neutron) and also "standalone" pre-configured networking.
In the share servers back end mode, a share driver creates and manages a share server for each share network. This mode can be divided into two variations:
Flat network in share servers backend mode
Segmented network in share servers backend mode
Users can use a network and subnet from the OpenStack Networking (neutron) service to create share networks. If the administrator decides to use the StandAloneNetworkPlugin, users need not provide any networking information since the administrator pre-configures this in the configuration file.
Note
Share servers spawned by some share drivers are Compute servers created with the Compute service. A few of these drivers do not support network plugins.
After a share network is created, manila retrieves network information determined by a network provider: the network type, the segmentation identifier (if the network uses segmentation), and the IP block in CIDR notation from which to allocate the network.
Users can create security services that specify security requirements such as AD or LDAP domains or a Kerberos realm.
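For example, a security service for an Active Directory domain might be created as follows. This is a sketch only: the DNS address, domain, and credentials are placeholders, and the exact options available depend on your python-manilaclient version.
manila security-service-create active_directory \
  --dns-ip 192.0.2.10 \
  --domain example.com \
  --user Administrator \
  --password <password> \
  --name my-ad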
Manila assumes that any hosts referred to in a security service are reachable from a subnet where a share server is created, which limits the number of cases where this mode can be used.
Note
Some share drivers might not support all types of segmentation; for more details, see the specification for the driver you are using.
17.5. Security services with manila
Manila can restrict access to file shares by integrating with network authentication protocols. Each project can have its own authentication domain that functions separately from the cloud's keystone authentication domain. This project domain can be used to provide authorization (AuthZ) services to applications that run within the OpenStack cloud, including manila. Available authentication protocols include LDAP, Kerberos, and the Microsoft Active Directory authentication service.
17.6. Introduction to security services
After creating a share and getting its export location, users have no permission to mount it or operate on its files. Users need to explicitly grant access to the new share.
Client authentication and authorization (authN/authZ) can be performed in conjunction with security services. Manila can use LDAP, Kerberos, or Microsoft Active Directory if they are supported by the share drivers and back ends.
Note
In some cases, you must explicitly specify one of the security services; for example, the NetApp, EMC, and Windows drivers require Active Directory for the creation of shares with the CIFS protocol.
17.7. Security services management
A security service is a manila entity that abstracts a set of options that define a security zone for a particular shared file system protocol, such as an Active Directory domain or a Kerberos domain. The security service contains all of the information necessary for manila to create a server that joins a given domain.
Using the API, users can create, update, view, and delete a security service. Security services are designed on the following assumptions:
Projects provide details for the security service.
Administrators care about security services: they configure the server side of such security services.
Inside the manila API, a security_service is associated with the share_networks.
Share drivers use data in the security service to configure newly created share servers.
When creating a security service, you can select one of these authentication services:
LDAP - The Lightweight Directory Access Protocol. An application protocol for accessing and maintaining distributed directory information services over an IP network.
Kerberos - The network authentication protocol which works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner.
Active Directory - A directory service that Microsoft developed for Windows domain networks. Uses LDAP, Microsoft's version of Kerberos, and DNS.
Manila allows you to configure a security service with these options:
A DNS IP address that is used inside the project network.
An IP address or hostname of a security service.
A domain of a security service.
A user or group name that is used by a project.
A password for a user, if you specify a username.
An existing security service entity can be associated with share network entities that inform manila about the security and network configuration for a group of shares. You can also see the list of all security services for a specified share network and disassociate them from a share network.
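For example, assuming the my-ad security service from the earlier sketch, it could be associated with a share network and then listed as follows. Names and IDs are placeholders, and the exact subcommands depend on your client version.
manila share-network-create --name my-share-network \
  --neutron-net-id <neutron_net_id> --neutron-subnet-id <neutron_subnet_id>
manila share-network-security-service-add my-share-network my-ad
manila share-network-security-service-list my-share-network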
An administrator and users as share owners can manage access to the shares by creating access rules with authentication through an IP address, user, group, or TLS certificates. Authentication methods depend on which share driver and security service you configure and use. You can then configure a back end to use a specific authentication service, which can operate with clients without manila and keystone.
Note
Different authentication services are supported by different share drivers. For details of the features supported by different drivers, see https://docs.openstack.org/manila/latest/admin/share_back_ends_feature_support_mapping.html
Support for a specific authentication service by a driver does not mean that it can be configured with any shared file system protocol. Supported shared file system protocols are NFS, CEPHFS, CIFS, GlusterFS, and HDFS. See the driver vendor's documentation for information on a specific driver and its configuration for security services.
Some drivers support security services and other drivers do not support any of the security services mentioned above. For example, the Generic Driver with the NFS or the CIFS shared file system protocol supports only authentication through an IP address.
Note
In most cases, drivers that support the CIFS shared file system protocol can be configured to use Active Directory and manage access through user authentication. Drivers that support the GlusterFS protocol can be used with authentication using TLS certificates. With drivers that support the NFS protocol, authentication using an IP address is the only supported option. Because the HDFS shared file system protocol uses NFS access, it can also be configured to authenticate using an IP address.
The recommended configuration for production manila deployments is to create a share with the CIFS share protocol and add the Microsoft Active Directory directory service to it. With this configuration you get a centralized database and a service that integrates the Kerberos and LDAP approaches.
17.8. Share access control
Users can specify which clients have access to the shares they create. Due to the keystone service, shares created by individual users are only visible to themselves and other users within the same project. Manila allows users to create shares that are "publicly" visible. These shares are visible in the dashboards of users that belong to other OpenStack projects if the owners grant them access; those users might even be able to mount the shares if they are made accessible over the network.
When creating a share, use the --public key to make your share public, so that other projects can see it in a list of shares and view its detailed information.
According to the policy.json file, an administrator and the users as share owners can manage access to shares by creating access rules. Using the manila access-allow, manila access-deny, and manila access-list commands, you can grant, deny, and list access to a specified share, respectively.
Note
Manila does not provide end-to-end management of the storage system. You still need to separately protect the back-end system from unauthorized access. As a result, the protection offered by the manila API can still be circumvented if someone compromises the back-end storage device, thereby gaining out-of-band access.
When a share is first created, there are no default access rules associated with it and no permission to mount it. This is reflected in the mount configuration for the export protocol in use.
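For example, once the owner decides which clients may use a share, the commands mentioned above can grant, list, and revoke access. This is a sketch: the share name, subnet, and rule ID are placeholders, and the access level and authentication method used here are described below.
manila access-allow myshare ip 203.0.113.0/24 --access-level ro
manila access-list myshare
manila access-deny myshare <access_rule_id>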
At the protocol level, this empty default is visible in the export configuration: with NFS, for instance, the exportfs command or the /etc/exports file on the storage controls each remote share and defines the hosts that can access it. It is empty if nobody can mount a share. For a remote CIFS server, the net conf list command shows the configuration. The hosts deny parameter should be set by the share driver to 0.0.0.0/0, which means that any host is denied permission to mount the share.
Using manila, you can grant or deny access to a share by specifying one of these supported share access levels:
rw - Read and write (RW) access. This is the default value.
ro - Read-only (RO) access.
Note
The RO access level can be helpful with public shares when the administrator gives read and write (RW) access to certain editors or contributors and read-only (RO) access to the rest of the users (viewers).
You must also specify one of these supported authentication methods:
ip - Uses an IP address to authenticate an instance. IP access can be provided to clients addressable by well-formed IPv4 or IPv6 addresses or subnets denoted in CIDR notation.
cert - Uses a TLS certificate to authenticate an instance. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the common name (CN) of the certificate.
user - Authenticates by a specified user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long.
Note
Supported authentication methods depend on which share driver, security service, and shared file system protocol you use. Supported shared file system protocols are MapRFS, CEPHFS, NFS, CIFS, GlusterFS, and HDFS. Supported security services are LDAP, Kerberos, and the Microsoft Active Directory service.
To verify that access rules (ACL) were configured correctly for a share, you can list its permissions.
Note
When selecting a security service for your share, you need to consider whether the share driver is able to create access rules using the available authentication methods. Supported security services are LDAP, Kerberos, and Microsoft Active Directory.
17.9. Share type access control
A share type is an administrator-defined type of service, comprising a project-visible description and a list of non-project-visible key-value pairs called extra specifications. The manila-scheduler uses extra specifications to make scheduling decisions, and drivers control share creation.
An administrator can create and delete share types, and can also manage the extra specifications that give them meaning inside manila. Projects can list the share types and use them to create new shares. Share types can be created as public or private. This is the level of visibility for the share type that defines whether other projects can see it in a share type list and use it to create a new share. By default, share types are created as public. When creating a share type, set the --is_public parameter to False to make your share type private, which prevents other projects from seeing it in a list of share types and creating new shares with it. Public share types, on the other hand, are available to every project in a cloud.
Manila allows an administrator to grant or deny projects access to private share types. You can also get information about the access for a specified private share type.
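For example, a private share type can be created and then opened to a single project as follows. This is a sketch: the type name, the driver_handles_share_servers value, and the project ID are placeholders.
manila type-create <share_type> false --is_public false
manila type-access-add <share_type> <project_id>
manila type-access-list <share_type>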
Note Because share types, through their extra specifications, help to filter or choose back ends before users create a share, you can use access to share types to limit which clients can create shares on specific back ends. For example, an administrator user in the admin project can create a private share type named my_type and see it in the list. In the console examples below, the logging in and out is omitted, and environment variables are provided to show the currently logged-in user. The demo user in the demo project can list the types, but the private share type named my_type is not visible to them. The administrator can grant access to the private share type for the demo project with the project ID equal to df29a37db5ae48d19b349fe947fada46 : As a result, users in the demo project can see the private share type and use it to create shares: To deny access for a specified project, use manila type-access-remove <share_type> <project_id> . Note For an example that demonstrates the purpose of share types, consider a situation where you have two back ends: LVM as public storage and Ceph as private storage. In this case you can grant access to certain projects and control access with the user/group authentication method. 17.10. Policies The Shared File Systems service API is gated with role-based access control policies. These policies determine which user can access certain APIs in a certain way, and they are defined in the service's policy.json file. Note The configuration file policy.json may be placed anywhere. The path /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json is expected by default. Whenever an API call is made to manila, the policy engine uses the appropriate policy definitions to determine whether the call can be accepted. A policy rule determines under which circumstances the API call is permitted. The /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json file contains rules where an action is always permitted, when the rule is an empty string: "" ; rules based on the user role; and rules with boolean expressions. Below is a snippet of the policy.json file for manila. It can be expected to change between OpenStack releases. Users must be assigned to the groups and roles that you refer to in your policies. This is done automatically by the service when user management commands are used. Note Any changes to /var/lib/config-data/puppet-generated/manila/etc/manila/policy.json are effective immediately, which allows new policies to be implemented while manila is running. Manual modification of the policy can have unexpected side effects and is not encouraged. Manila does not provide a default policy file; all the default policies are within the code base. You can generate the default policies from the manila code by executing: oslopolicy-sample-generator --config-file=var/lib/config-data/puppet-generated/manila/etc/manila/manila-policy-generator.conf
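As an illustration of how these rules are evaluated, the snippet below shows how an operator might tighten one of the rules that appears in the policy.json snippet referenced above. Treat it as a sketch rather than a recommended setting: replacing the empty-string rule for share_extension:quotas:show with rule:admin_api restricts quota viewing to administrators, matching the rule already used for quota updates and deletions.

{
    "share_extension:quotas:show": "rule:admin_api",
    "share_extension:quotas:update": "rule:admin_api",
    "share_extension:quotas:delete": "rule:admin_api"
}

Because policy changes take effect immediately, verify the resulting behavior with a non-admin user before applying such a change in production.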
[ "manila endpoints +-------------+-----------------------------------------+ | manila | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v1/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v1/20787a7b...| | internalURL | http://172.18.198.55:8786/v1/20787a7b...| | id | 82cc5535aa444632b64585f138cb9b61 | +-------------+-----------------------------------------+ +-------------+-----------------------------------------+ | manilav2 | Value | +-------------+-----------------------------------------+ | adminURL | http://172.18.198.55:8786/v2/20787a7b...| | region | RegionOne | | publicURL | http://172.18.198.55:8786/v2/20787a7b...| | internalURL | http://172.18.198.55:8786/v2/20787a7b...| | id | 2e8591bfcac4405fa7e5dc3fd61a2b85 | +-------------+-----------------------------------------+", "api-paste.ini manila.conf policy.json rootwrap.conf rootwrap.d ./rootwrap.d: share.filters", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+----------------------------------+----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+----------------------------------+----------------------+ | 5..| default| public | YES | driver_handles_share_servers:True| snapshot_support:True| +----+--------+-----------+-----------+----------------------------------+----------------------+", "env | grep OS_ OS_USERNAME=admin OS_TENANT_NAME=admin USD openstack project list +----------------------------------+--------------------+ | ID | Name | +----------------------------------+--------------------+ | ... | ... 
| | df29a37db5ae48d19b349fe947fada46 | demo | +----------------------------------+--------------------+ USD manila type-access-add my_type df29a37db5ae48d19b349fe947fada46", "env | grep OS_ OS_USERNAME=demo OS_TENANT_NAME=demo USD manila type-list --all +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs | +----+--------+-----------+-----------+-----------------------------------+-----------------------+ | 4..| my_type| private | - | driver_handles_share_servers:False| snapshot_support:True | | 5..| default| public | YES | driver_handles_share_servers:True | snapshot_support:True | +----+--------+-----------+-----------+-----------------------------------+-----------------------+", "{ \"context_is_admin\": \"role:admin\", \"admin_or_owner\": \"is_admin:True or project_id:%(project_id)s\", \"default\": \"rule:admin_or_owner\", \"share_extension:quotas:show\": \"\", \"share_extension:quotas:update\": \"rule:admin_api\", \"share_extension:quotas:delete\": \"rule:admin_api\", \"share_extension:quota_classes\": \"\", }" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/security_and_hardening_guide/assembly_hardening-the-shared-file-system_security_and_hardening
Appendix A. Inventory file variables
Appendix A. Inventory file variables The following tables contain information about the pre-defined variables used in Ansible installation inventory files. Not all of these variables are required. A.1. General variables Variable Description enable_insights_collection The default install registers the node to the Red Hat Insights for Red Hat Ansible Automation Platform Service if the node is registered with Subscription Manager. Set to False to disable. Default = true registry_password registry_password is only required if a non-bundle installer is used. Password credential for access to registry_url . Used for both [automationcontroller] and [automationhub] groups. Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry. When registry_url is registry.redhat.io , username and password are required if not using bundle installer. registry_url Used for both [automationcontroller] and [automationhub] groups. Default = registry.redhat.io . registry_username registry_username is only required if a non-bundle installer is used. User credential for access to registry_url . Used for both [automationcontroller] and [automationhub] groups, but only if the value of registry_url is registry.redhat.io . Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry. routable_hostname routable hostname is used if the machine running the installer can only route to the target host through a specific URL, for example, if you use shortnames in your inventory, but the node running the installer can only resolve that host using FQDN. If routable_hostname is not set, it should default to ansible_host . Then if, and only if ansible_host is not set, inventory_hostname is used as a last resort. Note that this variable is used as a host variable for particular hosts and not under the [all:vars] section. For further information, see Assigning a variable to one machine:host variables A.2. Ansible automation hub variables Variable Description automationhub_admin_password Required automationhub_api_token If upgrading from Ansible Automation Platform 2.0 or earlier, you must either: provide an existing Ansible automation hub token as automationhub_api_token , or set generate_automationhub_token to true to generate a new token Generating a new token invalidates the existing token. automationhub_authentication_backend This variable is not set by default. Set it to ldap to use LDAP authentication. When this is set to ldap , you must also set the following variables: automationhub_ldap_server_uri automationhub_ldap_bind_dn automationhub_ldap_bind_password automationhub_ldap_user_search_base_dn automationhub_ldap_group_search_base_dn automationhub_auto_sign_collections If a collection signing service is enabled, collections are not signed automatically by default. Setting this parameter to true signs them by default. Default = false . automationhub_backup_collections Optional Ansible automation hub provides artifacts in /var/lib/pulp . Automation controller automatically backs up the artifacts by default. You can also set automationhub_backup_collections = false and the backup/restore process does not then backup or restore /var/lib/pulp . Default = true automationhub_collection_seed_repository When the bundle installer is run, validated content is uploaded to the validated repository, and certified content is uploaded to the rh-certified repository. 
By default, both certified and validated content are uploaded. Possible values of this variable are 'certified' or 'validated'. If you do not want to install content, set automationhub_seed_collections to false to disable the seeding. If you only want one type of content, set automationhub_seed_collections to true and automationhub_collection_seed_repository to the type of content you do want to include. automationhub_collection_signing_service_key If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed. /absolute/path/to/key/to/sign automationhub_collection_signing_service_script If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed. /absolute/path/to/script/that/signs automationhub_create_default_collection_signing_service The default install does not create a signing service. If set to true a signing service is created. Default = false automationhub_container_signing_service_key If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed. /absolute/path/to/key/to/sign automationhub_container_signing_service_script If a collection signing service is enabled, you must provide this variable to ensure that containers can be properly signed. /absolute/path/to/script/that/signs automationhub_create_default_contaier_signing_service The default install does not create a signing service. If set to true a signing service is created. Default = false automationhub_disable_hsts The default install deploys a TLS enabled Ansible automation hub. Use if automation hub is deployed with HTTP Strict Transport Security (HSTS) web-security policy enabled. Unless specified otherwise, the HSTS web-security policy mechanism is enabled. This setting allows you to disable it if required. Default = false automationhub_disable_https Optional If Ansible automation hub is deployed with HTTPS enabled. Default = false . automationhub_enable_api_access_log When set to true , creates a log file at /var/log/galaxy_api_access.log that logs all user actions made to the platform, including their username and IP address. Default = false . automationhub_enable_analytics A Boolean indicating whether to enable pulp analytics for the version of pulpcore used in automation hub in Ansible Automation Platform 2.3. To enable pulp analytics, set automationhub_enable_analytics = true . Default = false . automationhub_enable_unauthenticated_collection_access Enables unauthorized users to view collections. To enable unauthorized users to view collections, set automationhub_enable_unauthenticated_collection_access = true . Default = false . automation_hub_enable_unauthenticated_collection_download Enables unauthorized users to download collections. To enable unauthorized users to download collections, set automationhub_enable_unauthenticated_collection_download = true . Default = false . automationhub_importer_settings Optional Dictionary of setting to pass to galaxy-importer. At import time collections can go through a series of checks. Behavior is driven by galaxy-importer.cfg configuration. Examples are ansible-doc , ansible-lint , and flake8 . This parameter enables you to drive this configuration. automationhub_main_url The main {HubNameShort} URL that clients connect to. For example, https://<load balancer host>. If not specified, the first node in the [automationhub] group is used. 
Use automationhub_main_url to specify the main automation hub URL that clients connect to if you are implementing Red Hat Single Sign-On on your automation hub environment. automationhub_pg_database Required The database name. Default = automationhub automationhub_pg_host Required if not using internal database. automationhub_pg_password The password for the automation hub PostgreSQL database. Do not use special characters for automationhub_pg_password . They can cause the password to fail. automationhub_pg_port Required if not using internal database. Default = 5432 automationhub_pg_sslmode Required. Default = prefer automationhub_pg_username Required Default = automationhub automationhub_require_content_approval Optional If automation hub enforces the approval mechanism before collections are made available. By default when you upload collections to automation hub an administrator must approve it before it is made available to the users. If you want to disable the content approval flow, set the variable to false . Default = true automationhub_seed_collections A boolean that defines whether or not preloading is enabled. When the bundle installer is run, by a new repository is created by default in private automation hub named validated` and the list of the validated collections is updated. If you do not want to install content, set automationhub_seed_collections to false to disable the seeding. If you only want one type of content, set automationhub_seed_collections to true and automationhub_collection_seed_repository to the type of content you do want to include. Default = true automationhub_ssl_cert Optional /path/to/automationhub.cert Same as web_server_ssl_cert but for automation hub UI and API automationhub_ssl_key Optional /path/to/automationhub.key Same as web_server_ssl_key but for automation hub UI and API automationhub_ssl_validate_certs For Red Hat Ansible Automation Platform 2.3 and later, this value is no longer used. If automation hub should validate certificate when requesting itself because by default, Ansible Automation Platform deploys with self-signed certificates. Default = false . automationhub_upgrade Deprecated For Ansible Automation Platform 2.2.1 and later, the value of this has been fixed at true. Automation hub always updates with the latest packages. generate_automationhub_token If upgrading from Red Hat Ansible Automation Platform 2.0 or earlier, you must either: provide an existing Ansible automation hub token as automationhub_api_token or set generate_automationhub_token to true to generate a new token. Generating a new token will invalidate the existing token. nginx_hsts_max_age This variable specifies how long, in seconds, the system should be considered as a HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication. Default = 63072000 seconds, or two years. nginx_tls_protocols Defines support for ssl_protocols in Nginx. Default = TLSv1.2 . pulp_db_fields_key Relative or absolute path to the Fernet symmetric encryption key you want to import. The path is on the Ansible management node. It is used to encrypt certain fields in the database (such as credentials.) If not specified, a new key will be generated. For Ansible automation hub to connect to LDAP directly; the following variables must be configured. 
A list of other LDAP related variables (not covered by the automationhub_ldap_xxx variables below) that can be passed using the ldap_extra_settings variable can be found here: https://django-auth-ldap.readthedocs.io/en/latest/reference.html#settings Variable Description automationhub_ldap_bind_dn The name to use when binding to the LDAP server with automationhub_ldap_bind_password . automationhub_ldap_bind_password Required The password to use with automationhub_ldap_bind_dn . automationhub_ldap_group_search_base_dn An LDAPSearch object that finds all LDAP groups that users might belong to. If your configuration makes any references to LDAP groups, this and automationhub_ldap_group_type must be set. Default = None automatiohub_ldap_group_search_filter Optional Search filter for finding group membership. Variable identifies what objectClass type to use for mapping groups with automation hub and LDAP. Used for installing automation hub with LDAP. Default = (objectClass=Group) automationhub_ldap_group_search_scope Optional Scope to search for groups in an LDAP tree using the django framework for LDAP authentication. Used for installing automation hub with LDAP. Default = SUBTREE automationhub_ldap_group_type_class Optional Variable identifies the group type used during group searches within the django framework for LDAP authentication. Used for installing automation hub with LDAP. Default = django_auth_ldap.config:GroupOfNamesType automationhub_ldap_server_uri The URI of the LDAP server. This can be any URI that is supported by your underlying LDAP libraries. automationhub_ldap_user_search_base_dn An LDAPSearch object that locates a user in the directory. The filter parameter should contain the placeholder %(user)s for the username. It must return exactly one result for authentication to succeed. automationhub_ldap_user-search_scope Optional Scope to search for users in an LDAP tree using django framework for LDAP authentication. Used for installing automation hub with LDAP. Default = `SUBTREE A.3. Red Hat Single Sign-On variables *Use these variables for automationhub or automationcatalog . Variable Description sso_automation_platform_login_theme Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Path to the directory where theme files are located. If changing this variable, you must provide your own theme files. Default = ansible-automation-platform sso_automation_platform_realm Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. The name of the realm in SSO. Default = ansible-automation-platform sso_automation_platform_realm_displayname Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Display name for the realm. Default = Ansible Automation Platform sso_console_admin_username Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. SSO administration username. Default = admin sso_console_admin_password Required Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. SSO administration password. sso_custom_keystore_file Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Customer-provided keystore for SSO. sso_host Required Used for Ansible Automation Platform externally managed Red Hat Single Sign-On only. Automation hub and Automation services catalog require SSO and SSO administration credentials for authentication. 
SSO administration credentials are also required to set automation services catalog specific roles needed for the application. If SSO is not provided in the inventory for configuration, then you must use this variable to define the SSO host. sso_keystore_file_remote Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Set to true if the customer-provided keystore is on a remote node. Default = false sso_keystore_name Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Name of keystore for SSO. Default = ansible-automation-platform sso_keystore_password Password for keystore for HTTPS enabled SSO. Required when using Ansible Automation Platform managed SSO and when HTTPS is enabled. The default install deploys SSO with sso_use_https=true . sso_redirect_host Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. If sso_redirect_host is set, it is used by the application to connect to SSO for authentication. This must be reachable from client machines. sso_ssl_validate_certs Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Set to true if the certificate is to be validated during connection. Default = true sso_use_https Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. If Single Sign On uses https. Default = true A.4. Automation services catalog variables Variable Description automationcatalog_controller_password Used to generate a token from a controller host. Requires automation_controller_main_url to be defined as well. automationcatalog_controller_token Used for a pre-created OAuth token for automation controller. This token will be used instead of generating a token. automationcatalog_controller_username Used to generate a token from a controller host. Requires automation_controller_main_url to be defined as well. automationcatalog_controller_verify_ssl Used to enable or disable SSL validation from automation services catalog to automation controller. Default = true . automationcatalog_disable_hsts Used to enable or disable HSTS web-security policy for automation services catalog. Default = `false. automationcatalog_disable_https Used to enable or disable HSTS web-security policy for Services Catalog. Default = false . automationcatalog_enable_analytics_collection Used to control activation of analytics collection for automation services catalog automationcatalog_main_url Used by the Red Hat Single Sign-On host configuration if there is an alternative hostname that needs to be used between the SSO and automation services catalog host. automationcatalog_pg_database The postgres database URL for your automation services catalog. automationcatalog_pg_host The PostgreSQL host (database node) for your automation services catalog automationcatalog_pg_password The password for the PostgreSQL database of your automation services catalog. Do not use special characters for automationcatalog_pg_password . They can cause the password to fail. automationcatalog_pg_port The PostgreSQL port to use for your automation services catalog. Default = 5432 automationcatalog_pg_username The postgres ID for your automation services catalog. automationcatalog_ssl_cert Path to a custom provided SSL certificate file. Requires automationcatalog_ssl_key The internally managed CA signs and creates the certificate if not provided and https is left enabled. 
automationcatalog_ssl_key Path to a custom provided SSL certificate key file. Requires automationcatalog_ssl_cert . The internally managed CA signs and creates the certificate if not provided and https is left enabled. A.5. Automation controller variables Variable Description admin_password The password for an administration user to access the UI upon install completion. automation_controller_main_url For an alternative front end URL needed for SSO configuration with automation services catalog, provide the URL. Automation services catalog requires either Controller to be installed with automation controller, or a URL to an active and routable Controller server must be provided with this variable automationcontroller_password Password for your automation controller instance. automationcontroller_username Username for your automation controller instance. nginx_http_port The nginx HTTP server listens for inbound connections. Default = 80 nginx_https_port The nginx HTTPS server listens for secure connections. Default = 443 nginx_hsts_max_age This variable specifies how long, in seconds, the system should be considered as a HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication. Default = 63072000 seconds, or two years. nginx_tls_protocols Defines support for ssl_protocols in Nginx. Default = TLSv1.2 . node_state Optional The status of a node or group of nodes. Valid options are active , deprovision to remove a node from a cluster or iso_migrate to migrate a legacy isolated node to an execution node. Default = active . node_type For [automationcontroller] group. Two valid node_types can be assigned for this group. A node_type=control implies that the node only runs project and inventory updates, but not regular jobs. A node_type=hybrid has the ability to run everything. Default for this group = hybrid . For [execution_nodes] group Two valid node_types can be assigned for this group. A node_type=hop implies that the node forwards jobs to an execution node. A node_type=execution implies that the node can run jobs. Default for this group = execution . peers Optional The peers variable is used to indicate which nodes a specific host or group connects to. Wherever the peers variable is defined, an outbound connection will be established to the specific host or group. This variable is used to add tcp-peer entries in the receptor.conf file used for establishing network connections with other nodes. See Peering The peers variable can be a comma-separated list of hosts and/or groups from the inventory. This is resolved into a set of hosts that is used to construct the receptor.conf file. pg_database The name of the postgres database. Default = awx . pg_host The postgreSQL host, which can be an externally managed database. pg_password The password for the postgreSQL database. Do not use special characters for pg_password . They can cause the password to fail. NOTE You no longer have to provide a pg_hashed_password in your inventory file at the time of installation because PostgreSQL 13 can now store user passwords more securely. When you supply pg_password in the inventory file for the installer, PostgreSQL uses the SCRAM-SHA-256 hash to secure that password as part of the installation process. pg_port The postgreSQL port to use. Default = 5432 pg_ssl_mode One of prefer or verify-full . Set to verify-full for client-side enforced SSL. Default = prefer . pg_username Your postgres database username. Default = awx . 
postgres_ssl_cert location of postgres ssl certificate. /path/to/pgsql_ssl.cert postgres_ssl_key location of postgres ssl key. /path/to/pgsql_ssl.key postgres_use_cert Location of postgres user certificate. /path/to/pgsql.crt postgres_use_key Location of postgres user key. /path/to/pgsql.key postgres_use_ssl If postgres is to use SSL. receptor_listener_port Port to use for recptor connection. Default = 27199. supervisor_start_retry_count When specified (no default value exists), adds startretries = <value specified> to the supervisor config file (/etc/supervisord.d/tower.ini). See program:x Section Values for further explanation about startretries . web_server_ssl_cert Optional /path/to/webserver.cert Same as automationhub_ssl_cert but for web server UI and API. web_server_ssl_key Optional /path/to/webserver.key Same as automationhub_server_ssl_key but for web server UI and API. A.6. Ansible variables The following variables control how Ansible Automation Platform interacts with remote hosts. Additional information on variables specific to certain plugins can be found at https://docs.ansible.com/ansible-core/devel/collections/ansible/builtin/index.html A list of global configuration options can be found at https://docs.ansible.com/ansible-core/devel/reference_appendices/config.html Variable Description ansible_connection The connection plugin used for the task on the target host. This can be the name of any of ansible connection plugin. SSH protocol types are smart , ssh or paramiko . Default = smart ansible_host The ip or name of the target host to use instead of inventory_hostname . ansible_port The connection port number, if not, the default (22 for ssh). ansible_user The user name to use when connecting to the host. ansible_password The password to use to authenticate to the host. Never store this variable in plain text. Always use a vault. ansible_ssh_private_key_file Private key file used by ssh. Useful if using multiple keys and you do not want to use an SSH agent. ansible_ssh_common_args This setting is always appended to the default command line for sftp , scp , and ssh . Useful to configure a ProxyCommand for a certain host (or group). ansible_sftp_extra_args This setting is always appended to the default sftp command line. ansible_scp_extra_args This setting is always appended to the default scp command line. ansible_ssh_extra_args This setting is always appended to the default ssh command line. ansible_ssh_pipelining Determines if SSH pipelining is used. This can override the pipelining setting in ansible.cfg . If using SSH key-based authentication, then the key must be managed by an SSH agent. ansible_ssh_executable (added in version 2.2) This setting overrides the default behavior to use the system ssh. This can override the ssh_executable setting in ansible.cfg . ansible_shell_type The shell type of the target system. You should not use this setting unless you have set the ansible_shell_executable to a non-Bourne (sh) compatible shell. By default commands are formatted using sh-style syntax. Setting this to csh or fish causes commands executed on target systems to follow the syntax of those shells instead. ansible_shell_executable This sets the shell that the ansible controller uses on the target machine, and overrides the executable in ansible.cfg which defaults to /bin/sh . You should only change if it is not possible to use /bin/sh , that is, if /bin/sh is not installed on the target machine or cannot be run from sudo. 
inventory_hostname This variable takes the hostname of the machine from the inventory script or the ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable.
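To illustrate how the variables in this appendix fit together, the following is a minimal example inventory sketch. The hostnames, passwords, and database host are hypothetical placeholders and must be replaced with values for your environment; only a small subset of the variables described above is shown, and the exact quoting and group layout should be checked against the inventory file shipped with your installer.

[automationcontroller]
controller.example.com node_type=hybrid

[execution_nodes]
exec1.example.com node_type=execution peers=controller.example.com

[automationhub]
hub.example.com

[all:vars]
admin_password='<controller_admin_password>'
pg_host='db.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<database_password>'
automationhub_admin_password='<hub_admin_password>'
automationhub_pg_host='db.example.com'
automationhub_pg_password='<hub_database_password>'
registry_url='registry.redhat.io'
registry_username='<registry_service_account_username>'
registry_password='<registry_service_account_password>'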
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars
11.15. Managing Split-brain
11.15. Managing Split-brain Split-brain is a state of data inconsistency that occurs when different data sources in a cluster have different ideas about what the correct, current state of that data should be. This can happen because of how servers are separated in a network design, or because of a failure condition in which servers stop communicating and synchronizing their data with each other. In Red Hat Gluster Storage, split-brain is a term applicable to Red Hat Gluster Storage volumes in a replicate configuration. A file is said to be in split-brain when the copies of the same file in the different bricks that constitute the replica pair have mismatching data and/or metadata contents such that they conflict with each other and automatic healing is not possible. In this scenario, you can decide which is the correct file (source) and which is the one that requires healing (sink) by inspecting the mismatching files from the backend bricks. The AFR translator in glusterFS makes use of extended attributes to keep track of the operations on a file. These attributes determine which brick is the correct source when a file requires healing. If the files are clean, the extended attributes are all zeroes, indicating that no heal is necessary. When a heal is required, they are marked in such a way that there is a distinguishable source and sink and the heal can happen automatically. However, when a split-brain occurs, these extended attributes are marked in such a way that both bricks mark themselves as sources, making automatic healing impossible. Split-brain occurs when a difference exists between multiple copies of the same file, and Red Hat Gluster Storage is unable to determine which version is correct. Applications are restricted from executing certain operations like read and write on the disputed file when split-brain happens. Attempting to access the files results in the application receiving an input/output error on the disputed file. The three types of split-brain that occur in Red Hat Gluster Storage are: Data split-brain: Contents of the file under split-brain are different in different replica pairs and automatic healing is not possible. Red Hat allows the user to resolve data split-brain from the mount point and from the CLI. For information on how to recover from data split-brain from the mount point, see Section 11.15.2.1, " Recovering File Split-brain from the Mount Point" . For information on how to recover from data split-brain using the CLI, see Section 11.15.2.2, "Recovering File Split-brain from the gluster CLI" . Metadata split-brain: The metadata of the files, such as user-defined extended attributes, is different and automatic healing is not possible. Like data split-brain, metadata split-brain can also be resolved from both the mount point and the CLI. For information on how to recover from metadata split-brain from the mount point, see Section 11.15.2.1, " Recovering File Split-brain from the Mount Point" . For information on how to recover from metadata split-brain using the CLI, see Section 11.15.2.2, "Recovering File Split-brain from the gluster CLI" . Entry split-brain: Entry split-brain can be of two types: GlusterFS Internal File Identifier or GFID split-brain: This happens when files or directories in different replica pairs have different GFIDs. Type Mismatch Split-brain: This happens when the files/directories stored in replica pairs are of different types but with the same names. Red Hat Gluster Storage 3.4 and later allows you to resolve GFID split-brain from the gluster CLI. 
For more information, see Section 11.15.3, "Recovering GFID Split-brain from the gluster CLI" . You can resolve split-brain manually by inspecting the file contents from the backend and deciding which is the true copy (source) and modifying the appropriate extended attributes such that healing can happen automatically. 11.15.1. Preventing Split-brain To prevent split-brain in the trusted storage pool, you must configure server-side and client-side quorum. 11.15.1.1. Configuring Server-Side Quorum The quorum configuration in a trusted storage pool determines the number of server failures that the trusted storage pool can sustain. If an additional failure occurs, the trusted storage pool will become unavailable. If too many server failures occur, or if there is a problem with communication between the trusted storage pool nodes, it is essential that the trusted storage pool be taken offline to prevent data loss. After configuring the quorum ratio at the trusted storage pool level, you must enable the quorum on a particular volume by setting cluster.server-quorum-type volume option as server . For more information on this volume option, see Section 11.1, "Configuring Volume Options" . Configuration of the quorum is necessary to prevent network partitions in the trusted storage pool. Network Partition is a scenario where, a small set of nodes might be able to communicate together across a functioning part of a network, but not be able to communicate with a different set of nodes in another part of the network. This can cause undesirable situations, such as split-brain in a distributed system. To prevent a split-brain situation, all the nodes in at least one of the partitions must stop running to avoid inconsistencies. This quorum is on the server-side, that is, the glusterd service. Whenever the glusterd service on a machine observes that the quorum is not met, it brings down the bricks to prevent data split-brain. When the network connections are brought back up and the quorum is restored, the bricks in the volume are brought back up. When the quorum is not met for a volume, any commands that update the volume configuration or peer addition or detach are not allowed. It is to be noted that both, the glusterd service not running and the network connection between two machines being down are treated equally. You can configure the quorum percentage ratio for a trusted storage pool. If the percentage ratio of the quorum is not met due to network outages, the bricks of the volume participating in the quorum in those nodes are taken offline. By default, the quorum is met if the percentage of active nodes is more than 50% of the total storage nodes. However, if the quorum ratio is manually configured, then the quorum is met only if the percentage of active storage nodes of the total storage nodes is greater than or equal to the set value. To configure the quorum ratio, use the following command: For example, to set the quorum to 51% of the trusted storage pool: In this example, the quorum ratio setting of 51% means that more than half of the nodes in the trusted storage pool must be online and have network connectivity between them at any given time. If a network disconnect happens to the storage pool, then the bricks running on those nodes are stopped to prevent further writes. 
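For example, the quorum ratio and the per-volume server quorum described here are configured with commands of the following form, where VOLNAME is a placeholder for your volume name:

gluster volume set all cluster.server-quorum-ratio 51%
gluster volume set VOLNAME cluster.server-quorum-type server

The first command sets the pool-wide quorum ratio; the second enables the server-side quorum for a specific volume, as described in the next paragraph.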
You must ensure to enable the quorum on a particular volume to participate in the server-side quorum by running the following command: Important For a two-node trusted storage pool, it is important to set the quorum ratio to be greater than 50% so that two nodes separated from each other do not both believe they have a quorum. For a replicated volume with two nodes and one brick on each machine, if the server-side quorum is enabled and one of the nodes goes offline, the other node will also be taken offline because of the quorum configuration. As a result, the high availability provided by the replication is ineffective. To prevent this situation, a dummy node can be added to the trusted storage pool which does not contain any bricks. This ensures that even if one of the nodes which contains data goes offline, the other node will remain online. Note that if the dummy node and one of the data nodes goes offline, the brick on other node will be also be taken offline, and will result in data unavailability. 11.15.1.2. Configuring Client-Side Quorum By default, when replication is configured, clients can modify files as long as at least one brick in the replica group is available. If network partitioning occurs, different clients are only able to connect to different bricks in a replica set, potentially allowing different clients to modify a single file simultaneously. For example, imagine a three-way replicated volume is accessed by two clients, C1 and C2, who both want to modify the same file. If network partitioning occurs such that client C1 can only access brick B1, and client C2 can only access brick B2, then both clients are able to modify the file independently, creating split-brain conditions on the volume. The file becomes unusable, and manual intervention is required to correct the issue. Client-side quorum allows administrators to set a minimum number of bricks that a client must be able to access in order to allow data in the volume to be modified. If client-side quorum is not met, files in the replica set are treated as read-only. This is useful when three-way replication is configured. Client-side quorum is configured on a per-volume basis, and applies to all replica sets in a volume. If client-side quorum is not met for X of Y volume sets, only X volume sets are treated as read-only; the remaining volume sets continue to allow data modification. Earlier, the replica subvolume turned read-only when the quorum does not met. With rhgs-3.4.3, the subvolume becomes unavailable as all the file operations fail with ENOTCONN error instead of becoming EROFS. This means the cluster.quorum-reads volume option is also not supported. Client-Side Quorum Options cluster.quorum-count The minimum number of bricks that must be available in order for writes to be allowed. This is set on a per-volume basis. Valid values are between 1 and the number of bricks in a replica set. This option is used by the cluster.quorum-type option to determine write behavior. This option is used in conjunction with cluster.quorum-type =fixed option to specify the number of bricks to be active to participate in quorum. If the quorum-type is auto then this option has no significance. cluster.quorum-type Determines when the client is allowed to write to a volume. Valid values are fixed and auto . If cluster.quorum-type is fixed , writes are allowed as long as the number of bricks available in the replica set is greater than or equal to the value of the cluster.quorum-count option. 
If cluster.quorum-type is auto , writes are allowed when at least 50% of the bricks in a replica set are be available. In a replica set with an even number of bricks, if exactly 50% of the bricks are available, the first brick in the replica set must be available in order for writes to continue. In a three-way replication setup, it is recommended to set cluster.quorum-type to auto to avoid split-brains. If the quorum is not met, the replica pair becomes read-only. Example 11.7. Client-Side Quorum In the above scenario, when the client-side quorum is not met for replica group A , only replica group A becomes read-only. Replica groups B and C continue to allow data modifications. Configure the client-side quorum using cluster.quorum-type and cluster.quorum-count options. Important When you integrate Red Hat Gluster Storage with Red Hat Enterprise Virtualization, the client-side quorum is enabled when you run gluster volume set VOLNAME group virt command. If on a two replica set up, if the first brick in the replica pair is offline, virtual machines will be paused because quorum is not met and writes are disallowed. Consistency is achieved at the cost of fault tolerance. If fault-tolerance is preferred over consistency, disable client-side quorum with the following command: Example - Setting up server-side and client-side quorum to avoid split-brain scenario This example provides information on how to set server-side and client-side quorum on a Distribute Replicate volume to avoid split-brain scenario. The configuration of this example has 3 X 3 ( 9 bricks) Distribute Replicate setup. Setting Server-side Quorum Enable the quorum on a particular volume to participate in the server-side quorum by running the following command: Set the quorum to 51% of the trusted storage pool: In this example, the quorum ratio setting of 51% means that more than half of the nodes in the trusted storage pool must be online and have network connectivity between them at any given time. If a network disconnect happens to the storage pool, then the bricks running on those nodes are stopped to prevent further writes. Setting Client-side Quorum Set the quorum-type option to auto to allow writes to the file only if the percentage of active replicate bricks is more than 50% of the total number of bricks that constitute that replica. In this example, as there are only two bricks in the replica pair, the first brick must be up and running to allow writes. Important Atleast n/2 bricks need to be up for the quorum to be met. If the number of bricks ( n ) in a replica set is an even number, it is mandatory that the n/2 count must consist of the primary brick and it must be up and running. If n is an odd number, the n/2 count can have any brick up and running, that is, the primary brick need not be up and running to allow writes. 11.15.2. Recovering from File Split-brain You can recover from the data and meta-data split-brain using one of the following methods: See Section 11.15.2.1, " Recovering File Split-brain from the Mount Point" for information on how to recover from data and meta-data split-brain from the mount point. See Section 11.15.2.2, "Recovering File Split-brain from the gluster CLI" for information on how to recover from data and meta-data split-brain using CLI For information on resolving entry/type-mismatch split-brain, see Chapter 23, Manually Recovering File Split-brain . 11.15.2.1. 
Recovering File Split-brain from the Mount Point Steps to recover from a split-brain from the mount point You can use a set of getfattr and setfattr commands to detect the data and meta-data split-brain status of a file and resolve split-brain from the mount point. Important This process for split-brain resolution from mount will not work on NFS mounts as it does not provide extended attributes support. In this example, the test-volume volume has bricks brick0 , brick1 , brick2 and brick3 . Directory structure of the bricks is as follows: In the following output, some of the files in the volume are in split-brain. To know data or meta-data split-brain status of a file: The above command executed from mount provides information if a file is in data or meta-data split-brain. This command is not applicable to entry/type-mismatch split-brain. For example, file100 is in meta-data split-brain. Executing the above mentioned command for file100 gives : file1 is in data split-brain. file99 is in both data and meta-data split-brain. dir is in entry/type-mismatch split-brain but as mentioned earlier, the above command is does not display if the file is in entry/type-mismatch split-brain. Hence, the command displays The file is not under data or metadata split-brain . For information on resolving entry/type-mismatch split-brain, see Chapter 23, Manually Recovering File Split-brain . file2 is not in any kind of split-brain. Analyze the files in data and meta-data split-brain and resolve the issue When you perform operations like cat , getfattr , and more from the mount on files in split-brain, it throws an input/output error. For further analyzing such files, you can use setfattr command. Using this command, a particular brick can be chosen to access the file in split-brain. For example, file1 is in data-split-brain and when you try to read from the file, it throws input/output error. Split-brain choices provided for file1 were test-client-2 and test-client-3 . Setting test-client-2 as split-brain choice for file1 serves reads from b2 for the file. Now, you can perform operations on the file. For example, read operations on the file: Similarly, to inspect the file from other choice, replica.split-brain-choice is to be set to test-client-3 . Trying to inspect the file from a wrong choice errors out. You can undo the split-brain-choice that has been set, the above mentioned setfattr command can be used with none as the value for extended attribute. For example, Now performing cat operation on the file will again result in input/output error, as before. After you decide which brick to use as a source for resolving the split-brain, it must be set for the healing to be done. Example The above process can be used to resolve data and/or meta-data split-brain on all the files. Setting the split-brain-choice on the file After setting the split-brain-choice on the file, the file can be analyzed only for five minutes. If the duration of analyzing the file needs to be increased, use the following command and set the required time in timeout-in-minute argument. This is a global timeout and is applicable to all files as long as the mount exists. The timeout need not be set each time a file needs to be inspected but for a new mount it will have to be set again for the first time. This option becomes invalid if the operations like add-brick or remove-brick are performed. 
Note If fopen-keep-cache FUSE mount option is disabled, then inode must be invalidated each time before selecting a new replica.split-brain-choice to inspect a file using the following command: 11.15.2.2. Recovering File Split-brain from the gluster CLI You can resolve the split-brain from the gluster CLI by the following ways: Use bigger-file as source Use the file with latest mtime as source Use one replica as source for a particular file Use one replica as source for all files Note The entry/type-mismatch split-brain resolution is not supported using CLI. For information on resolving entry/type-mismatch split-brain, see Chapter 23, Manually Recovering File Split-brain . Selecting the bigger-file as source This method is useful for per file healing and where you can decided that the file with bigger size is to be considered as source. Run the following command to obtain the list of files that are in split-brain: From the command output, identify the files that are in split-brain. You can find the differences in the file size and md5 checksums by performing a stat and md5 checksums on the file from the bricks. The following is the stat and md5 checksum output of a file: You can notice the differences in the file size and md5 checksums. Execute the following command along with the full file name as seen from the root of the volume (or) the gfid-string representation of the file, which is displayed in the heal info command's output. For example, After the healing is complete, the md5sum and file size on both bricks must be same. The following is a sample output of the stat and md5 checksums command after completion of healing the file. Selecting the file with latest mtime as source This method is useful for per file healing and if you want the file with latest mtime has to be considered as source. Run the following command to obtain the list of files that are in split-brain: From the command output, identify the files that are in split-brain. You can find the differences in the file size and md5 checksums by performing a stat and md5 checksums on the file from the bricks. The following is the stat and md5 checksum output of a file: You can notice the differences in the md5 checksums, and the modify time. Execute the following command In this command, FILE can be either the full file name as seen from the root of the volume or the gfid-string representation of the file. For example, After the healing is complete, the md5 checksum, file size, and modify time on both bricks must be same. The following is a sample output of the stat and md5 checksums command after completion of healing the file. You can notice that the file has been healed using the brick having the latest mtime (brick b1, in this example) as the source. Selecting one replica as source for a particular file This method is useful if you know which file is to be considered as source. Run the following command to obtain the list of files that are in split-brain: From the command output, identify the files that are in split-brain. You can find the differences in the file size and md5 checksums by performing a stat and md5 checksums on the file from the bricks. The following is the stat and md5 checksum output of a file: You can notice the differences in the file size and md5 checksums. Execute the following command In this command, FILE present in <HOSTNAME:BRICKNAME> is taken as source for healing. For example, After the healing is complete, the md5 checksum and file size on both bricks must be same. 
The following is a sample output of the stat and md5 checksums command after completion of healing the file. Selecting one replica as source for all files This method is useful if you know want to use a particular brick as a source for the split-brain files in that replica pair. Run the following command to obtain the list of files that are in split-brain: From the command output, identify the files that are in split-brain. Execute the following command In this command, for all the files that are in split-brain in this replica, <HOSTNAME:BRICKNAME> is taken as source for healing. For example, 11.15.3. Recovering GFID Split-brain from the gluster CLI With this release, Red Hat Gluster Storage allows you to resolve GFID split-brain from the gluster CLI. You can use one of the following policies to resolve GFID split-brain: Use bigger-file as source Use the file with latest mtime as source Use one replica as source for a particular file Note The entry/type-mismatch split-brain resolution is not supported using CLI. For information on resolving entry/type-mismatch split-brain, see Chapter 23, Manually Recovering File Split-brain . Selecting the bigger-file as source This method is useful for per file healing and where you can decided that the file with bigger size is to be considered as source. Run the following command to obtain the path of the file that is in split-brain: From the output, identify the files for which file operations performed from the client failed with input/output error. For example, In the above command, 12 is the volume name, b0 and b1 are the bricks. Execute the below command on the brick to fetch information if a file is in GFID split-brain. The getfattr command is used to obtain and verify the AFR changelog extended attributes of the files. For example, You can notice the difference in GFID for the file f5 in both the bricks. You can find the differences in the file size by executing stat command on the file from the bricks. The following is the output of the file f5 in bricks b0 and b1 : Execute the following command along with the full filename as seen from the root of the volume which is displayed in the heal info command's output: For example, After the healing is complete, the file size on both bricks must be the same as that of the file which had the bigger size. The following is a sample output of the getfattr command after completion of healing the file. Selecting the file with latest mtime as source This method is useful for per file healing and if you want the file with latest mtime has to be considered as source. Run the following command to obtain the list of files that are in split-brain: From the output, identify the files for which file operations performed from the client failed with input/output error. For example, In the above command, 12 is the volume name, b0 and b1 are the bricks. The below command executed from backend provides information if a file is in GFID split-brain. For example, You can notice the difference in GFID for the file f4 in both the bricks. You can find the difference in the modify time by executing stat command on the file from the bricks. The following is the output of the file f4 in bricks b0 and b1 : Execute the following command: For example, After the healing is complete, the GFID of the files on both bricks must be same. The following is a sample output of the getfattr command after completion of healing the file. You can notice that the file has been healed using the brick having the latest mtime as the source. 
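As a sketch of the CLI syntax for the healing policies described in this section, the split-brain resolution commands take the following general forms, where VOLNAME, FILE, and HOSTNAME:BRICKNAME are placeholders:

gluster volume heal VOLNAME split-brain bigger-file FILE
gluster volume heal VOLNAME split-brain latest-mtime FILE
gluster volume heal VOLNAME split-brain source-brick HOSTNAME:BRICKNAME FILE
gluster volume heal VOLNAME split-brain source-brick HOSTNAME:BRICKNAME

The last form, which omits FILE, applies only to data and metadata split-brain; for GFID split-brain the source-brick policy must be run per file, as described in the next subsection.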
Selecting one replica as source for a particular file This method is useful if you know which file is to be considered as source. Run the following command to obtain the list of files that are in split-brain: From the output, identify the files for which file operations performed from the client failed with input/output error. For example, In the above command, 12 is the volume name, b0 and b1 are the bricks. Note With one replica as source option, there is no way to resolve all the GFID split-brain in one shot by not specifying any file-path in the CLI as done for data/metadata split-brain resolutions. For each file in GFID split-brain, you have to run the heal command separately. The below command executed from backend provides information if a file is in GFID split-brain. For example, You can notice the difference in GFID for the file f3 in both the bricks. Execute the following command: In this command, FILE present in HOSTNAME : export-directory-absolute-path is taken as source for healing. For example, After the healing is complete, the GFID of the file on both the bricks should be same as that of the file which had bigger size. The following is a sample output of the getfattr command after the file is healed. Note You can not use the GFID of the file as an argument with any of the CLI options to resolve GFID split-brain. It should be the absolute path as seen from the mount point to the file considered as source. With source-brick option there is no way to resolve all the GFID split-brain in one shot by not specifying any file-path in the CLI as done while resolving data or metadata split-brain. For each file in GFID split-brain, run the CLI with the policy you want to use. Resolving directory GFID split-brain using CLI with the "source-brick" option in a "distributed-replicated" volume needs to be done on all the volumes explicitly. Since directories get created on all the subvolumes, using one particular brick as source for directory GFID split-brain, heal the directories for that subvolume. In this case, other subvolumes must be healed using the brick which has same GFID as that of the brick which was used as source for healing other subvolume. For information on resolving entry/type-mismatch split-brain, see Chapter 23, Manually Recovering File Split-brain . 11.15.4. Triggering Self-Healing on Replicated Volumes For replicated volumes, when a brick goes offline and comes back online, self-healing is required to re-sync all the replicas. There is a self-heal daemon which runs in the background, and automatically initiates self-healing every 10 minutes on any files which require healing. Multithreaded Self-heal Self-heal daemon has the capability to handle multiple heals in parallel and is supported on Replicate and Distribute-replicate volumes. However, increasing the number of heals has impact on I/O performance so the following options have been provided. The cluster.shd-max-threads volume option controls the number of entries that can be self healed in parallel on each replica by self-heal daemon using. Using cluster.shd-wait-qlength volume option, you can configure the number of entries that must be kept in the queue for self-heal daemon threads to take up as soon as any of the threads are free to heal. For more information on cluster.shd-max-threads and cluster.shd-wait-qlength volume set options, see Section 11.1, "Configuring Volume Options" . 
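For example, the number of parallel heals and the heal queue length can be raised with commands of the following form; the values shown are illustrative only and should be tuned against the I/O impact on your workload:

gluster volume set VOLNAME cluster.shd-max-threads 4
gluster volume set VOLNAME cluster.shd-wait-qlength 2048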
There are various commands that can be used to check the healing status of volumes and files, or to manually initiate healing: To view the list of files that need healing: For example, to view the list of files on test-volume that need healing: To trigger self-healing only on the files which require healing: For example, to trigger self-healing on files which require healing on test-volume: To trigger self-healing on all the files on a volume: For example, to trigger self-heal on all the files on test-volume: To view the list of files on a volume that are in a split-brain state: For example, to view the list of files on test-volume that are in a split-brain state:
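For convenience, the commands referenced in this list are summarized below for the test-volume example; the volume name is illustrative:

    gluster volume heal test-volume info               # list files that need healing
    gluster volume heal test-volume                    # heal only the files that require healing
    gluster volume heal test-volume full               # heal all files on the volume
    gluster volume heal test-volume info split-brain   # list files that are in split-brain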
[ "gluster volume set all cluster.server-quorum-ratio PERCENTAGE", "gluster volume set all cluster.server-quorum-ratio 51%", "gluster volume set VOLNAME cluster.server-quorum-type server", "gluster volume reset VOLNAME quorum-type", "gluster volume info testvol Volume Name: testvol Type: Distributed-Replicate Volume ID: 0df52d58-bded-4e5d-ac37-4c82f7c89cfh Status: Created Number of Bricks: 3 x 3 = 9 Transport-type: tcp Bricks: Brick1: server1:/rhgs/brick1 Brick2: server2:/rhgs/brick2 Brick3: server3:/rhgs/brick3 Brick4: server4:/rhgs/brick4 Brick5: server5:/rhgs/brick5 Brick6: server6:/rhgs/brick6 Brick7: server7:/rhgs/brick7 Brick8: server8:/rhgs/brick8 Brick9: server9:/rhgs/brick9", "gluster volume set VOLNAME cluster.server-quorum-type server", "gluster volume set all cluster.server-quorum-ratio 51%", "gluster volume set VOLNAME quorum-type auto", "gluster volume info test-volume Volume Name: test-volume Type: Distributed-Replicate Status: Started Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1: test-host:/rhgs/brick0 Brick2: test-host:/rhgs/brick1 Brick3: test-host:/rhgs/brick2 Brick4: test-host:/rhgs/brick3", "tree -R /test/b? /rhgs/brick0 ├── dir │ └── a └── file100 /rhgs/brick1 ├── dir │ └── a └── file100 /rhgs/brick2 ├── dir ├── file1 ├── file2 └── file99 /rhgs/brick3 ├── dir ├── file1 ├── file2 └── file99", "gluster volume heal test-volume info split-brain Brick test-host:/rhgs/brick0/ /file100 /dir Number of entries in split-brain: 2 Brick test-host:/rhgs/brick1/ /file100 /dir Number of entries in split-brain: 2 Brick test-host:/rhgs/brick2/ /file99 <gfid:5399a8d1-aee9-4653-bb7f-606df02b3696> Number of entries in split-brain: 2 Brick test-host:/rhgs/brick3/ <gfid:05c4b283-af58-48ed-999e-4d706c7b97d5> <gfid:5399a8d1-aee9-4653-bb7f-606df02b3696> Number of entries in split-brain: 2", "getfattr -n replica.split-brain-status <path-to-file>", "getfattr -n replica.split-brain-status file100 file: file100 replica.split-brain-status=\"data-split-brain:no metadata-split-brain:yes Choices:test-client-0,test-client-1\"", "getfattr -n replica.split-brain-status file1 file: file1 replica.split-brain-status=\"data-split-brain:yes metadata-split-brain:no Choices:test-client-2,test-client-3\"", "getfattr -n replica.split-brain-status file99 file: file99 replica.split-brain-status=\"data-split-brain:yes metadata-split-brain:yes Choices:test-client-2,test-client-3\"", "getfattr -n replica.split-brain-status dir file: dir replica.split-brain-status=\"The file is not under data or metadata split-brain\"", "getfattr -n replica.split-brain-status file2 file: file2 replica.split-brain-status=\"The file is not under data or metadata split-brain\"", "setfattr -n replica.split-brain-choice -v \"choiceX\" <path-to-file>", "cat file1 cat: file1: Input/output error", "setfattr -n replica.split-brain-choice -v test-client-2 file1", "cat file1 xyz", "setfattr -n replica.split-brain-choice -v none file1", "cat file cat: file1: Input/output error", "setfattr -n replica.split-brain-heal-finalize -v <heal-choice> <path-to-file>", "setfattr -n replica.split-brain-heal-finalize -v test-client-2 file1", "setfattr -n replica.split-brain-choice-timeout -v <timeout-in-minutes> <mount_point/file>", "setfattr -n inode-invalidate -v 0 <path-to-file>", "gluster volume heal VOLNAME info split-brain", "Brick <hostname:brickpath-b1> <gfid:aaca219f-0e25-4576-8689-3bfd93ca70c2> <gfid:39f301ae-4038-48c2-a889-7dac143e82dd> <gfid:c3c94de2-232d-4083-b534-5da17fc476ac> Number of entries in split-brain: 3 Brick 
<hostname:brickpath-b2> /dir/file1 /dir /file4 Number of entries in split-brain: 3", "On brick b1: stat b1/dir/file1 File: 'b1/dir/file1' Size: 17 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919362 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 13:55:40.149897333 +0530 Modify: 2015-03-06 13:55:37.206880347 +0530 Change: 2015-03-06 13:55:37.206880347 +0530 Birth: - md5sum b1/dir/file1 040751929ceabf77c3c0b3b662f341a8 b1/dir/file1 On brick b2: stat b2/dir/file1 File: 'b2/dir/file1' Size: 13 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919365 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 13:54:22.974451898 +0530 Modify: 2015-03-06 13:52:22.910758923 +0530 Change: 2015-03-06 13:52:22.910758923 +0530 Birth: - md5sum b2/dir/file1 cb11635a45d45668a403145059c2a0d5 b2/dir/file1", "gluster volume heal <VOLNAME> split-brain bigger-file <FILE>", "gluster volume heal test-volume split-brain bigger-file /dir/file1 Healed /dir/file1.", "On brick b1: stat b1/dir/file1 File: 'b1/dir/file1' Size: 17 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919362 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 14:17:27.752429505 +0530 Modify: 2015-03-06 13:55:37.206880347 +0530 Change: 2015-03-06 14:17:12.880343950 +0530 Birth: - md5sum b1/dir/file1 040751929ceabf77c3c0b3b662f341a8 b1/dir/file1 On brick b2: stat b2/dir/file1 File: 'b2/dir/file1' Size: 17 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919365 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 14:17:23.249403600 +0530 Modify: 2015-03-06 13:55:37.206880000 +0530 Change: 2015-03-06 14:17:12.881343955 +0530 Birth: - md5sum b2/dir/file1 040751929ceabf77c3c0b3b662f341a8 b2/dir/file1", "gluster volume heal VOLNAME info split-brain", "Brick <hostname:brickpath-b1> <gfid:aaca219f-0e25-4576-8689-3bfd93ca70c2> <gfid:39f301ae-4038-48c2-a889-7dac143e82dd> <gfid:c3c94de2-232d-4083-b534-5da17fc476ac> Number of entries in split-brain: 3 Brick <hostname:brickpath-b2> /dir/file1 /dir /file4 Number of entries in split-brain: 3", "On brick b1: stat b1/file4 File: 'b1/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919356 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 13:53:19.417085062 +0530 Modify: 2015-03-06 13:53:19.426085114 +0530 Change: 2015-03-06 13:53:19.426085114 +0530 Birth: - md5sum b1/file4 b6273b589df2dfdbd8fe35b1011e3183 b1/file4 On brick b2: stat b2/file4 File: 'b2/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919358 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 13:52:35.761833096 +0530 Modify: 2015-03-06 13:52:35.769833142 +0530 Change: 2015-03-06 13:52:35.769833142 +0530 Birth: - md5sum b2/file4 0bee89b07a248e27c83fc3d5951213c1 b2/file4", "gluster volume heal <VOLNAME> split-brain latest-mtime <FILE>", "gluster volume heal test-volume split-brain latest-mtime /file4 Healed /file4", "On brick b1: stat b1/file4 File: 'b1/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919356 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 14:23:38.944609863 +0530 Modify: 2015-03-06 13:53:19.426085114 +0530 Change: 2015-03-06 14:27:15.058927962 +0530 Birth: - md5sum b1/file4 b6273b589df2dfdbd8fe35b1011e3183 b1/file4 On brick 
b2: stat b2/file4 File: 'b2/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919358 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 14:23:38.944609000 +0530 Modify: 2015-03-06 13:53:19.426085000 +0530 Change: 2015-03-06 14:27:15.059927968 +0530 Birth: md5sum b2/file4 b6273b589df2dfdbd8fe35b1011e3183 b2/file4", "gluster volume heal VOLNAME info split-brain", "Brick <hostname:brickpath-b1> <gfid:aaca219f-0e25-4576-8689-3bfd93ca70c2> <gfid:39f301ae-4038-48c2-a889-7dac143e82dd> <gfid:c3c94de2-232d-4083-b534-5da17fc476ac> Number of entries in split-brain: 3 Brick <hostname:brickpath-b2> /dir/file1 /dir /file4 Number of entries in split-brain: 3", "On brick b1: stat b1/file4 File: 'b1/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919356 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 13:53:19.417085062 +0530 Modify: 2015-03-06 13:53:19.426085114 +0530 Change: 2015-03-06 13:53:19.426085114 +0530 Birth: - md5sum b1/file4 b6273b589df2dfdbd8fe35b1011e3183 b1/file4 On brick b2: stat b2/file4 File: 'b2/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919358 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 13:52:35.761833096 +0530 Modify: 2015-03-06 13:52:35.769833142 +0530 Change: 2015-03-06 13:52:35.769833142 +0530 Birth: - md5sum b2/file4 0bee89b07a248e27c83fc3d5951213c1 b2/file4", "gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME> <FILE>", "gluster volume heal test-volume split-brain source-brick test-host:b1 /file4 Healed /file4", "On brick b1: stat b1/file4 File: 'b1/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919356 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 14:23:38.944609863 +0530 Modify: 2015-03-06 13:53:19.426085114 +0530 Change: 2015-03-06 14:27:15.058927962 +0530 Birth: - md5sum b1/file4 b6273b589df2dfdbd8fe35b1011e3183 b1/file4 On brick b2: stat b2/file4 File: 'b2/file4' Size: 4 Blocks: 16 IO Block: 4096 regular file Device: fd03h/64771d Inode: 919358 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Access: 2015-03-06 14:23:38.944609000 +0530 Modify: 2015-03-06 13:53:19.426085000 +0530 Change: 2015-03-06 14:27:15.059927968 +0530 Birth: - md5sum b2/file4 b6273b589df2dfdbd8fe35b1011e3183 b2/file4", "gluster volume heal VOLNAME info split-brain", "gluster volume heal <VOLNAME> split-brain source-brick <HOSTNAME:BRICKNAME>", "gluster volume heal test-volume split-brain source-brick test-host:b1", "gluster volume heal VOLNAME info split-brain", "gluster volume heal 12 info split-brain", "Brick 10.70.47.45:/bricks/brick2/b0 /f5 / - Is in split-brain Status: Connected Number of entries: 2 Brick 10.70.47.144:/bricks/brick2/b1 /f5 / - Is in split-brain Status: Connected Number of entries: 2", "getfattr -d -e hex -m. <path-to-file>", "On brick /b0 getfattr -d -m . -e hex /bricks/brick2/b0/f5 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f5 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.12-client-1=0x000000020000000100000000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0xce0a9956928e40afb78e95f78defd64f trusted.gfid2path.9cde09916eabc845=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6635 On brick /b1 getfattr -d -m . 
-e hex /bricks/brick2/b1/f5 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b1/f5 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.12-client-0=0x000000020000000100000000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0x9563544118653550e888ab38c232e0c trusted.gfid2path.9cde09916eabc845=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6635", "On brick /b0 stat /bricks/brick2/b0/f5 File: '/bricks/brick2/b0/f5' Size: 15 Blocks: 8 IO Block: 4096 regular file Device: fd15h/64789d Inode: 67113350 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Context: system_u:object_r:glusterd_brick_t:s0 Access: 2018-08-29 20:46:26.353751073 +0530 Modify: 2018-08-29 20:46:26.361751203 +0530 Change: 2018-08-29 20:47:16.363751236 +0530 Birth: - On brick /b1 stat /bricks/brick2/b1/f5 File: '/bricks/brick2/b1/f5' Size: 2 Blocks: 8 IO Block: 4096 regular file Device: fd15h/64789d Inode: 67111750 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Context: system_u:object_r:glusterd_brick_t:s0 Access: 2018-08-29 20:44:56.153301616 +0530 Modify: 2018-08-29 20:44:56.161301745 +0530 Change: 2018-08-29 20:44:56.162301761 +0530 Birth: -", "gluster volume heal VOLNAME split-brain bigger-file FILE", "gluster volume heal 12 split-brain bigger-file /f5 GFID split-brain resolved for file /f5", "On brick /b0 getfattr -d -m . -e hex /bricks/brick2/b0/f5 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f5 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.gfid=0xce0a9956928e40afb78e95f78defd64f trusted.gfid2path.9cde09916eabc845=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6635 On brick /b1 getfattr -d -m . -e hex /bricks/brick2/b1/f5 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b1/f5 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.gfid=0xce0a9956928e40afb78e95f78defd64f trusted.gfid2path.9cde09916eabc845=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6635", "gluster volume heal VOLNAME info split-brain", "gluster volume heal 12 info split-brain", "Brick 10.70.47.45:/bricks/brick2/b0 /f4 / - Is in split-brain Status: Connected Number of entries: 2 Brick 10.70.47.144:/bricks/brick2/b1 /f4 / - Is in split-brain Status: Connected Number of entries: 2", "getfattr -d -e hex -m. <path-to-file>", "On brick /b0 getfattr -d -m . -e hex /bricks/brick2/b0/f4 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f4 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.12-client-1=0x000000020000000100000000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0xb66b66d07b315f3c9cffac2fb6422a28 trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634 On brick /b1 getfattr -d -m . 
-e hex /bricks/brick2/b1/f4 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b1/f4 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.12-client-0=0x000000020000000100000000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0x87242f808c6e56a007ef7d49d197acff trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634", "On brick /b0 stat /bricks/brick2/b0/f4 File: '/bricks/brick2/b0/f4' Size: 14 Blocks: 8 IO Block: 4096 regular file Device: fd15h/64789d Inode: 67113349 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Context: system_u:object_r:glusterd_brick_t:s0 Access: 2018-08-29 20:57:38.913629991 +0530 Modify: 2018-08-29 20:57:38.921630122 +0530 Change: 2018-08-29 20:57:38.923630154 +0530 Birth: - On brick /b1 stat /bricks/brick2/b1/f4 File: '/bricks/brick2/b1/f4' Size: 2 Blocks: 8 IO Block: 4096 regular file Device: fd15h/64789d Inode: 67111749 Links: 2 Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root) Context: system_u:object_r:glusterd_brick_t:s0 Access: 2018-08-24 20:54:50.953217256 +0530 Modify: 2018-08-24 20:54:50.961217385 +0530 Change: 2018-08-24 20:54:50.962217402 +0530 Birth: -", "gluster volume heal VOLNAME split-brain latest-mtime FILE", "gluster volume heal 12 split-brain latest-mtime /f4 GFID split-brain resolved for file /f4", "On brick /b0 getfattr -d -m . -e hex /bricks/brick2/b0/f4 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f4 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.gfid=0xb66b66d07b315f3c9cffac2fb6422a28 trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634 On brick /b1 getfattr -d -m . -e hex /bricks/brick2/b1/f4 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b1/f4 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.gfid=0xb66b66d07b315f3c9cffac2fb6422a28 trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634", "gluster volume heal VOLNAME info split-brain", "gluster volume heal 12 info split-brain", "Brick 10.70.47.45:/bricks/brick2/b0 /f3 / - Is in split-brain Status: Connected Number of entries: 2 Brick 10.70.47.144:/bricks/brick2/b1 /f3 / - Is in split-brain Status: Connected Number of entries: 2", "getfattr -d -e hex -m. <path-to-file>", "getfattr -d -m . -e hex /bricks/brick2/b0/f3 On brick /b0 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f3 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.12-client-1=0x000000020000000100000000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0x9d542fb1b3b15837a2f7f9dcdf5d6ee8 trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634 On brick /b1 getfattr -d -m . 
-e hex /bricks/brick2/b1/f3 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f3 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.afr.12-client-1=0x000000020000000100000000 trusted.afr.dirty=0x000000000000000000000000 trusted.gfid=0xc90d9b0f65f6530b95b9f3f8334033df trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634", "gluster volume heal VOLNAME split-brain source-brick HOSTNAME : export-directory-absolute-path FILE", "gluster volume heal 12 split-brain source-brick 10.70.47.144:/bricks/brick2/b1 /f3 GFID split-brain resolved for file /f3", "On brick /b0 getfattr -d -m . -e hex /bricks/brick2/b0/f3 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b0/f3 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.gfid=0x90d9b0f65f6530b95b9f3f8334033df trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634 On brick /b1 getfattr -d -m . -e hex /bricks/brick2/b1/f3 getfattr: Removing leading '/' from absolute path names file: bricks/brick2/b1/f3 security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000 trusted.gfid=0x90d9b0f65f6530b95b9f3f8334033df trusted.gfid2path.364f55367c7bd6f4=0x30303030303030302d303030302d303030302d303030302d3030303030303030303030312f6634", "gluster volume heal VOLNAME info", "gluster volume heal test-volume info Brick server1 :/gfs/test-volume_0 Number of entries: 0 Brick server2 :/gfs/test-volume_1 /95.txt /32.txt /66.txt /35.txt /18.txt /26.txt - Possibly undergoing heal /47.txt /55.txt /85.txt - Possibly undergoing heal Number of entries: 101", "gluster volume heal VOLNAME", "gluster volume heal test-volume Heal operation on volume test-volume has been successful", "gluster volume heal VOLNAME full", "gluster volume heal test-volume full Heal operation on volume test-volume has been successful", "gluster volume heal VOLNAME info split-brain", "gluster volume heal test-volume info split-brain Brick server1:/gfs/test-volume_2 Number of entries: 12 at path on brick ---------------------------------- 2012-06-13 04:02:05 /dir/file.83 2012-06-13 04:02:05 /dir/file.28 2012-06-13 04:02:05 /dir/file.69 Brick server2:/gfs/test-volume_2 Number of entries: 12 at path on brick ---------------------------------- 2012-06-13 04:02:05 /dir/file.83 2012-06-13 04:02:05 /dir/file.28 2012-06-13 04:02:05 /dir/file.69" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-managing_split-brain
Chapter 4. Managing namespace buckets
Chapter 4. Managing namespace buckets Namespace buckets let you connect data repositories on different providers together, so that you can interact with all of your data through a single unified view. Add the object bucket associated with each provider to the namespace bucket, and access your data through the namespace bucket to see all of your object buckets at once. This lets you write to your preferred storage provider while reading from multiple other storage providers, greatly reducing the cost of migrating to a new storage provider. Note A namespace bucket can only be used if its write target is available and functional. 4.1. Amazon S3 API endpoints for objects in namespace buckets You can interact with objects in the namespace buckets using the Amazon Simple Storage Service (S3) API. Ensure that the credentials provided for the Multicloud Object Gateway (MCG) enable you to perform the AWS S3 namespace bucket operations. You can use the AWS tool, aws-cli, to verify that all the operations can be performed on the target bucket. Also, listing the buckets of this MCG account shows the target bucket. Red Hat OpenShift Data Foundation supports the following namespace bucket operations: ListBuckets ListObjects ListMultipartUploads ListObjectVersions GetObject HeadObject CopyObject PutObject CreateMultipartUpload UploadPartCopy UploadPart ListParts AbortMultipartUpload PutObjectTagging DeleteObjectTagging GetObjectTagging GetObjectAcl PutObjectAcl DeleteObject DeleteObjects See the Amazon S3 API reference documentation for the most up-to-date information about these operations and how to use them. Additional resources Amazon S3 REST API Reference Amazon S3 CLI Reference 4.2. Adding a namespace bucket using the Multicloud Object Gateway CLI and YAML For more information about namespace buckets, see Managing namespace buckets . Depending on the type of your deployment and whether you want to use YAML or the Multicloud Object Gateway (MCG) CLI, choose one of the following procedures to add a namespace bucket: Adding an AWS S3 namespace bucket using YAML Adding an IBM COS namespace bucket using YAML Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI 4.2.1. Adding an AWS S3 namespace bucket using YAML Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). For information, see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: where <namespacestore-secret-name> is a unique NamespaceStore name. You must provide and encode your own AWS access key ID and secret access key using Base64 , and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <resource-name> The name you want to give to the resource. <namespacestore-secret-name> The secret created in the previous step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. 
The namespace policy requires a type of either single or multi . A namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. A namespace policy of type multi requires the following configuration: <my-bucket-class> A unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the names of the NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step using the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.2. Adding an IBM COS namespace bucket using YAML Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Procedure Create a secret with the credentials: <namespacestore-secret-name> A unique NamespaceStore name. You must provide and encode your own IBM COS access key ID and secret access key using Base64 , and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Create a NamespaceStore resource using OpenShift custom resource definitions (CRDs). A NamespaceStore represents underlying storage to be used as a read or write target for the data in the MCG namespace buckets. To create a NamespaceStore resource, apply the following YAML: <IBM COS ENDPOINT> The appropriate IBM COS endpoint. <namespacestore-secret-name> The secret created in the step. <namespace-secret> The namespace where the secret can be found. <target-bucket> The target bucket you created for the NamespaceStore. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy requires a type of either single or multi . The namespace policy of type single requires the following configuration: <my-bucket-class> The unique namespace bucket class name. <resource> The name of a single NamespaceStore that defines the read and write target of the namespace bucket. The namespace policy of type multi requires the following configuration: <my-bucket-class> The unique bucket class name. <write-resource> The name of a single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A list of the NamespaceStores names that defines the read targets of the namespace bucket. To create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step, apply the following YAML: <resource-name> The name you want to give to the resource. <my-bucket> The name you want to give to the bucket. <my-bucket-class> The bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.3. 
Adding an AWS S3 namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> The AWS access key ID and secret access key you created for this purpose. <bucket-name> The existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. The namespace policy can be either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single namespace-store that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single namespace-store that defines the write target of the namespace bucket. <read-resources>s A list of namespace-stores separated by commas that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and a ConfigMap with the same name and in the same namespace as that of the OBC. 4.2.4. Adding an IBM COS namespace bucket using the Multicloud Object Gateway CLI Prerequisites Openshift Container Platform with OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . Download the MCG command-line interface binary from the customer portal and make it executable. Note Choose either Linux(x86_64), Windows, or Mac OS. Procedure In the MCG command-line interface, create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. <namespacestore> The name of the NamespaceStore. <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> An IBM access key ID, secret access key, and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. <bucket-name> An existing IBM bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. Create a namespace bucket class that defines a namespace policy for the namespace buckets. 
The namespace policy requires a type of either single or multi . To create a namespace bucket class with a namespace policy of type single : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <resource> A single NamespaceStore that defines the read and write target of the namespace bucket. To create a namespace bucket class with a namespace policy of type multi : <resource-name> The name you want to give the resource. <my-bucket-class> A unique bucket class name. <write-resource> A single NamespaceStore that defines the write target of the namespace bucket. <read-resources> A comma-separated list of NamespaceStores that defines the read targets of the namespace bucket. Create a bucket using an Object Bucket Class (OBC) resource that uses the bucket class defined in the earlier step. <bucket-name> A bucket name of your choice. <custom-bucket-class> The name of the bucket class created in the step. After the OBC is provisioned by the operator, a bucket is created in the MCG, and the operator creates a Secret and ConfigMap with the same name and in the same namespace as that of the OBC. 4.3. Adding a namespace bucket using the OpenShift Container Platform user interface You can add namespace buckets using the OpenShift Container Platform user interface. For information about namespace buckets, see Managing namespace buckets . Prerequisites Ensure that Openshift Container Platform with OpenShift Data Foundation operator is already installed. Access to the Multicloud Object Gateway (MCG). Procedure On the OpenShift Web Console, navigate to Storage Object Storage Namespace Store tab. Click Create namespace store to create a namespacestore resources to be used in the namespace bucket. Enter a namespacestore name. Choose a provider and region. Either select an existing secret, or click Switch to credentials to create a secret by entering a secret key and secret access key. Enter a target bucket. Click Create . On the Namespace Store tab, verify that the newly created namespacestore is in the Ready state. Repeat steps 2 and 3 until you have created all the desired amount of resources. Navigate to Bucket Class tab and click Create Bucket Class . Choose Namespace BucketClass type radio button. Enter a BucketClass name and click . Choose a Namespace Policy Type for your namespace bucket, and then click . If your namespace policy type is Single , you need to choose a read resource. If your namespace policy type is Multi , you need to choose read resources and a write resource. If your namespace policy type is Cache , you need to choose a Hub namespace store that defines the read and write target of the namespace bucket. Select one Read and Write NamespaceStore which defines the read and write targets of the namespace bucket and click . Review your new bucket class details, and then click Create Bucket Class . Navigate to Bucket Class tab and verify that your newly created resource is in the Ready phase. Navigate to Object Bucket Claims tab and click Create Object Bucket Claim . Enter ObjectBucketClaim Name for the namespace bucket. Select StorageClass as openshift-storage.noobaa.io . Select the BucketClass that you created earlier for your namespacestore from the list. By default, noobaa-default-bucket-class gets selected. Click Create . The namespace bucket is created along with Object Bucket Claim for your namespace. Navigate to Object Bucket Claims tab and verify that the Object Bucket Claim created is in Bound state. 
Navigate to the Object Buckets tab and verify that your namespace bucket is present in the list and is in the Bound state. 4.4. Sharing legacy application data with cloud native application using S3 protocol Many legacy applications use file systems to share data sets. You can access and share the legacy data in the file system by using S3 operations. To share data, you need to do the following: Export the pre-existing file system datasets, that is, an RWX volume such as Ceph FileSystem (CephFS), or create new file system datasets using the S3 protocol. Access file system datasets from both the file system and the S3 protocol. Configure S3 accounts and map them to the existing or new file system unique identifiers (UIDs) and group identifiers (GIDs). 4.4.1. Creating a NamespaceStore to use a file system Prerequisites OpenShift Container Platform with the OpenShift Data Foundation operator installed. Access to the Multicloud Object Gateway (MCG). Procedure Log into the OpenShift Web Console. Click Storage Object Storage . Click the NamespaceStore tab to create NamespaceStore resources to be used in the namespace bucket. Click Create namespacestore . Enter a name for the NamespaceStore. Choose Filesystem as the provider. Choose the Persistent volume claim. Enter a folder name. If the folder name exists, then that folder is used to create the NamespaceStore; otherwise, a folder with that name is created. Click Create . Verify that the NamespaceStore is in the Ready state. 4.4.2. Creating accounts with NamespaceStore filesystem configuration You can either create a new account with NamespaceStore filesystem configuration or convert an existing normal account into a NamespaceStore filesystem account by editing the YAML. Note You cannot remove a NamespaceStore filesystem configuration from an account. Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a new account with NamespaceStore filesystem configuration using the MCG command-line interface. For example: allow_bucket_create Indicates whether the account is allowed to create new buckets. Supported values are true or false . Default value is true . allowed_buckets A comma-separated list of bucket names to which the user is allowed to have access and management rights. default_resource The NamespaceStore resource on which the new buckets will be created when using the S3 CreateBucket operation. The NamespaceStore must be backed by an RWX (ReadWriteMany) persistent volume claim (PVC). full_permission Indicates whether the account should be allowed full permission or not. Supported values are true or false . Default value is false . new_buckets_path The filesystem path where directories corresponding to new buckets will be created. The path is inside the filesystem of NamespaceStore filesystem PVCs where new directories are created to act as the filesystem mapping of newly created object bucket classes. nsfs_account_config A mandatory field that indicates whether the account is used for NamespaceStore filesystem. nsfs_only Indicates whether the account is used only for NamespaceStore filesystem or not. Supported values are true or false . Default value is false . If it is set to 'true', it prevents you from accessing other types of buckets. 
uid The user ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem gid The group ID of the filesystem to which the MCG account will be mapped and it is used to access and manage data on the filesystem The MCG system sends a response with the account configuration and its S3 credentials: You can list all the custom resource definition (CRD) based accounts by using the following command: If you are interested in a particular account, you can read its custom resource definition (CRD) directly by the account name: 4.4.3. Accessing legacy application data from the openshift-storage namespace When using the Multicloud Object Gateway (MCG) NamespaceStore filesystem (NSFS) feature, you need to have the Persistent Volume Claim (PVC) where the data resides in the openshift-storage namespace. In almost all cases, the data you need to access is not in the openshift-storage namespace, but in the namespace that the legacy application uses. In order to access data stored in another namespace, you need to create a PVC in the openshift-storage namespace that points to the same CephFS volume that the legacy application uses. Procedure Display the application namespace with scc : <application_namespace> Specify the name of the application namespace. For example: Navigate into the application namespace: For example: Ensure that a ReadWriteMany (RWX) PVC is mounted on the pod that you want to consume from the noobaa S3 endpoint using the MCG NSFS feature: Check the mount point of the Persistent Volume (PV) inside your pod. Get the volume name of the PV from the pod: <pod_name> Specify the name of the pod. For example: In this example, the name of the volume for the PVC is cephfs-write-workload-generator-no-cache-pv-claim . List all the mounts in the pod, and check for the mount point of the volume that you identified in the step: For example: Confirm the mount point of the RWX PV in your pod: <mount_path> Specify the path to the mount point that you identified in the step. For example: Ensure that the UID and SELinux labels are the same as the ones that the legacy namespace uses: For example: Get the information of the legacy application RWX PV that you want to make accessible from the openshift-storage namespace: <pv_name> Specify the name of the PV. For example: Ensure that the PVC from the legacy application is accessible from the openshift-storage namespace so that one or more noobaa-endpoint pods can access the PVC. Find the values of the subvolumePath and volumeHandle from the volumeAttributes . You can get these values from the YAML description of the legacy application PV: For example: Use the subvolumePath and volumeHandle values that you identified in the step to create a new PV and PVC object in the openshift-storage namespace that points to the same CephFS volume as the legacy application PV: Example YAML file : 1 The storage capacity of the PV that you are creating in the openshift-storage namespace must be the same as the original PV. 2 The volume handle for the target PV that you create in openshift-storage needs to have a different handle than the original application PV, for example, add -clone at the end of the volume handle. 3 The storage capacity of the PVC that you are creating in the openshift-storage namespace must be the same as the original PVC. Create the PV and PVC in the openshift-storage namespace using the YAML file specified in the step: <YAML_file> Specify the name of the YAML file. 
For example: Ensure that the PVC is available in the openshift-storage namespace: Navigate into the openshift-storage project: Create the NSFS namespacestore: <nsfs_namespacestore> Specify the name of the NSFS namespacestore. <cephfs_pvc_name> Specify the name of the CephFS PVC in the openshift-storage namespace. For example: Ensure that the noobaa-endpoint pod restarts and that it successfully mounts the PVC at the NSFS namespacestore, for example, /nsfs/legacy-namespace mountpoint: <noobaa_endpoint_pod_name> Specify the name of the noobaa-endpoint pod. For example: Create a MCG user account: <user_account> Specify the name of the MCG user account. <gid_number> Specify the GID number. <uid_number> Specify the UID number. Important Use the same UID and GID as that of the legacy application. You can find it from the output. For example: Create a MCG bucket. Create a dedicated folder for S3 inside the NSFS share on the CephFS PV and PVC of the legacy application pod: For example: Create the MCG bucket using the nsfs/ path: For example: Check the SELinux labels of the folders residing in the PVCs in the legacy application and openshift-storage namespaces: For example: For example: In these examples, you can see that the SELinux labels are not the same which results in permission denied or access issues. Ensure that the legacy application and openshift-storage pods use the same SELinux labels on the files. You can do this in one of the following ways: Section 4.4.3.1, "Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project" . Section 4.4.3.2, "Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC" . Delete the NSFS namespacestore: Delete the MCG bucket: For example: Delete the MCG user account: For example: Delete the NSFS namespacestore: For example: Delete the PV and PVC: Important Before you delete the PV and PVC, ensure that the PV has a retain policy configured. <cephfs_pv_name> Specify the CephFS PV name of the legacy application. <cephfs_pvc_name> Specify the CephFS PVC name of the legacy application. For example: 4.4.3.1. Changing the default SELinux label on the legacy application project to match the one in the openshift-storage project Display the current openshift-storage namespace with sa.scc.mcs : Edit the legacy application namespace, and modify the sa.scc.mcs with the value from the sa.scc.mcs of the openshift-storage namespace: For example: For example: Restart the legacy application pod. A relabel of all the files take place and now the SELinux labels match with the openshift-storage deployment. 4.4.3.2. Modifying the SELinux label only for the deployment config that has the pod which mounts the legacy application PVC Create a new scc with the MustRunAs and seLinuxOptions options, with the Multi Category Security (MCS) that the openshift-storage project uses. Example YAML file: Create a service account for the deployment and add it to the newly created scc . Create a service account: <service_account_name>` Specify the name of the service account. For example: Add the service account to the newly created scc : For example: Patch the legacy application deployment so that it uses the newly created service account. 
This allows you to specify the SELinux label in the deployment: For example: Edit the deployment to specify the security context to be used at the SELinux label in the deployment configuration: Add the following lines: <security_context_value> You can find this value when you execute the command to create a dedicated folder for S3 inside the NSFS share, on the CephFS PV and PVC of the legacy application pod. For example: Ensure that the security context to be used at the SELinux label in the deployment configuration is specified correctly: For example: The legacy application is restarted and begins using the same SELinux labels as the openshift-storage namespace.
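To recap the deployment-level approach in this subsection, a condensed sketch of the commands might look like the following, reusing the example names testnamespacesa, restricted-pvselinux, and cephfs-write-workload-generator-no-cache from the steps above; the SELinux level you configure must match the one reported for your openshift-storage project:

    # create a service account and grant it the custom scc
    oc create serviceaccount testnamespacesa
    oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa
    # make the legacy deployment use the new service account
    oc patch dc/cephfs-write-workload-generator-no-cache --patch '{"spec":{"template":{"spec":{"serviceAccountName": "testnamespacesa"}}}}'
    # confirm the SELinux level set in the deployment configuration
    oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext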
[ "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <resource-name> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>", "apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: bs namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <namespacestore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: single: resource: <resource>", "apiVersion: noobaa.io/v1alpha1 kind: BucketClass metadata: labels: app: noobaa name: <my-bucket-class> namespace: openshift-storage spec: namespacePolicy: type: Multi multi: writeResource: <write-resource> readResources: - <read-resources> - <read-resources>", "apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: <resource-name> namespace: openshift-storage spec: generateBucketName: <my-bucket> storageClassName: openshift-storage.noobaa.io additionalConfig: bucketclass: <my-bucket-class>", "noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources <read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass single <my-bucket-class> --resource <resource> -n openshift-storage", "noobaa bucketclass create namespace-bucketclass multi <my-bucket-class> --write-resource <write-resource> --read-resources 
<read-resources> -n openshift-storage", "noobaa obc create my-bucket-claim -n openshift-storage --app-namespace my-app --bucketclass <custom-bucket-class>", "noobaa account create <noobaa-account-name> [flags]", "noobaa account create testaccount --full_permission --nsfs_account_config --gid 10001 --uid 10001 -default_resource fs_namespacestore", "NooBaaAccount spec: allow_bucket_creation: true Allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store Nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001 INFO[0006] ✅ Exists: Secret \"noobaa-account-testaccount\" Connection info: AWS_ACCESS_KEY_ID : <aws-access-key-id> AWS_SECRET_ACCESS_KEY : <aws-secret-access-key>", "noobaa account list NAME ALLOWED_BUCKETS DEFAULT_RESOURCE PHASE AGE testaccount [*] noobaa-default-backing-store Ready 1m17s", "oc get noobaaaccount/testaccount -o yaml spec: allow_bucket_creation: true allowed_buckets: full_permission: true permission_list: [] default_resource: noobaa-default-namespace-store nsfs_account_config: gid: 10001 new_buckets_path: / nsfs_only: true uid: 10001", "oc get ns <application_namespace> -o yaml | grep scc", "oc get ns testnamespace -o yaml | grep scc openshift.io/sa.scc.mcs: s0:c26,c5 openshift.io/sa.scc.supplemental-groups: 1000660000/10000 openshift.io/sa.scc.uid-range: 1000660000/10000", "oc project <application_namespace>", "oc project testnamespace", "oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-write-workload-generator-no-cache-pv-claim Bound pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX ocs-storagecluster-cephfs 12s", "oc get pod NAME READY STATUS RESTARTS AGE cephfs-write-workload-generator-no-cache-1-cv892 1/1 Running 0 11s", "oc get pods <pod_name> -o jsonpath='{.spec.volumes[]}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.volumes[]}' {\"name\":\"app-persistent-storage\",\"persistentVolumeClaim\":{\"claimName\":\"cephfs-write-workload-generator-no-cache-pv-claim\"}}", "oc get pods <pod_name> -o jsonpath='{.spec.containers[].volumeMounts}'", "oc get pods cephfs-write-workload-generator-no-cache-1-cv892 -o jsonpath='{.spec.containers[].volumeMounts}' [{\"mountPath\":\"/mnt/pv\",\"name\":\"app-persistent-storage\"},{\"mountPath\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"name\":\"kube-api-access-8tnc5\",\"readOnly\":true}]", "oc exec -it <pod_name> -- df <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- df /mnt/pv main Filesystem 1K-blocks Used Available Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10485760 0 10485760 0% /mnt/pv", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "oc get pv | grep <pv_name>", "oc get pv | grep pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a 10Gi RWX Delete Bound testnamespace/cephfs-write-workload-generator-no-cache-pv-claim ocs-storagecluster-cephfs 47s", "oc get pv <pv_name> -o yaml", "oc get pv pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a -o yaml apiVersion: v1 kind: PersistentVolume metadata: annotations: pv.kubernetes.io/provisioned-by: openshift-storage.cephfs.csi.ceph.com creationTimestamp: \"2022-05-25T06:27:49Z\" finalizers: - kubernetes.io/pv-protection name: pvc-aa58fb91-c3d2-475b-bbee-68452a613e1a resourceVersion: \"177458\" uid: 683fa87b-5192-4ccf-af2f-68c6bcf8f500 spec: accessModes: - ReadWriteMany capacity: storage: 10Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: cephfs-write-workload-generator-no-cache-pv-claim namespace: testnamespace resourceVersion: \"177453\" uid: aa58fb91-c3d2-475b-bbee-68452a613e1a csi: controllerExpandSecretRef: name: rook-csi-cephfs-provisioner namespace: openshift-storage driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: clusterID: openshift-storage fsName: ocs-storagecluster-cephfilesystem storage.kubernetes.io/csiProvisionerIdentity: 1653458225664-8081-openshift-storage.cephfs.csi.ceph.com subvolumeName: csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213 subvolumePath: /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213 persistentVolumeReclaimPolicy: Delete storageClassName: ocs-storagecluster-cephfs volumeMode: Filesystem status: phase: Bound", "cat << EOF >> pv-openshift-storage.yaml apiVersion: v1 kind: PersistentVolume metadata: name: cephfs-pv-legacy-openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany capacity: storage: 10Gi 1 csi: driver: openshift-storage.cephfs.csi.ceph.com nodeStageSecretRef: name: rook-csi-cephfs-node namespace: openshift-storage volumeAttributes: # Volume Attributes can be copied from the Source testnamespace PV \"clusterID\": \"openshift-storage\" \"fsName\": \"ocs-storagecluster-cephfilesystem\" \"staticVolume\": \"true\" # rootpath is the subvolumePath: you copied from the Source testnamespace PV \"rootPath\": /volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c volumeHandle: 0001-0011-openshift-storage-0000000000000001-cc416d9e-dbf3-11ec-b286-0a580a810213-clone 2 persistentVolumeReclaimPolicy: Retain volumeMode: Filesystem --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cephfs-pvc-legacy namespace: openshift-storage spec: storageClassName: \"\" accessModes: - ReadWriteMany resources: requests: storage: 10Gi 3 volumeMode: Filesystem # volumeName should be same as PV name volumeName: cephfs-pv-legacy-openshift-storage EOF", "oc create -f <YAML_file>", "oc create -f pv-openshift-storage.yaml persistentvolume/cephfs-pv-legacy-openshift-storage created persistentvolumeclaim/cephfs-pvc-legacy created", "oc get pvc -n openshift-storage NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc-legacy Bound cephfs-pv-legacy-openshift-storage 10Gi RWX 14s", "oc project openshift-storage Now using project \"openshift-storage\" on server \"https://api.cluster-5f6ng.5f6ng.sandbox65.opentlc.com:6443\".", "noobaa namespacestore create nsfs 
<nsfs_namespacestore> --pvc-name=' <cephfs_pvc_name> ' --fs-backend='CEPH_FS'", "noobaa namespacestore create nsfs legacy-namespace --pvc-name='cephfs-pvc-legacy' --fs-backend='CEPH_FS'", "oc exec -it <noobaa_endpoint_pod_name> -- df -h /nsfs/ <nsfs_namespacestore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -- df -h /nsfs/legacy-namespace Filesystem Size Used Avail Use% Mounted on 172.30.202.87:6789,172.30.120.254:6789,172.30.77.247:6789:/volumes/csi/csi-vol-cc416d9e-dbf3-11ec-b286-0a580a810213/edcfe4d5-bdcb-4b8e-8824-8a03ad94d67c 10G 0 10G 0% /nsfs/legacy-namespace", "noobaa account create <user_account> --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid <gid_number> --uid <uid_number> --default_resource='legacy-namespace'", "noobaa account create leguser --full_permission --allow_bucket_create=true --new_buckets_path='/' --nsfs_only=true --nsfs_account_config=true --gid 0 --uid 1000660000 --default_resource='legacy-namespace'", "oc exec -it <pod_name> -- mkdir <mount_path> /nsfs", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- mkdir /mnt/pv/nsfs", "noobaa api bucket_api create_bucket '{ \"name\": \" <bucket_name> \", \"namespace\":{ \"write_resource\": { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \" <nsfs_namespacestore> \", \"path\": \"nsfs/\" }] } }'", "noobaa api bucket_api create_bucket '{ \"name\": \"legacy-bucket\", \"namespace\":{ \"write_resource\": { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }, \"read_resources\": [ { \"resource\": \"legacy-namespace\", \"path\": \"nsfs/\" }] } }'", "oc exec -it <noobaa_endpoint_pod_name> -n openshift-storage -- ls -ltraZ /nsfs/ <nsfs_namespacstore>", "oc exec -it noobaa-endpoint-5875f467f5-546c6 -n openshift-storage -- ls -ltraZ /nsfs/legacy-namespace total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c0,c26 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c0,c26 30 May 25 06:35 ..", "oc exec -it <pod_name> -- ls -latrZ <mount_path>", "oc exec -it cephfs-write-workload-generator-no-cache-1-cv892 -- ls -latrZ /mnt/pv/ total 567 drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c26,c5 2 May 25 06:35 . -rw-r--r--. 1 1000660000 root system_u:object_r:container_file_t:s0:c26,c5 580138 May 25 06:35 fs_write_cephfs-write-workload-generator-no-cache-1-cv892-data.log drwxrwxrwx. 
3 root root system_u:object_r:container_file_t:s0:c26,c5 30 May 25 06:35 ..", "noobaa bucket delete <bucket_name>", "noobaa bucket delete legacy-bucket", "noobaa account delete <user_account>", "noobaa account delete leguser", "noobaa namespacestore delete <nsfs_namespacestore>", "noobaa namespacestore delete legacy-namespace", "oc delete pv <cephfs_pv_name>", "oc delete pvc <cephfs_pvc_name>", "oc delete pv cephfs-pv-legacy-openshift-storage", "oc delete pvc cephfs-pvc-legacy", "oc get ns openshift-storage -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "oc edit ns <appplication_namespace>", "oc edit ns testnamespace", "oc get ns <application_namespace> -o yaml | grep sa.scc.mcs", "oc get ns testnamespace -o yaml | grep sa.scc.mcs openshift.io/sa.scc.mcs: s0:c26,c0", "cat << EOF >> scc.yaml allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:authenticated kind: SecurityContextConstraints metadata: annotations: name: restricted-pvselinux priority: null readOnlyRootFilesystem: false requiredDropCapabilities: - KILL - MKNOD - SETUID - SETGID runAsUser: type: MustRunAsRange seLinuxContext: seLinuxOptions: level: s0:c26,c0 type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret EOF", "oc create -f scc.yaml", "oc create serviceaccount <service_account_name>", "oc create serviceaccount testnamespacesa", "oc adm policy add-scc-to-user restricted-pvselinux -z <service_account_name>", "oc adm policy add-scc-to-user restricted-pvselinux -z testnamespacesa", "oc patch dc/ <pod_name> '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \" <service_account_name> \"}}}}'", "oc patch dc/cephfs-write-workload-generator-no-cache --patch '{\"spec\":{\"template\":{\"spec\":{\"serviceAccountName\": \"testnamespacesa\"}}}}'", "oc edit dc <pod_name> -n <application_namespace>", "spec: template: metadata: securityContext: seLinuxOptions: Level: <security_context_value>", "oc edit dc cephfs-write-workload-generator-no-cache -n testnamespace", "spec: template: metadata: securityContext: seLinuxOptions: level: s0:c26,c0", "oc get dc <pod_name> -n <application_namespace> -o yaml | grep -A 2 securityContext", "oc get dc cephfs-write-workload-generator-no-cache -n testnamespace -o yaml | grep -A 2 securityContext securityContext: seLinuxOptions: level: s0:c26,c0" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/managing-namespace-buckets_rhodf
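The commands listed above align an application namespace with the SELinux MCS label of openshift-storage so that both can share the same CephFS subvolume. The following is a minimal shell sketch, not part of the original procedure, that compares the two labels before you edit any annotations; testnamespace is the example application namespace used in the commands above, and the jsonpath expression is simply an alternative to the grep shown there.
#!/usr/bin/env bash
# Compare the SELinux MCS label assigned to openshift-storage with the label
# of an application namespace (defaults to the example "testnamespace").
set -euo pipefail

app_ns="${1:-testnamespace}"

mcs_of() {
  # openshift.io/sa.scc.mcs is the project annotation OpenShift uses to assign the SELinux level.
  oc get ns "$1" -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.mcs}'
}

storage_mcs="$(mcs_of openshift-storage)"
app_mcs="$(mcs_of "$app_ns")"

printf 'openshift-storage: %s\n%s: %s\n' "$storage_mcs" "$app_ns" "$app_mcs"

if [ "$storage_mcs" = "$app_mcs" ]; then
  echo "MCS labels already match; no namespace edit is needed."
else
  echo "MCS labels differ; follow the procedure above to align them."
fi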
B.90.2. RHSA-2011:0328 - Moderate: subversion security update
B.90.2. RHSA-2011:0328 - Moderate: subversion security update Updated subversion packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Subversion (SVN) is a concurrent version control system which enables one or more users to collaborate in developing and maintaining a hierarchy of files and directories while keeping a history of all changes. The mod_dav_svn module is used with the Apache HTTP Server to allow access to Subversion repositories via HTTP. CVE-2011-0715 A NULL pointer dereference flaw was found in the way the mod_dav_svn module processed certain requests to lock working copy paths in a repository. A remote attacker could issue a lock request that could cause the httpd process serving the request to crash. Red Hat would like to thank Hyrum Wright of the Apache Subversion project for reporting this issue. Upstream acknowledges Philip Martin, WANdisco, Inc. as the original reporter. All Subversion users should upgrade to these updated packages, which contain a backported patch to correct this issue. After installing the updated packages, you must restart the httpd daemon, if you are using mod_dav_svn, for the update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0328
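As a small illustrative sketch of the remediation step described in the advisory (not part of the advisory text itself), on a Red Hat Enterprise Linux 6 host that serves repositories through mod_dav_svn the update and restart could look like the following; the package names come from the advisory, and the service name assumes the stock Apache HTTP Server init script.
# Check the currently installed packages, apply the update, then restart httpd
# so that the patched mod_dav_svn module is loaded.
rpm -q subversion mod_dav_svn
yum update subversion mod_dav_svn
service httpd restart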
Chapter 1. Adding custom certificates
Chapter 1. Adding custom certificates Learn how to use a custom TLS certificate with Red Hat Advanced Cluster Security for Kubernetes. After you set up a certificate, users and API clients do not have to bypass the certificate security warnings when connecting to Central. 1.1. Adding a custom security certificate You can apply a security certificate during the installation or on an existing Red Hat Advanced Cluster Security for Kubernetes deployment. 1.1.1. Prerequisites for adding custom certificates Prerequisites You must already have PEM-encoded private key and certificate files. The certificate file should begin and end with human-readable blocks. For example: -----BEGIN CERTIFICATE----- MIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G ... l4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END CERTIFICATE----- The certificate file can contain either a single (leaf) certificate, or a certificate chain. Warning If the certificate is not directly signed by a trusted root, you must provide the full certificate chain, including any intermediate certificates. All certificates in the chain must be in order so that the leaf certificate is the first and the root certificate is the last in the chain. If you are using a custom certificate that is not globally trusted, you must also configure the Sensor to trust your custom certificate. 1.1.2. Adding a custom certificate during a new installation Procedure If you are installing Red Hat Advanced Cluster Security for Kubernetes using the Operator: Create a central-default-tls-cert secret that contains the appropriate TLS certificates in the namespace where the Central service will be installed by entering the following command: oc -n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem> If you are installing Red Hat Advanced Cluster Security for Kubernetes using Helm: Add your custom certificate and its key in the values-private.yaml file: central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G ... -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= ... -----END EC PRIVATE KEY----- Provide the configuration file during the installation: USD helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f values-private.yaml If you are installing Red Hat Advanced Cluster Security for Kubernetes using the roxctl CLI, provide the certificate and key files when you run the installer: For the non-interactive installer, use the --default-tls-cert and --default-tls-key options: USD roxctl central generate --default-tls-cert "cert.pem" --default-tls-key "key.pem" For the interactive installer, provide the certificate and key files when you enter answers for the prompts: ... Enter PEM cert bundle file (optional): <cert.pem> Enter PEM private key file (optional): <key.pem> Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): openshift ... 1.1.3. 
Adding a custom certificate for an existing instance Procedure If you have installed Red Hat Advanced Cluster Security for Kubernetes using the Operator: Create a central-default-tls-cert secret that contains the appropriate TLS certificates in the namespace where the Central service is installed by entering the following command: oc -n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem> If you have installed Red Hat Advanced Cluster Security for Kubernetes using Helm: Add your custom certificate and its key in the values-private.yaml file: central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G ... -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= ... -----END EC PRIVATE KEY----- Use the helm upgrade command and provide the updated configuration file: USD helm upgrade -n stackrox --create-namespace stackrox-central-services \ rhacs/central-services --reuse-values \ 1 -f values-private.yaml 1 You must use this parameter because the values-private.yaml file does not contain all of the required configuration values. If you have installed Red Hat Advanced Cluster Security for Kubernetes using the roxctl CLI: Create and apply a TLS secret from the PEM-encoded key and certificate files: USD oc -n stackrox create secret tls central-default-tls-cert \ --cert <server_cert.pem> \ --key <server_key.pem> \ --dry-run -o yaml | oc apply -f - After you run this command, Central automatically applies the new key and certificate without requiring the pod to be restarted. It might take up to a minute to propagate the changes. 1.1.4. Updating the custom certificate for an existing instance If you use a custom certificate for Central, you can update the certificate by performing the following procedure. Procedure Delete the existing custom certificate's secret: USD oc delete secret central-default-tls-cert Create a new secret: USD oc -n stackrox create secret tls central-default-tls-cert \ --cert <server_cert.pem> \ --key <server_key.pem> \ --dry-run -o yaml | oc apply -f - Restart the Central container. 1.1.4.1. Restarting the Central container You can restart the Central container by killing the Central container or by deleting the Central pod. Procedure Run the following command to kill the Central container: Note You must wait for at least 1 minute, until OpenShift Container Platform propagates your changes and restarts the Central container. USD oc -n stackrox exec deploy/central -c central -- kill 1 Or, run the following command to delete the Central pod: USD oc -n stackrox delete pod -lapp=central 1.2. Configuring Sensor to trust custom certificates If you are using a custom certificate that is not trusted globally, you must configure the Sensor to trust your custom certificate. Otherwise, you might get errors. The specific type of error may vary based on your setup and the certificate you use. Usually, it is an x509 validation related error. Note You do not need to configure the Sensor to trust your custom certificate if you are using a globally trusted certificate. 1.2.1. Downloading a Sensor bundle The Sensor bundle includes the necessary configuration files and scripts to install Sensor. You can download the Sensor bundle from the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Clusters . 
Click New Cluster and specify a name for the cluster. If you are deploying the Sensor in the same cluster, accept the default values for all the fields. Otherwise, if you are deploying into a different cluster, replace the address central.stackrox.svc:443 with a load balancer, node port, or other address (including the port number) that is accessible from the other cluster in which you are planning to install. Note If you are using a non-gRPC capable load balancer, such as HAProxy, AWS Application Load Balancer (ALB), or AWS Elastic Load Balancing (ELB) use the WebSocket Secure ( wss ) protocol. To use wss : Prefix the address with wss:// , and Add the port number after the address, for example, wss://stackrox-central.example.com:443 . Click to continue. Click Download YAML File and Keys . 1.2.2. Configuring Sensor to trust custom certificates when deploying a new Sensor Prerequisites You have downloaded the Sensor bundle. Procedure If you are using the sensor.sh script: Unzip the Sensor bundle: USD unzip -d sensor sensor-<cluster_name>.zip Run the sensor.sh script: USD ./sensor/sensor.sh The certificates are automatically applied when you run the sensor ( ./sensor/sensor.sh ) script. You can also place additional custom certificates in the sensor/additional-cas/ directory before you run the sensor.sh script. If you are not using the sensor.sh script: Unzip the Sensor bundle: USD unzip -d sensor sensor-<cluster_name>.zip Run the following command to create the secret: USD ./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1 1 Use the -d option to specify a directory containing custom certificates. Note If you get the "secret already exists" error message, re-run the script with the -u option: USD ./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u Continue Sensor deployment by using the YAML files. 1.2.3. Configuring an existing Sensor to trust custom certificates Prerequisites You have downloaded the Sensor bundle. Procedure Unzip the Sensor bundle: USD unzip -d sensor sensor-<cluster_name>.zip Run the following command to create the secret: USD ./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1 1 Use the -d option to specify a directory containing custom certificates. Note If you get the "secret already exists" error message, re-run the script with the -u option: USD ./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u Continue Sensor deployment by using the YAML files. If you added the certificates to an existing sensor, you must restart the Sensor container. 1.2.3.1. Restarting the Sensor container You can restart the Sensor container either by killing the container or by deleting the Sensor pod. Procedure Run the following command to kill the Sensor container: Note You must wait for at least 1 minute, until OpenShift Container Platform or Kubernetes propagates your changes and restarts the Sensor container. On OpenShift Container Platform: USD oc -n stackrox deploy/sensor -c sensor -- kill 1 On Kubernetes: USD kubectl -n stackrox deploy/sensor -c sensor -- kill 1 Or, run the following command to delete the Sensor pod: On OpenShift Container Platform: USD oc -n stackrox delete pod -lapp=sensor On Kubernetes: USD kubectl -n stackrox delete pod -lapp=sensor
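The procedures above assume you already have a PEM-encoded certificate and key pair. As a minimal sketch for a test environment only (not a production setup), you could generate a self-signed pair with openssl and load it into the central-default-tls-cert secret; the hostname central.example.com and the stackrox namespace are assumptions based on the examples in this chapter.
# Generate a throwaway self-signed certificate and key for testing.
# The -addext option requires OpenSSL 1.1.1 or later.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout tls-key.pem -out tls-cert.pem \
  -subj "/CN=central.example.com" \
  -addext "subjectAltName=DNS:central.example.com"

# Create or update the secret that Central reads its default TLS material from.
# Newer oc releases expect --dry-run=client; older releases accept the plain
# --dry-run form shown elsewhere in this chapter.
oc -n stackrox create secret tls central-default-tls-cert \
  --cert tls-cert.pem --key tls-key.pem \
  --dry-run=client -o yaml | oc apply -f -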
[ "-----BEGIN CERTIFICATE----- MIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G l4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END CERTIFICATE-----", "-n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem>", "central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END EC PRIVATE KEY-----", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f values-private.yaml", "roxctl central generate --default-tls-cert \"cert.pem\" --default-tls-key \"key.pem\"", "Enter PEM cert bundle file (optional): <cert.pem> Enter PEM private key file (optional): <key.pem> Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): openshift", "-n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem>", "central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END EC PRIVATE KEY-----", "helm upgrade -n stackrox --create-namespace stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f values-private.yaml", "oc -n stackrox create secret tls central-default-tls-cert --cert <server_cert.pem> --key <server_key.pem> --dry-run -o yaml | oc apply -f -", "oc delete secret central-default-tls-cert", "oc -n stackrox create secret tls central-default-tls-cert --cert <server_cert.pem> --key <server_key.pem> --dry-run -o yaml | oc apply -f -", "oc -n stackrox exec deploy/central -c central -- kill 1", "oc -n stackrox delete pod -lapp=central", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u", "oc -n stackrox deploy/sensor -c sensor -- kill 1", "kubectl -n stackrox deploy/sensor -c sensor -- kill 1", "oc -n stackrox delete pod -lapp=sensor", "kubectl -n stackrox delete pod -lapp=sensor" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/configuring/add-custom-cert
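After installing or rotating the certificate and restarting the Central container as described above, a quick check that the expected certificate is actually being served can save debugging time. This is an optional sketch; central.example.com:443 is a placeholder for your Central route or load balancer address.
# Print the subject, issuer, and expiry date of the certificate Central currently serves.
echo | openssl s_client -connect central.example.com:443 \
  -servername central.example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -enddate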
Chapter 34. JSON Tapset
Chapter 34. JSON Tapset This family of probe points, functions, and macros is used to output data in JSON format. It contains the following probe points, functions, and macros:
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/json-dot-stp
Chapter 8. Authorization services
Chapter 8. Authorization services Red Hat build of Keycloak Authorization Services are built on top of well-known standards such as the OAuth2 and User-Managed Access specifications. OAuth2 clients (such as front end applications) can obtain access tokens from the server using the token endpoint and use these same tokens to access resources protected by a resource server (such as back end services). In the same way, Red Hat build of Keycloak Authorization Services provide extensions to OAuth2 to allow access tokens to be issued based on the processing of all policies associated with the resource(s) or scope(s) being requested. This means that resource servers can enforce access to their protected resources based on the permissions granted by the server and held by an access token. In Red Hat build of Keycloak Authorization Services, the access token with permissions is called a Requesting Party Token or RPT for short. In addition to the issuance of RPTs, Red Hat build of Keycloak Authorization Services also provides a set of RESTful endpoints that allow resource servers to manage their protected resources, scopes, permissions, and policies, helping developers to extend or integrate these capabilities into their applications in order to support fine-grained authorization. 8.1. Discovering authorization services endpoints and metadata Red Hat build of Keycloak provides a discovery document from which clients can obtain all necessary information to interact with Red Hat build of Keycloak Authorization Services, including endpoint locations and capabilities. The discovery document can be obtained from: curl -X GET \ http://USD{host}:USD{port}/realms/USD{realm}/.well-known/uma2-configuration Where USD{host}:USD{port} is the hostname (or IP address) and port where Red Hat build of Keycloak is running, and USD{realm} is the name of a realm in Red Hat build of Keycloak. As a result, you should get a response as follows: { // some claims are expected here // these are the main claims in the discovery document about Authorization Services endpoints location "token_endpoint": "http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token", "token_introspection_endpoint": "http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token/introspect", "resource_registration_endpoint": "http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/resource_set", "permission_endpoint": "http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/permission", "policy_endpoint": "http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy" } Each of these endpoints exposes a specific set of capabilities: token_endpoint An OAuth2-compliant Token Endpoint that supports the urn:ietf:params:oauth:grant-type:uma-ticket grant type. Through this endpoint clients can send authorization requests and obtain an RPT with all permissions granted by Red Hat build of Keycloak. token_introspection_endpoint An OAuth2-compliant Token Introspection Endpoint which clients can use to query the server to determine the active state of an RPT and to determine any other information associated with the token, such as the permissions granted by Red Hat build of Keycloak. resource_registration_endpoint A UMA-compliant Resource Registration Endpoint which resource servers can use to manage their protected resources and scopes. This endpoint provides operations to create, read, update, and delete resources and scopes in Red Hat build of Keycloak.
permission_endpoint A UMA-compliant Permission Endpoint which resource servers can use to manage permission tickets. This endpoint provides operations create, read, update, and delete permission tickets in Red Hat build of Keycloak. 8.2. Obtaining permissions To obtain permissions from Red Hat build of Keycloak you send an authorization request to the token endpoint. As a result, Red Hat build of Keycloak will evaluate all policies associated with the resource(s) and scope(s) being requested and issue an RPT with all permissions granted by the server. Clients are allowed to send authorization requests to the token endpoint using the following parameters: grant_type This parameter is required . Must be urn:ietf:params:oauth:grant-type:uma-ticket . ticket This parameter is optional . The most recent permission ticket received by the client as part of the UMA authorization process. claim_token This parameter is optional . A string representing additional claims that should be considered by the server when evaluating permissions for the resource(s) and scope(s) being requested. This parameter allows clients to push claims to Red Hat build of Keycloak. For more details about all supported token formats see claim_token_format parameter. claim_token_format This parameter is optional . A string indicating the format of the token specified in the claim_token parameter. Red Hat build of Keycloak supports two token formats: urn:ietf:params:oauth:token-type:jwt and https://openid.net/specs/openid-connect-core-1_0.html#IDToken . The urn:ietf:params:oauth:token-type:jwt format indicates that the claim_token parameter references an access token. The https://openid.net/specs/openid-connect-core-1_0.html#IDToken indicates that the claim_token parameter references an OpenID Connect ID Token. rpt This parameter is optional . A previously issued RPT which permissions should also be evaluated and added in a new one. This parameter allows clients in possession of an RPT to perform incremental authorization where permissions are added on demand. permission This parameter is optional . A string representing a set of one or more resources and scopes the client is seeking access. This parameter can be defined multiple times in order to request permission for multiple resource and scopes. This parameter is an extension to urn:ietf:params:oauth:grant-type:uma-ticket grant type in order to allow clients to send authorization requests without a permission ticket. The format of the string must be: RESOURCE_ID#SCOPE_ID . For instance: Resource A#Scope A , Resource A#Scope A, Scope B, Scope C , Resource A , #Scope A . permission_resource_format This parameter is optional . A string representing a format indicating the resource in the permission parameter. Possible values are id and uri . id indicates the format is RESOURCE_ID . uri indicates the format is URI . If not specified, the default is id . permission_resource_matching_uri This parameter is optional . A boolean value that indicates whether to use path matching when representing resources in URI format in the permission parameter. If not specified, the default is false. audience This parameter is optional . The client identifier of the resource server to which the client is seeking access. This parameter is mandatory in case the permission parameter is defined. It serves as a hint to Red Hat build of Keycloak to indicate the context in which permissions should be evaluated. response_include_resource_name This parameter is optional . 
A boolean value indicating to the server whether resource names should be included in the RPT's permissions. If false, only the resource identifier is included. response_permissions_limit This parameter is optional . An integer N that defines a limit for the amount of permissions an RPT can have. When used together with rpt parameter, only the last N requested permissions will be kept in the RPT. submit_request This parameter is optional . A boolean value indicating whether the server should create permission requests to the resources and scopes referenced by a permission ticket. This parameter only has effect if used together with the ticket parameter as part of a UMA authorization process. response_mode This parameter is optional . A string value indicating how the server should respond to authorization requests. This parameter is specially useful when you are mainly interested in either the overall decision or the permissions granted by the server, instead of a standard OAuth2 response. Possible values are: decision Indicates that responses from the server should only represent the overall decision by returning a JSON with the following format: { 'result': true } If the authorization request does not map to any permission, a 403 HTTP status code is returned instead. permissions Indicates that responses from the server should contain any permission granted by the server by returning a JSON with the following format: [ { 'rsid': 'My Resource' 'scopes': ['view', 'update'] }, ... ] If the authorization request does not map to any permission, a 403 HTTP status code is returned instead. Example of an authorization request when a client is seeking access to two resources protected by a resource server. curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "audience={resource_server_client_id}" \ --data "permission=Resource A#Scope A" \ --data "permission=Resource B#Scope B" Example of an authorization request when a client is seeking access to any resource and scope protected by a resource server. NOTE: This will not evaluate the permissions for all resources. Instead, the permissions for resources owned by the resource server, owned by the requesting user, and explicitly granted to the requesting user by other owners are evaluated. curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "audience={resource_server_client_id}" Example of an authorization request when a client is seeking access to a UMA protected resource after receiving a permission ticket from the resource server as part of the authorization process: curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "ticket=USD{permission_ticket} If Red Hat build of Keycloak assessment process results in issuance of permissions, it issues the RPT with which it has associated the permissions: Red Hat build of Keycloak responds to the client with the RPT HTTP/1.1 200 OK Content-Type: application/json ... { "access_token": "USD{rpt}", } The response from the server is just like any other response from the token endpoint when using some other grant type. 
The RPT can be obtained from the access_token response parameter. If the client is not authorized, Red Hat build of Keycloak responds with a 403 HTTP status code: Red Hat build of Keycloak denies the authorization request HTTP/1.1 403 Forbidden Content-Type: application/json ... { "error": "access_denied", "error_description": "request_denied" } 8.2.1. Client authentication methods Clients need to authenticate to the token endpoint in order to obtain an RPT. When using the urn:ietf:params:oauth:grant-type:uma-ticket grant type, clients can use any of these authentication methods: Bearer Token Clients should send an access token as a Bearer credential in an HTTP Authorization header to the token endpoint. Example: an authorization request using an access token to authenticate to the token endpoint curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" This method is especially useful when the client is acting on behalf of a user. In this case, the bearer token is an access token previously issued by Red Hat build of Keycloak to some client acting on behalf of a user (or on behalf of itself). Permissions will be evaluated considering the access context represented by the access token. For instance, if the access token was issued to Client A acting on behalf of User A, permissions will be granted depending on the resources and scopes to which User A has access. Client Credentials Clients can use any of the client authentication methods supported by Red Hat build of Keycloak. For instance, client_id/client_secret or JWT. Example: an authorization request using client id and client secret to authenticate to the token endpoint curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Basic cGhvdGg6L7Jl13RmfWgtkk==pOnNlY3JldA==" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" 8.2.2. Pushing claims When obtaining permissions from the server you can push arbitrary claims in order to have these claims available to your policies when evaluating permissions. If you are obtaining permissions from the server without using a permission ticket (UMA flow), you can send an authorization request to the token endpoint as follows: curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "claim_token=ewogICAib3JnYW5pemF0aW9uIjogWyJhY21lIl0KfQ==" \ --data "claim_token_format=urn:ietf:params:oauth:token-type:jwt" \ --data "client_id={resource_server_client_id}" \ --data "client_secret={resource_server_client_secret}" \ --data "audience={resource_server_client_id}" The claim_token parameter expects a BASE64 encoded JSON with a format similar to the example below: { "organization" : ["acme"] } The format expects one or more claims where the value for each claim must be an array of strings. 8.2.2.1. Pushing claims Using UMA For more details about how to push claims when using UMA and permission tickets, please take a look at Permission API 8.3. User-managed access Red Hat build of Keycloak Authorization Services is based on User-Managed Access or UMA for short. UMA is a specification that enhances OAuth2 capabilities in the following ways: Privacy Nowadays, user privacy is becoming a huge concern, as more and more data and devices are available and connected to the cloud. 
With UMA and Red Hat build of Keycloak, resource servers can enhance their capabilities in order to improve how their resources are protected in respect to user privacy where permissions are granted based on policies defined by the user. Party-to-Party Authorization Resource owners (e.g.: regular end-users) can manage access to their resources and authorize other parties (e.g: regular end-users) to access these resources. This is different than OAuth2 where consent is given to a client application acting on behalf of a user, with UMA resource owners are allowed to consent access to other users, in a completely asynchronous manner. Resource Sharing Resource owners are allowed to manage permissions to their resources and decide who can access a particular resource and how. Red Hat build of Keycloak can then act as a sharing management service from which resource owners can manage their resources. Red Hat build of Keycloak is a UMA 2.0 compliant authorization server that provides most UMA capabilities. As an example, consider a user Alice (resource owner) using an Internet Banking Service (resource server) to manage her Bank Account (resource). One day, Alice decides to open her bank account to Bob (requesting party), an accounting professional. However, Bob should only have access to view (scope) Alice's account. As a resource server, the Internet Banking Service must be able to protect Alice's Bank Account. For that, it relies on Red Hat build of Keycloak Resource Registration Endpoint to create a resource in the server representing Alice's Bank Account. At this moment, if Bob tries to access Alice's Bank Account, access will be denied. The Internet Banking Service defines a few default policies for banking accounts. One of them is that only the owner, in this case Alice, is allowed to access her bank account. However, Internet Banking Service in respect to Alice's privacy also allows her to change specific policies for the banking account. One of these policies that she can change is to define which people are allowed to view her bank account. For that, Internet Banking Service relies on Red Hat build of Keycloak to provide to Alice a space where she can select individuals and the operations (or data) they are allowed to access. At any time, Alice can revoke access or grant additional permissions to Bob. 8.3.1. Authorization process In UMA, the authorization process starts when a client tries to access a UMA protected resource server. A UMA protected resource server expects a bearer token in the request where the token is an RPT. When a client requests a resource at the resource server without an RPT: Client requests a protected resource without sending an RPT curl -X GET \ http://USD{host}:USD{port}/my-resource-server/resource/1bfdfe78-a4e1-4c2d-b142-fc92b75b986f The resource server sends a response back to the client with a permission ticket and a as_uri parameter with the location of a Red Hat build of Keycloak server to where the ticket should be sent in order to obtain an RPT. Resource server responds with a permission ticket HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm="USD{realm}", as_uri="https://USD{host}:USD{port}/realms/USD{realm}", ticket="016f84e8-f9b9-11e0-bd6f-0021cc6004de" The permission ticket is a special type of token issued by Red Hat build of Keycloak Permission API. They represent the permissions being requested (e.g.: resources and scopes) as well any other information associated with the request. Only resource servers are allowed to create those tokens. 
Now that the client has a permission ticket and also the location of a Red Hat build of Keycloak server, the client can use the discovery document to obtain the location of the token endpoint and send an authorization request. Client sends an authorization request to the token endpoint to obtain an RPT curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "ticket=USD{permission_ticket} If Red Hat build of Keycloak assessment process results in issuance of permissions, it issues the RPT with which it has associated the permissions: Red Hat build of Keycloak responds to the client with the RPT HTTP/1.1 200 OK Content-Type: application/json ... { "access_token": "USD{rpt}", } The response from the server is just like any other response from the token endpoint when using some other grant type. The RPT can be obtained from the access_token response parameter. In case the client is not authorized to have permissions Red Hat build of Keycloak responds with a 403 HTTP status code: Red Hat build of Keycloak denies the authorization request HTTP/1.1 403 Forbidden Content-Type: application/json ... { "error": "access_denied", "error_description": "request_denied" } 8.3.2. Submitting permission requests As part of the authorization process, clients need first to obtain a permission ticket from a UMA protected resource server in order to exchange it with an RPT at the Red Hat build of Keycloak Token Endpoint. By default, Red Hat build of Keycloak responds with a 403 HTTP status code and a request_denied error in case the client can not be issued with an RPT. Red Hat build of Keycloak denies the authorization request HTTP/1.1 403 Forbidden Content-Type: application/json ... { "error": "access_denied", "error_description": "request_denied" } Such response implies that Red Hat build of Keycloak could not issue an RPT with the permissions represented by a permission ticket. In some situations, client applications may want to start an asynchronous authorization flow and let the owner of the resources being requested decide whether or not access should be granted. For that, clients can use the submit_request request parameter along with an authorization request to the token endpoint: curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token \ -H "Authorization: Bearer USD{access_token}" \ --data "grant_type=urn:ietf:params:oauth:grant-type:uma-ticket" \ --data "ticket=USD{permission_ticket} \ --data "submit_request=true" When using the submit_request parameter, Red Hat build of Keycloak will persist a permission request for each resource to which access was denied. Once created, resource owners can check their account and manage their permissions requests. You can think about this functionality as a Request Access button in your application, where users can ask other users for access to their resources. 8.3.3. Managing access to users resources Users can manage access to their resources using the Red Hat build of Keycloak Account Console. To enable this functionality, you must first enable User-Managed Access for your realm. Procedure Log into the Admin Console. Click Realm Settings in the menu. Toggle User-Managed Access to ON . Click the user name at the top right of the Admin Console and select Manage Account . Click My Resources in the menu option. A page displays with the following options. 
Manage My resources This section contains a list of all resources owned by the user. Users can click on a resource for more details and share the resource with others. When there are permission requests awaiting approval, an icon appears next to the name of the resource. These requests are connected to the parties (users) requesting access to a particular resource. Users are allowed to approve or deny these requests by clicking the icon. Manage Resources shared with me This section contains a list of all resources shared with the user. Manage People with access to this resource This section contains a list of people with access to this resource. Users are allowed to revoke access by clicking on the Revoke button or by removing a specific Permission . Share the resource with others By typing the username or e-mail of another user, the user is able to share the resource and select the permissions they want to grant. 8.4. Protection API The Protection API provides a UMA-compliant set of endpoints that cover: Resource Management With this endpoint, resource servers can manage their resources remotely and enable policy enforcers to query the server for the resources that need protection. Permission Management In the UMA protocol, resource servers access this endpoint to create permission tickets. Red Hat build of Keycloak also provides endpoints to manage the state of permissions and query permissions. Policy API Red Hat build of Keycloak leverages the UMA Protection API to allow resource servers to manage permissions for their users. In addition to the Resource and Permission APIs, Red Hat build of Keycloak provides a Policy API from where permissions can be set to resources by resource servers on behalf of their users. An important requirement for this API is that only resource servers are allowed to access its endpoints using a special OAuth2 access token called a protection API token (PAT). In UMA, a PAT is a token with the scope uma_protection . 8.4.1. What is a PAT and how to obtain it A protection API token (PAT) is a special OAuth2 access token with a scope defined as uma_protection . When you create a resource server, Red Hat build of Keycloak automatically creates a role, uma_protection , for the corresponding client application and associates it with the client's service account. Service Account granted with uma_protection role Resource servers can obtain a PAT from Red Hat build of Keycloak like any other OAuth2 access token. For example, using curl: curl -X POST \ -H "Content-Type: application/x-www-form-urlencoded" \ -d 'grant_type=client_credentials&client_id=USD{client_id}&client_secret=USD{client_secret}' \ "http://localhost:8080/realms/USD{realm_name}/protocol/openid-connect/token" The example above uses the client_credentials grant type to obtain a PAT from the server. As a result, the server returns a response similar to the following: { "access_token": USD{PAT}, "expires_in": 300, "refresh_expires_in": 1800, "refresh_token": USD{refresh_token}, "token_type": "bearer", "id_token": USD{id_token}, "not-before-policy": 0, "session_state": "ccea4a55-9aec-4024-b11c-44f6f168439e" } Note Red Hat build of Keycloak can authenticate your client application in different ways. For simplicity, the client_credentials grant type is used here, which requires a client_id and a client_secret . You can choose to use any supported authentication method. 8.4.2. Managing resources Resource servers can manage their resources remotely using a UMA-compliant endpoint.
This endpoint provides operations outlined as follows (entire path omitted for clarity): Create resource set description: POST /resource_set Read resource set description: GET /resource_set/{_id} Update resource set description: PUT /resource_set/{_id} Delete resource set description: DELETE /resource_set/{_id} List resource set descriptions: GET /resource_set For more information about the contract for each of these operations, see UMA Resource Registration API . 8.4.2.1. Creating a resource To create a resource you must send an HTTP POST request as follows: curl -v -X POST \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "name":"Tweedl Social Service", "type":"http://www.example.com/rsrcs/socialstream/140-compatible", "icon_uri":"http://www.example.com/icons/sharesocial.png", "resource_scopes":[ "read-public", "post-updates", "read-private", "http://www.example.com/scopes/all" ] }' By default, the owner of a resource is the resource server. If you want to define a different owner, such as a specific user, you can send a request as follows: curl -v -X POST \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "name":"Alice Resource", "owner": "alice" }' Where the property owner can be set with the username or the identifier of the user. 8.4.2.2. Creating user-managed resources By default, resources created via Protection API can not be managed by resource owners through the Account Console . To create resources and allow resource owners to manage these resources, you must set ownerManagedAccess property as follows: curl -v -X POST \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "name":"Alice Resource", "owner": "alice", "ownerManagedAccess": true }' 8.4.2.3. Updating resources To update an existing resource, send an HTTP PUT request as follows: curl -v -X PUT \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '{ "_id": "Alice Resource", "name":"Alice Resource", "resource_scopes": [ "read" ] }' 8.4.2.4. Deleting resources To delete an existing resource, send an HTTP DELETE request as follows: curl -v -X DELETE \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} \ -H 'Authorization: Bearer 'USDpat 8.4.2.5. Querying resources To query the resources by id , send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} To query resources given a name , send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?name=Alice Resource By default, the name filter will match any resource with the given pattern. 
To restrict the query to only return resources with an exact match, use: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?name=Alice Resource&exactName=true To query resources given a URI , send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?uri=/api/alice To query resources given an owner , send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?owner=alice To query resources given a type , send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?type=albums To query resources given a scope , send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?scope=read When querying the server for resources, use the first and max parameters to limit the results. 8.4.3. Managing permission requests Resource servers using the UMA protocol can use a specific endpoint to manage permission requests. This endpoint provides a UMA-compliant flow for registering permission requests and obtaining a permission ticket. A permission ticket is a special security token type representing a permission request. Per the UMA specification, a permission ticket is: A correlation handle that is conveyed from an authorization server to a resource server, from a resource server to a client, and ultimately from a client back to an authorization server, to enable the authorization server to assess the correct policies to apply to a request for authorization data. In most cases, you won't need to deal with this endpoint directly. Red Hat build of Keycloak provides a policy enforcer that enables UMA for your resource server so it can obtain a permission ticket from the authorization server, return this ticket to the client application, and enforce authorization decisions based on a final requesting party token (RPT). The process of obtaining permission tickets from Red Hat build of Keycloak is performed by resource servers and not regular client applications, where permission tickets are obtained when a client tries to access a protected resource without the necessary grants to access the resource. The issuance of permission tickets is an important aspect when using UMA as it allows resource servers to: Abstract from clients the data associated with the resources protected by the resource server Register authorization requests in Red Hat build of Keycloak, which in turn can be used later in workflows to grant access based on the resource's owner consent Decouple resource servers from authorization servers and allow them to protect and manage their resources using different authorization servers Client-wise, a permission ticket also has important aspects that are worth highlighting: Clients don't need to know about how authorization data is associated with protected resources. A permission ticket is completely opaque to clients. Clients can have access to resources on different resource servers protected by different authorization servers These are just some of the benefits brought by UMA; other aspects of UMA are strongly based on permission tickets, especially regarding privacy and user-controlled access to resources. 8.4.3.1.
Creating permission ticket To create a permission ticket, send an HTTP POST request as follows: curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '[ { "resource_id": "{resource_id}", "resource_scopes": [ "view" ] } ]' When creating tickets you can also push arbitrary claims and associate these claims with the ticket: curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission \ -H 'Authorization: Bearer 'USDpat \ -H 'Content-Type: application/json' \ -d '[ { "resource_id": "{resource_id}", "resource_scopes": [ "view" ], "claims": { "organization": ["acme"] } } ]' Where these claims will be available to your policies when evaluating permissions for the resource and scope(s) associated with the permission ticket. 8.4.3.2. Other non UMA-compliant endpoints 8.4.3.2.1. Creating permission ticket To grant permissions for a specific resource with id {resource_id} to a user with id {user_id}, as an owner of the resource send an HTTP POST request as follows: curl -X POST \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Content-Type: application/json' \ -d '{ "resource": "{resource_id}", "requester": "{user_id}", "granted": true, "scopeName": "view" }' 8.4.3.2.2. Getting permission tickets curl http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket \ -H 'Authorization: Bearer 'USDaccess_token You can use any of these query parameters: scopeId resourceId owner requester granted returnNames first max 8.4.3.2.3. Updating permission ticket curl -X PUT \ http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Content-Type: application/json' \ -d '{ "id": "{ticket_id}" "resource": "{resource_id}", "requester": "{user_id}", "granted": false, "scopeName": "view" }' 8.4.3.2.4. Deleting permission ticket curl -X DELETE http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket/{ticket_id} \ -H 'Authorization: Bearer 'USDaccess_token 8.4.4. Managing resource permissions using the Policy API Red Hat build of Keycloak leverages the UMA Protection API to allow resource servers to manage permissions for their users. In addition to the Resource and Permission APIs, Red Hat build of Keycloak provides a Policy API from where permissions can be set to resources by resource servers on behalf of their users. The Policy API is available at: This API is protected by a bearer token that must represent a consent granted by the user to the resource server to manage permissions on his behalf. The bearer token can be a regular access token obtained from the token endpoint using: Resource Owner Password Credentials Grant Type Token Exchange, in order to exchange an access token granted to some client (public client) for a token where audience is the resource server 8.4.4.1. 
Associating a permission with a resource To associate a permission with a specific resource you must send a HTTP POST request as follows: curl -X POST \ http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "roles": ["people-manager"] }' In the example above we are creating and associating a new permission to a resource represented by resource_id where any user with a role people-manager should be granted with the read scope. You can also create policies using other access control mechanisms, such as using groups: curl -X POST \ http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "groups": ["/Managers/People Managers"] }' Or a specific client: curl -X POST \ http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "clients": ["my-client"] }' Or even using a custom policy using JavaScript: Note Upload Scripts is Deprecated and will be removed in future releases. This feature is disabled by default. To enable start the server with -Dkeycloak.profile.feature.upload_scripts=enabled . For more details see the Enabling and disabling features chapter. curl -X POST \ http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Cache-Control: no-cache' \ -H 'Content-Type: application/json' \ -d '{ "name": "Any people manager", "description": "Allow access to any people manager", "scopes": ["read"], "condition": "my-deployed-script.js" }' It is also possible to set any combination of these access control mechanisms. To update an existing permission, send an HTTP PUT request as follows: curl -X PUT \ http://localhost:8180/realms/photoz/authz/protection/uma-policy/{permission_id} \ -H 'Authorization: Bearer 'USDaccess_token \ -H 'Content-Type: application/json' \ -d '{ "id": "21eb3fed-02d7-4b5a-9102-29f3f09b6de2", "name": "Any people manager", "description": "Allow access to any people manager", "type": "uma", "scopes": [ "album:view" ], "logic": "POSITIVE", "decisionStrategy": "UNANIMOUS", "owner": "7e22131a-aa57-4f5f-b1db-6e82babcd322", "roles": [ "user" ] }' 8.4.4.2. Removing a permission To remove a permission associated with a resource, send an HTTP DELETE request as follows: curl -X DELETE \ http://localhost:8180/realms/photoz/authz/protection/uma-policy/{permission_id} \ -H 'Authorization: Bearer 'USDaccess_token 8.4.4.3. 
Querying permission To query the permissions associated with a resource, send an HTTP GET request as follows: To query the permissions given its name, send an HTTP GET request as follows: http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy?name=Any people manager To query the permissions associated with a specific scope, send an HTTP GET request as follows: To query all permissions, send an HTTP GET request as follows: When querying the server for permissions use parameters first and max results to limit the result. 8.5. Requesting party token A requesting party token (RPT) is a JSON web token (JWT) digitally signed using JSON web signature (JWS) . The token is built based on the OAuth2 access token previously issued by Red Hat build of Keycloak to a specific client acting on behalf of a user or on its own behalf. When you decode an RPT, you see a payload similar to the following: { "authorization": { "permissions": [ { "resource_set_id": "d2fe9843-6462-4bfc-baba-b5787bb6e0e7", "resource_set_name": "Hello World Resource" } ] }, "jti": "d6109a09-78fd-4998-bf89-95730dfd0892-1464906679405", "exp": 1464906971, "nbf": 0, "iat": 1464906671, "sub": "f1888f4d-5172-4359-be0c-af338505d86c", "typ": "kc_ett", "azp": "hello-world-authz-service" } From this token you can obtain all permissions granted by the server from the permissions claim. Also note that permissions are directly related with the resources/scopes you are protecting and completely decoupled from the access control methods that were used to actually grant and issue these same permissions. 8.5.1. Introspecting a requesting party token Sometimes you might want to introspect a requesting party token (RPT) to check its validity or obtain the permissions within the token to enforce authorization decisions on the resource server side. There are two main use cases where token introspection can help you: When client applications need to query the token validity to obtain a new one with the same or additional permissions When enforcing authorization decisions at the resource server side, especially when none of the built-in policy enforcers fits your application 8.5.2. Obtaining Information about an RPT The token introspection is essentially a OAuth2 token introspection -compliant endpoint from which you can obtain information about an RPT. To introspect an RPT using this endpoint, you can send a request to the server as follows: curl -X POST \ -H "Authorization: Basic aGVsbG8td29ybGQtYXV0aHotc2VydmljZTpzZWNyZXQ=" \ -H "Content-Type: application/x-www-form-urlencoded" \ -d 'token_type_hint=requesting_party_token&token=USD{RPT}' \ "http://localhost:8080/realms/hello-world-authz/protocol/openid-connect/token/introspect" Note The request above is using HTTP BASIC and passing the client's credentials (client ID and secret) to authenticate the client attempting to introspect the token, but you can use any other client authentication method supported by Red Hat build of Keycloak. The introspection endpoint expects two parameters: token_type_hint Use requesting_party_token as the value for this parameter, which indicates that you want to introspect an RPT. token Use the token string as it was returned by the server during the authorization process as the value for this parameter. 
As a result, the server response is: { "permissions": [ { "resource_id": "90ccc6fc-b296-4cd1-881e-089e1ee15957", "resource_name": "Hello World Resource" } ], "exp": 1465314139, "nbf": 0, "iat": 1465313839, "aud": "hello-world-authz-service", "active": true } If the RPT is not active, this response is returned instead: { "active": false } 8.5.3. Do I need to invoke the server every time I want to introspect an RPT? No. Just like a regular access token issued by a Red Hat build of Keycloak server, RPTs also use the JSON web token (JWT) specification as the default format. If you want to validate these tokens without a call to the remote introspection endpoint, you can decode the RPT and query for its validity locally. Once you decode the token, you can also use the permissions within the token to enforce authorization decisions. This is essentially what the policy enforcers do. Be sure to: Validate the signature of the RPT (based on the realm's public key) Query for token validity based on its exp , iat , and aud claims Additional resources JSON web token (JWT) policy enforcers 8.6. Authorization client java API Depending on your requirements, a resource server should be able to manage resources remotely or even check for permissions programmatically. If you are using Java, you can access the Red Hat build of Keycloak Authorization Services using the Authorization Client API. It is targeted for resource servers that want to access the different endpoints provided by the server such as the Token Endpoint, Resource, and Permission management endpoints. 8.6.1. Maven dependency <dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>USD{KEYCLOAK_VERSION}</version> </dependency> </dependencies> 8.6.2. Configuration The client configuration is defined in a keycloak.json file as follows: { "realm": "hello-world-authz", "auth-server-url" : "http://localhost:8080", "resource" : "hello-world-authz-service", "credentials": { "secret": "secret" } } realm (required) The name of the realm. auth-server-url (required) The base URL of the Red Hat build of Keycloak server. All other Red Hat build of Keycloak pages and REST service endpoints are derived from this. It is usually in the form https://host:port . resource (required) The client-id of the application. Each application has a client-id that is used to identify the application. credentials (required) Specifies the credentials of the application. This is an object notation where the key is the credential type and the value is the value of the credential type. The configuration file is usually located in your application's classpath, the default location from where the client is going to try to find a keycloak.json file. 8.6.3. Creating the authorization client Considering you have a keycloak.json file in your classpath, you can create a new AuthzClient instance as follows: // create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create(); 8.6.4. 
Obtaining user entitlements Here is an example illustrating how to obtain user entitlements: // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(request); String rpt = response.getToken(); System.out.println("You got an RPT: " + rpt); // now you can use the RPT to access protected resources on the resource server Here is an example illustrating how to obtain user entitlements for a set of one or more resources: // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission("Default Resource"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(request); String rpt = response.getToken(); System.out.println("You got an RPT: " + rpt); // now you can use the RPT to access protected resources on the resource server 8.6.5. Creating a resource using the protection API // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName("New Resource"); newResource.setType("urn:hello-world-authz:resources:example"); newResource.addScope(new ScopeRepresentation("urn:hello-world-authz:scopes:view")); ProtectedResource resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource); 8.6.6. Introspecting an RPT // create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization("alice", "alice").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println("Token status is: " + requestingPartyToken.getActive()); System.out.println("Permissions granted by the server: "); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); }
[ "curl -X GET http://USD{host}:USD{port}/realms/USD{realm}/.well-known/uma2-configuration", "{ // some claims are expected here // these are the main claims in the discovery document about Authorization Services endpoints location \"token_endpoint\": \"http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token\", \"token_introspection_endpoint\": \"http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token/introspect\", \"resource_registration_endpoint\": \"http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/resource_set\", \"permission_endpoint\": \"http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/permission\", \"policy_endpoint\": \"http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy\" }", "{ 'result': true }", "[ { 'rsid': 'My Resource' 'scopes': ['view', 'update'] }, ]", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"audience={resource_server_client_id}\" --data \"permission=Resource A#Scope A\" --data \"permission=Resource B#Scope B\"", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"audience={resource_server_client_id}\"", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"ticket=USD{permission_ticket}", "HTTP/1.1 200 OK Content-Type: application/json { \"access_token\": \"USD{rpt}\", }", "HTTP/1.1 403 Forbidden Content-Type: application/json { \"error\": \"access_denied\", \"error_description\": \"request_denied\" }", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\"", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Basic cGhvdGg6L7Jl13RmfWgtkk==pOnNlY3JldA==\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\"", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"claim_token=ewogICAib3JnYW5pemF0aW9uIjogWyJhY21lIl0KfQ==\" --data \"claim_token_format=urn:ietf:params:oauth:token-type:jwt\" --data \"client_id={resource_server_client_id}\" --data \"client_secret={resource_server_client_secret}\" --data \"audience={resource_server_client_id}\"", "{ \"organization\" : [\"acme\"] }", "curl -X GET http://USD{host}:USD{port}/my-resource-server/resource/1bfdfe78-a4e1-4c2d-b142-fc92b75b986f", "HTTP/1.1 401 Unauthorized WWW-Authenticate: UMA realm=\"USD{realm}\", as_uri=\"https://USD{host}:USD{port}/realms/USD{realm}\", ticket=\"016f84e8-f9b9-11e0-bd6f-0021cc6004de\"", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"ticket=USD{permission_ticket}", "HTTP/1.1 200 OK Content-Type: application/json { \"access_token\": \"USD{rpt}\", }", "HTTP/1.1 403 Forbidden Content-Type: application/json { \"error\": \"access_denied\", \"error_description\": 
\"request_denied\" }", "HTTP/1.1 403 Forbidden Content-Type: application/json { \"error\": \"access_denied\", \"error_description\": \"request_denied\" }", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm}/protocol/openid-connect/token -H \"Authorization: Bearer USD{access_token}\" --data \"grant_type=urn:ietf:params:oauth:grant-type:uma-ticket\" --data \"ticket=USD{permission_ticket} --data \"submit_request=true\"", "curl -X POST -H \"Content-Type: application/x-www-form-urlencoded\" -d 'grant_type=client_credentials&client_id=USD{client_id}&client_secret=USD{client_secret}' \"http://localhost:8080/realms/USD{realm_name}/protocol/openid-connect/token\"", "{ \"access_token\": USD{PAT}, \"expires_in\": 300, \"refresh_expires_in\": 1800, \"refresh_token\": USD{refresh_token}, \"token_type\": \"bearer\", \"id_token\": USD{id_token}, \"not-before-policy\": 0, \"session_state\": \"ccea4a55-9aec-4024-b11c-44f6f168439e\" }", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set", "curl -v -X POST http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"name\":\"Tweedl Social Service\", \"type\":\"http://www.example.com/rsrcs/socialstream/140-compatible\", \"icon_uri\":\"http://www.example.com/icons/sharesocial.png\", \"resource_scopes\":[ \"read-public\", \"post-updates\", \"read-private\", \"http://www.example.com/scopes/all\" ] }'", "curl -v -X POST http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"name\":\"Alice Resource\", \"owner\": \"alice\" }'", "curl -v -X POST http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"name\":\"Alice Resource\", \"owner\": \"alice\", \"ownerManagedAccess\": true }'", "curl -v -X PUT http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '{ \"_id\": \"Alice Resource\", \"name\":\"Alice Resource\", \"resource_scopes\": [ \"read\" ] }'", "curl -v -X DELETE http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set/{resource_id} -H 'Authorization: Bearer 'USDpat", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set/{resource_id}", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?name=Alice Resource", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?name=Alice Resource&exactName=true", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?uri=/api/alice", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?owner=alice", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?type=albums", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/resource_set?scope=read", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission -H 'Authorization: Bearer 'USDpat -H 'Content-Type: application/json' -d '[ { \"resource_id\": \"{resource_id}\", \"resource_scopes\": [ \"view\" ] } ]'", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission -H 'Authorization: Bearer 
'USDpat -H 'Content-Type: application/json' -d '[ { \"resource_id\": \"{resource_id}\", \"resource_scopes\": [ \"view\" ], \"claims\": { \"organization\": [\"acme\"] } } ]'", "curl -X POST http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket -H 'Authorization: Bearer 'USDaccess_token -H 'Content-Type: application/json' -d '{ \"resource\": \"{resource_id}\", \"requester\": \"{user_id}\", \"granted\": true, \"scopeName\": \"view\" }'", "curl http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket -H 'Authorization: Bearer 'USDaccess_token", "curl -X PUT http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket -H 'Authorization: Bearer 'USDaccess_token -H 'Content-Type: application/json' -d '{ \"id\": \"{ticket_id}\" \"resource\": \"{resource_id}\", \"requester\": \"{user_id}\", \"granted\": false, \"scopeName\": \"view\" }'", "curl -X DELETE http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/permission/ticket/{ticket_id} -H 'Authorization: Bearer 'USDaccess_token", "http://USD{host}:USD{port}/realms/USD{realm_name}/authz/protection/uma-policy/{resource_id}", "curl -X POST http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"roles\": [\"people-manager\"] }'", "curl -X POST http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"groups\": [\"/Managers/People Managers\"] }'", "curl -X POST http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"clients\": [\"my-client\"] }'", "curl -X POST http://localhost:8180/realms/photoz/authz/protection/uma-policy/{resource_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Cache-Control: no-cache' -H 'Content-Type: application/json' -d '{ \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"scopes\": [\"read\"], \"condition\": \"my-deployed-script.js\" }'", "curl -X PUT http://localhost:8180/realms/photoz/authz/protection/uma-policy/{permission_id} -H 'Authorization: Bearer 'USDaccess_token -H 'Content-Type: application/json' -d '{ \"id\": \"21eb3fed-02d7-4b5a-9102-29f3f09b6de2\", \"name\": \"Any people manager\", \"description\": \"Allow access to any people manager\", \"type\": \"uma\", \"scopes\": [ \"album:view\" ], \"logic\": \"POSITIVE\", \"decisionStrategy\": \"UNANIMOUS\", \"owner\": \"7e22131a-aa57-4f5f-b1db-6e82babcd322\", \"roles\": [ \"user\" ] }'", "curl -X DELETE http://localhost:8180/realms/photoz/authz/protection/uma-policy/{permission_id} -H 'Authorization: Bearer 'USDaccess_token", "http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy?resource={resource_id}", "http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy?name=Any people manager", 
"http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy?scope=read", "http://USD{host}:USD{port}/realms/USD{realm}/authz/protection/uma-policy", "{ \"authorization\": { \"permissions\": [ { \"resource_set_id\": \"d2fe9843-6462-4bfc-baba-b5787bb6e0e7\", \"resource_set_name\": \"Hello World Resource\" } ] }, \"jti\": \"d6109a09-78fd-4998-bf89-95730dfd0892-1464906679405\", \"exp\": 1464906971, \"nbf\": 0, \"iat\": 1464906671, \"sub\": \"f1888f4d-5172-4359-be0c-af338505d86c\", \"typ\": \"kc_ett\", \"azp\": \"hello-world-authz-service\" }", "http://USD{host}:USD{port}/realms/USD{realm_name}/protocol/openid-connect/token/introspect", "curl -X POST -H \"Authorization: Basic aGVsbG8td29ybGQtYXV0aHotc2VydmljZTpzZWNyZXQ=\" -H \"Content-Type: application/x-www-form-urlencoded\" -d 'token_type_hint=requesting_party_token&token=USD{RPT}' \"http://localhost:8080/realms/hello-world-authz/protocol/openid-connect/token/introspect\"", "{ \"permissions\": [ { \"resource_id\": \"90ccc6fc-b296-4cd1-881e-089e1ee15957\", \"resource_name\": \"Hello World Resource\" } ], \"exp\": 1465314139, \"nbf\": 0, \"iat\": 1465313839, \"aud\": \"hello-world-authz-service\", \"active\": true }", "{ \"active\": false }", "<dependencies> <dependency> <groupId>org.keycloak</groupId> <artifactId>keycloak-authz-client</artifactId> <version>USD{KEYCLOAK_VERSION}</version> </dependency> </dependencies>", "{ \"realm\": \"hello-world-authz\", \"auth-server-url\" : \"http://localhost:8080\", \"resource\" : \"hello-world-authz-service\", \"credentials\": { \"secret\": \"secret\" } }", "// create a new instance based on the configuration defined in a keycloak.json located in your classpath AuthzClient authzClient = AuthzClient.create();", "// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // send the entitlement request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server", "// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create an authorization request AuthorizationRequest request = new AuthorizationRequest(); // add permissions to the request based on the resources and scopes you want to check access request.addPermission(\"Default Resource\"); // send the entitlement request to the server in order to // obtain an RPT with permissions for a single resource AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(request); String rpt = response.getToken(); System.out.println(\"You got an RPT: \" + rpt); // now you can use the RPT to access protected resources on the resource server", "// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // create a new resource representation with the information we want ResourceRepresentation newResource = new ResourceRepresentation(); newResource.setName(\"New Resource\"); newResource.setType(\"urn:hello-world-authz:resources:example\"); newResource.addScope(new ScopeRepresentation(\"urn:hello-world-authz:scopes:view\")); ProtectedResource 
resourceClient = authzClient.protection().resource(); ResourceRepresentation existingResource = resourceClient.findByName(newResource.getName()); if (existingResource != null) { resourceClient.delete(existingResource.getId()); } // create the resource on the server ResourceRepresentation response = resourceClient.create(newResource); String resourceId = response.getId(); // query the resource using its newly generated id ResourceRepresentation resource = resourceClient.findById(resourceId); System.out.println(resource);", "// create a new instance based on the configuration defined in keycloak.json AuthzClient authzClient = AuthzClient.create(); // send the authorization request to the server in order to // obtain an RPT with all permissions granted to the user AuthorizationResponse response = authzClient.authorization(\"alice\", \"alice\").authorize(); String rpt = response.getToken(); // introspect the token TokenIntrospectionResponse requestingPartyToken = authzClient.protection().introspectRequestingPartyToken(rpt); System.out.println(\"Token status is: \" + requestingPartyToken.getActive()); System.out.println(\"Permissions granted by the server: \"); for (Permission granted : requestingPartyToken.getPermissions()) { System.out.println(granted); }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/authorization_services_guide/service_overview
Chapter 3. OpenJDK features
Chapter 3. OpenJDK features The latest Red Hat build of OpenJDK 17 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 17 releases. Note For all the other changes and security fixes, see OpenJDK 17.0.2 Released . 3.1. New features and enhancements Review the following release notes to understand new features and feature enhancements that have been included with the Red Hat build of OpenJDK 17.0.2 release: IANA Time Zone Database The Internet Assigned Numbers Authority (IANA) updated its Time Zone Database to version 2021c. Red Hat OpenJDK date and time libraries depends on IANA's Time Zone Database for determining local time for various regions around the world. Note The 2021b release of the Time Zone Database updated time zone rules that existed before 1970. For more information about the 2021b release, see 2021b release of tz code and data available on the IANA website. For more information about IANA's 2021c Time Zone Database release, see JDK-8274857 . For more information about IANA's Time Zone Database, see Time Zone Database on the IANA website. 3.2. OpenJDK enhancements Red Hat build of OpenJDK 17 provides enhancements to features originally created in releases of Red Hat build of OpenJDK. OpenJDK's identification of Microsoft Windows versions Before the Red Hat build of OpenJDK 17 release, the os.name system property that is retrieved from System.getProperty() and the HotSpot error logs would report Windows 10.0 on Microsoft Windows 11 and Windows Server 2019 on Microsoft Windows Server 2022. Red Hat build of OpenJDK now identifies the correct version on these systems. System property behavior change Red Hat build of OpenJDK 17 reverts the behavior of the file.encoding system property to a state identical to Red Hat build of OpenJDK 11 on most supported platforms, except for macOS. This change improves how the system property behaves on the Microsoft Windows platform, where the system locales and user locales differ. For more information about the behavior change to the file.encoding system property, see JDK-8275343 . Vector class update Red Hat build of OpenJDK 17 updates the java.util.Vector class, so that this class now reports any ClassNotFoundException messages that have been generated with the java.io.ObjectInputStream.GetField.get(name, object) method during the deserialization process. These exception messages occur when a vector's class, wrapped inside an element, is not found. Before the java.util.Vector class update, the class reported any StreamCorruptedException messages when the previously detailed incident occurred. A StreamCorruptedException message does not provide information about a missing class. For more information about the update to the java.util.Vector class, see JDK-8277157 . Z Garbage Collector bug fix Before the Red Hat build of OpenJDK 17 update, the Z Garbage Collector (ZGC) experienced lengthy Concurrent Process Non-strong References times that caused latency and throughput issues for Java applications that use ZGC for memory management. You could determine these lengthy times by entering the -Xlog:gc* against a garbage collector (GC) log in your command-line interface. The Red Hat build of OpenJDK 17 release removes the bug that caused these issues, so the ZGC can now achieve shorter Concurrent Process Non-strong References times. For more information about ZGC bug fix, see JDK-8277533 . 3.3. 
Deprecated and removed features Review the following release notes to understand pre-existing features that have been either deprecated or removed in the Red Hat build of OpenJDK 17.0.2 release: Google GlobalSign root certificate Red Hat build of OpenJDK 17.0.2 removes the following root certificate from the cacerts keystore: Alias name globalsignr2ca [jdk] Distinguished name CN=GlobalSign, O=GlobalSign, OU=GlobalSign Root CA - R2 For more information about this removed Google GlobalSign root certificate, see JDK-8272535 .
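As an illustrative check of some of the properties discussed in this chapter, such as os.name and file.encoding, the effective system property values can be printed from the command line. This is only a sketch; the exact output depends on the platform, locale, and OpenJDK build:

java -XshowSettings:properties -version 2>&1 | grep -E 'os\.name|file\.encoding'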
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.2/rn_openjdk-1702-features_openjdk
Chapter 3. Tools to Assist in Migration
Chapter 3. Tools to Assist in Migration 3.1. Use Migration Toolkit for Applications to analyze applications for migration Migration Toolkit for Applications (MTA) is an extensible and customizable rule-based set of tools that helps simplify migration of Java applications. It analyzes the APIs, technologies, and architectures used by the applications you plan to migrate and provides detailed migration reports for each application. These reports provide the following information. Detailed explanations of the migration changes needed Whether the reported change is mandatory or optional Whether the reported change is complex or trivial Links to the code requiring the migration change Hints and links to information about how to make the required changes An estimate of the level of effort for each migration issue found and the total estimated effort to migrate the application You can use MTA to analyze the code and architecture of your JBoss EAP 6 applications before you migrate them to JBoss EAP 7. The MTA rule set for migration from JBoss EAP 6 to JBoss EAP 7 reports on XML descriptors and specific application code and parameters that need to be replaced by an alternative configuration when migrating to JBoss EAP 7. For more information about how to use Migration Toolkit for Applications to analyze your JBoss EAP 6 applications, see the Migration Toolkit for Applications Getting Started Guide . 3.2. Use the JBoss Server Migration Tool to Migrate Server Configurations The JBoss Server Migration Tool is the preferred method to update your server configuration to include the new features and settings in JBoss EAP 7 while keeping your existing configuration. The JBoss Server Migration Tool reads your existing JBoss EAP server configuration files and adds configurations for any new subsystems, updates the existing subsystem configurations with new features, and removes any obsolete subsystem configurations. You can use the JBoss Server Migration Tool to migrate standalone servers and managed domains for the following configurations. Migrating to JBoss EAP 7.4 The JBoss Server Migration Tool ships with JBoss EAP 7.4, so there is no separate download or installation required. This tool supports migration from JBoss EAP 6.4 and all 7.x releases up to JBoss EAP 7.4. You run the tool by executing the jboss-server-migration script located in the EAP_HOME /bin directory. For more information about how to configure and run the tool, see Using the JBoss Server Migration Tool . It is recommended that you use this version of the JBoss Server Migration Tool to migrate your server configuration to JBoss EAP 7.4 as this version of the tool is supported . Migrating from WildFly to JBoss EAP If you want to migrate from the WildFly server to JBoss EAP, you must download the latest binary distribution of the JBoss Server Migration Tool from the JBoss Server Migration Tool GitHub repository. This open source, standalone version of the tool supports migration from several versions of the WildFly server to JBoss EAP. For information about how to install and run this version of the tool, see the JBoss Server Migration Tool User Guide . Important The binary distribution of the JBoss Server Migration Tool is not supported. If you are migrating from a release of JBoss EAP, it is recommended that you use this supported version of the tool to migrate your server configuration to JBoss EAP 7.4 instead.
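A typical invocation of the bundled script looks roughly like the following. Because the bundled tool migrates into the JBoss EAP 7.4 installation it ships with, this sketch passes only the source installation path; the option name and path are assumptions for illustration, so run the script with --help and consult the referenced guides for the exact syntax supported by your version:

cd EAP_HOME/bin
./jboss-server-migration.sh --source /opt/jboss-eap-6.4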
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/migration_guide/tools_to_assist_in_migration
3.7. Deploy a VDB via Admin API
3.7. Deploy a VDB via Admin API You can deploy a VDB using the deploy method provided by the Admin interface within the Admin API package ( org.teiid.adminapi ). Javadocs for Red Hat JBoss Data Virtualization can be found on the Red Hat Customer Portal . Note In domain mode, when deploying using the Admin API, the VDB is deployed to all available servers.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/deploy_a_vdb_via_admin_api1
15.16. Importing the Replication Changelog from an LDIF-formatted Changelog Dump
15.16. Importing the Replication Changelog from an LDIF-formatted Changelog Dump Complete this procedure to import an LDIF-formatted replication changelog dump into Directory Server. Prerequisites Replication is enabled on the Directory Server instance. The changelog dump has been created as described in Section 15.15, "Exporting the Replication Changelog" . Procedure To import the changelog dump from the /tmp/changelog.ldif file, enter: Note that the dirsrv user requires permissions to read the specified file.
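Because the import runs under the dirsrv user, make sure that account can read the dump before running the command, for example (assuming the default dirsrv user and group):

chown dirsrv:dirsrv /tmp/changelog.ldif
chmod 600 /tmp/changelog.ldif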
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication restore-changelog from-ldif /tmp/changelog.ldif" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/importing-the-replication-changelog-from-an-ldif-formatted-changelog-dump
Chapter 4. Specifics of Individual Software Collections
Chapter 4. Specifics of Individual Software Collections This chapter is focused on the specifics of certain Software Collections and provides additional details concerning these components. 4.1. Red Hat Developer Toolset Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. Red Hat Developer Toolset provides current versions of the GNU Compiler Collection , GNU Debugger , and other development, debugging, and performance monitoring tools. Similarly to other Software Collections, an additional set of tools is installed into the /opt/ directory. These tools are enabled by the user on demand using the supplied scl utility. Similarly to other Software Collections, these do not replace the Red Hat Enterprise Linux system versions of these tools, nor will they be used in preference to those system versions unless explicitly invoked using the scl utility. For an overview of features, refer to the Main Features section of the Red Hat Developer Toolset Release Notes . For a complete list of components, see the Red Hat Developer Toolset Components table in the Red Hat Developer Toolset User Guide . Note that since Red Hat Developer Toolset 3.1, Red Hat Developer Toolset requires the rh-java-common Software Collection. 4.2. Ruby on Rails 5.0 Red Hat Software Collections 3.2 provides the rh-ruby24 Software Collection together with the rh-ror50 Collection. To install Ruby on Rails 5.0 , type the following command as root : yum install rh-ror50 Installing any package from the rh-ror50 Software Collection automatically pulls in rh-ruby24 and rh-nodejs6 as dependencies. The rh-nodejs6 Collection is used by certain gems in an asset pipeline to post-process web resources, for example, sass or coffee-script source files. Additionally, the Action Cable framework uses rh-nodejs6 for handling WebSockets in Rails. To run the rails s command without requiring rh-nodejs6 , disable the coffee-rails and uglifier gems in the Gemfile . To run Ruby on Rails without Node.js , run the following command, which will automatically enable rh-ruby24 : scl enable rh-ror50 bash To run Ruby on Rails with all features, enable also the rh-nodejs6 Software Collection: scl enable rh-ror50 rh-nodejs6 bash The rh-ror50 Software Collection is supported together with the rh-ruby24 and rh-nodejs6 components. 4.3. MongoDB 3.6 The rh-mongodb36 Software Collection is available only for Red Hat Enterprise Linux 7. See Section 4.4, "MongoDB 3.4" for instructions on how to use MongoDB 3.4 on Red Hat Enterprise Linux 6. To install the rh-mongodb36 collection, type the following command as root : yum install rh-mongodb36 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb36 'mongo' Note The rh-mongodb36-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6 or later. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . 
To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb36-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb36-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb36-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb36-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.4. MongoDB 3.4 To install the rh-mongodb34 collection, type the following command as root : yum install rh-mongodb34 To run the MongoDB shell utility, type the following command: scl enable rh-mongodb34 'mongo' Note The rh-mongodb34-mongo-cxx-driver package has been built with the -std=gnu++14 option using GCC from Red Hat Developer Toolset 6. Binaries using the shared library for the MongoDB C++ Driver that use C++11 (or later) features have to be built also with Red Hat Developer Toolset 6. See C++ compatibility details in the Red Hat Developer Toolset 6 User Guide . MongoDB 3.4 on Red Hat Enterprise Linux 6 If you are using Red Hat Enterprise Linux 6, the following instructions apply to your system. To start the MongoDB daemon, type the following command as root : service rh-mongodb34-mongod start To start the MongoDB daemon on boot, type this command as root : chkconfig rh-mongodb34-mongod on To start the MongoDB sharding server, type this command as root : service rh-mongodb34-mongos start To start the MongoDB sharding server on boot, type the following command as root : chkconfig rh-mongodb34-mongos on Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. MongoDB 3.4 on Red Hat Enterprise Linux 7 When using Red Hat Enterprise Linux 7, the following commands are applicable. To start the MongoDB daemon, type the following command as root : systemctl start rh-mongodb34-mongod.service To start the MongoDB daemon on boot, type this command as root : systemctl enable rh-mongodb34-mongod.service To start the MongoDB sharding server, type the following command as root : systemctl start rh-mongodb34-mongos.service To start the MongoDB sharding server on boot, type this command as root : systemctl enable rh-mongodb34-mongos.service Note that the MongoDB sharding server does not work unless the user starts at least one configuration server and specifies it in the mongos.conf file. 4.5. Maven The rh-maven35 Software Collection, available only for Red Hat Enterprise Linux 7, provides a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting, and documentation from a central piece of information. To install the rh-maven35 Collection, type the following command as root : yum install rh-maven35 To enable this collection, type the following command at a shell prompt: scl enable rh-maven35 bash Global Maven settings, such as remote repositories or mirrors, can be customized by editing the /opt/rh/rh-maven35/root/etc/maven/settings.xml file. For more information about using Maven, refer to the Maven documentation . Usage of plug-ins is described in this section ; to find documentation regarding individual plug-ins, see the index of plug-ins . 4.6. 
Passenger The rh-passenger40 Software Collection provides Phusion Passenger , a web and application server designed to be fast, robust and lightweight. The rh-passenger40 Collection supports multiple versions of Ruby , particularly the ruby193 , ruby200 , and rh-ruby22 Software Collections together with Ruby on Rails using the ror40 or rh-ror41 Collections. Prior to using Passenger with any of the Ruby Software Collections, install the corresponding package from the rh-passenger40 Collection: the rh-passenger-ruby193 , rh-passenger-ruby200 , or rh-passenger-ruby22 package. The rh-passenger40 Software Collection can also be used with Apache httpd from the httpd24 Software Collection. To do so, install the rh-passenger40-mod_passenger package. Refer to the default configuration file /opt/rh/httpd24/root/etc/httpd/conf.d/passenger.conf for an example of Apache httpd configuration, which shows how to use multiple Ruby versions in a single Apache httpd instance. Additionally, the rh-passenger40 Software Collection can be used with the nginx 1.6 web server from the nginx16 Software Collection. To use nginx 1.6 with rh-passenger40 , you can run Passenger in Standalone mode using the following command in the web appplication's directory: scl enable nginx16 rh-passenger40 'passenger start' Alternatively, edit the nginx16 configuration files as described in the upstream Passenger documentation . 4.7. Database Connectors Database connector packages provide the database client functionality, which is necessary for local or remote connection to a database server. Table 4.1, "Interoperability Between Languages and Databases" lists Software Collections with language runtimes that include connectors for certain database servers: yes - the combination is supported no - the combination is not supported Table 4.1. Interoperability Between Languages and Databases Database Language (Software Collection) MariaDB MongoDB MySQL PostgreSQL Redis rh-nodejs4 no no no no no rh-nodejs6 no no no no no rh-nodejs8 no no no no no rh-nodejs10 no no no no no rh-perl520 yes no yes yes no rh-perl524 yes no yes yes no rh-perl526 yes no yes yes no rh-php56 yes yes yes yes no rh-php70 yes no yes yes no rh-php71 yes no yes yes no rh-php72 yes no yes yes no python27 yes yes yes yes no rh-python34 no yes no yes no rh-python35 yes yes yes yes no rh-python36 yes yes yes yes no rh-ror41 yes yes yes yes no rh-ror42 yes yes yes yes no rh-ror50 yes yes yes yes no rh-ruby25 yes yes yes yes no
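Once one of the MongoDB daemons described earlier in this chapter is running, a quick connectivity check can be performed through the collection-enabled shell. This sketch uses the rh-mongodb36 collection and the default port; substitute rh-mongodb34 as needed:

scl enable rh-mongodb36 'mongo --eval "db.adminCommand({ ping: 1 })"'

A reply containing "ok" : 1 indicates that the mongod service is accepting connections.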
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-Individual_Collections
Chapter 2. Understanding OpenShift sandboxed containers
Chapter 2. Understanding OpenShift sandboxed containers OpenShift sandboxed containers support for OpenShift Container Platform provides you with built-in support for running Kata Containers as an additional optional runtime. The new runtime supports containers in dedicated virtual machines (VMs), providing improved workload isolation. This is particularly useful for performing the following tasks: Run privileged or untrusted workloads OpenShift sandboxed containers (OSC) makes it possible to safely run workloads that require specific privileges, without having to risk compromising cluster nodes by running privileged containers. Workloads that require special privileges include the following: Workloads that require special capabilities from the kernel, beyond the default ones granted by standard container runtimes such as CRI-O, for example to access low-level networking features. Workloads that need elevated root privileges, for example to access a specific physical device. With OpenShift sandboxed containers, it is possible to pass only a specific device through to the VM, ensuring that the workload cannot access or misconfigure the rest of the system. Workloads for installing or using set-uid root binaries. These binaries grant special privileges and, as such, can present a security risk. With OpenShift sandboxed containers, additional privileges are restricted to the virtual machines, and grant no special access to the cluster nodes. Some workloads may require privileges specifically for configuring the cluster nodes. Such workloads should still use privileged containers, because running on a virtual machine would prevent them from functioning. Ensure kernel isolation for each workload OpenShift sandboxed containers supports workloads that require custom kernel tuning (such as sysctl , scheduler changes, or cache tuning) and the creation of custom kernel modules (such as out of tree or special arguments). Share the same workload across tenants OpenShift sandboxed containers enables you to support multiple users (tenants) from different organizations sharing the same OpenShift cluster. The system also lets you run third-party workloads from multiple vendors, such as container network functions (CNFs) and enterprise applications. Third-party CNFs, for example, may not want their custom settings interfering with packet tuning or with sysctl variables set by other applications. Running inside a completely isolated kernel is helpful in preventing "noisy neighbor" configuration problems. Ensure proper isolation and sandboxing for testing software You can use OpenShift sandboxed containers to run a containerized workload with known vulnerabilities or to handle an issue in a legacy application. This isolation also enables administrators to give developers administrative control over pods, which is useful when the developer wants to test or validate configurations beyond those an administrator would typically grant. Administrators can, for example, safely and securely delegate kernel packet filtering (eBPF) to developers. Kernel packet filtering requires CAP_ADMIN or CAP_BPF privileges, and is therefore not allowed under a standard CRI-O configuration, as this would grant access to every process on the Container Host worker node. Similarly, administrators can grant access to intrusive tools such as SystemTap, or support the loading of custom kernel modules during their development. 
Ensure default resource containment through VM boundaries By default, resources such as CPU, memory, storage, or networking are managed in a more robust and secure way in OpenShift sandboxed containers. Since OpenShift sandboxed containers are deployed on VMs, additional layers of isolation and security give a finer-grained access control to the resource. For example, an errant container will not be able to allocate more memory than is available to the VM. Conversely, a container that needs dedicated access to a network card or to a disk can take complete control over that device without getting any access to other devices. 2.1. OpenShift sandboxed containers supported platforms You can install OpenShift sandboxed containers on a bare-metal server or on an Amazon Web Services (AWS) bare-metal instance. Bare-metal instances offered by other cloud providers are not supported. Red Hat Enterprise Linux CoreOS (RHCOS) is the only supported operating system for OpenShift sandboxed containers. OpenShift sandboxed containers 1.3 runs on Red Hat Enterprise Linux CoreOS (RHCOS) 8.6. OpenShift sandboxed containers 1.3 is compatible with OpenShift Container Platform 4.11. 2.2. OpenShift sandboxed containers common terms The following terms are used throughout the documentation. Sandbox A sandbox is an isolated environment where programs can run. In a sandbox, you can run untested or untrusted programs without risking harm to the host machine or the operating system. In the context of OpenShift sandboxed containers, sandboxing is achieved by running workloads in a different kernel using virtualization, providing enhanced control over the interactions between multiple workloads that run on the same host. Pod A pod is a construct that is inherited from Kubernetes and OpenShift Container Platform. It represents resources where containers can be deployed. Containers run inside of pods, and pods are used to specify resources that can be shared between multiple containers. In the context of OpenShift sandboxed containers, a pod is implemented as a virtual machine. Several containers can run in the same pod on the same virtual machine. OpenShift sandboxed containers Operator An Operator is a software component that automates operations, which are actions that a human operator could do on the system. The OpenShift sandboxed containers Operator is tasked with managing the lifecycle of sandboxed containers on a cluster. You can use the OpenShift sandboxed containers Operator to perform tasks such as the installation and removal of sandboxed containers, software updates, and status monitoring. Kata Containers Kata Containers is a core upstream project that is used to build OpenShift sandboxed containers. OpenShift sandboxed containers integrate Kata Containers with OpenShift Container Platform. KataConfig KataConfig objects represent configurations of sandboxed containers. They store information about the state of the cluster, such as the nodes on which the software is deployed. Runtime class A RuntimeClass object describes which runtime can be used to run a given workload. A runtime class that is named kata is installed and deployed by the OpenShift sandboxed containers Operator. The runtime class contains information about the runtime that describes resources that the runtime needs to operate, such as the pod overhead . 2.3. OpenShift sandboxed containers workload management OpenShift sandboxed containers provides the following features for enhancing workload management and allocation: 2.3.1. 
OpenShift sandboxed containers building blocks The OpenShift sandboxed containers Operator encapsulates all of the components from Kata containers. It manages installation, lifecycle, and configuration tasks. The OpenShift sandboxed containers Operator is packaged in the Operator bundle format as two container images. The bundle image contains metadata and is required to make the operator OLM-ready. The second container image contains the actual controller that monitors and manages the KataConfig resource. 2.3.2. RHCOS extensions The OpenShift sandboxed containers Operator is based on the Red Hat Enterprise Linux CoreOS (RHCOS) extensions concept. Red Hat Enterprise Linux CoreOS (RHCOS) extensions are a mechanism to install optional OpenShift Container Platform software. The OpenShift sandboxed containers Operator uses this mechanism to deploy sandboxed containers on a cluster. The sandboxed containers RHCOS extension contains RPMs for Kata, QEMU, and its dependencies. You can enable them by using the MachineConfig resources that the Machine Config Operator provides. Additional resources Adding extensions to RHCOS 2.3.3. Virtualization and OpenShift sandboxed containers You can use OpenShift sandboxed containers on clusters with OpenShift Virtualization. To run OpenShift Virtualization and OpenShift sandboxed containers at the same time, you must enable VMs to migrate, so that they do not block node reboots. Configure the following parameters on your VM: Use ocs-storagecluster-ceph-rbd as the storage class. Set the evictionStrategy parameter to LiveMigrate in the VM. Additional resources Configuring local storage for virtual machines Configuring virtual machine eviction strategy 2.4. Understanding compliance and risk management OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . OpenShift sandboxed containers can be used on FIPS enabled clusters. When running in FIPS mode, OpenShift sandboxed containers components, VMs, and VM images are adapted to comply with FIPS. FIPS compliance is one of the most critical components required in highly secure environments, to ensure that only supported cryptographic technologies are allowed on nodes. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. To understand Red Hat's view of OpenShift Container Platform compliance frameworks, refer to the Risk Management and Regulatory Readiness chapter of the OpenShift Security Guide Book .
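As an illustrative verification after the Operator has finished its deployment, the objects described in the common terms section can be listed with standard oc commands; the kata runtime class name follows the default described above:

oc get kataconfig
oc get runtimeclass kata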
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/sandboxed_containers_support_for_openshift/understanding-sandboxed-containers
Chapter 14. Samba
Chapter 14. Samba Samba is an open source implementation of the Server Message Block (SMB) protocol. It allows the networking of Microsoft Windows (R), Linux, UNIX, and other operating systems together, enabling access to Windows-based file and printer shares. Samba's use of SMB allows it to appear as a Windows server to Windows clients. 14.1. Introduction to Samba The third major release of Samba, version 3.0.0, introduced numerous improvements from prior versions, including: The ability to join an Active Directory domain by means of LDAP and Kerberos Built in Unicode support for internationalization Support for Microsoft Windows XP Professional client connections to Samba servers without needing local registry hacking Two new documents developed by the Samba.org team, which include a 400+ page reference manual, and a 300+ page implementation and integration manual. For more information about these published titles, refer to Section 14.9.3, "Related Books" . 14.1.1. Samba Features Samba is a powerful and versatile server application. Even seasoned system administrators must know its abilities and limitations before attempting installation and configuration. What Samba can do: Serve directory trees and printers to Linux, UNIX, and Windows clients Assist in network browsing (with or without NetBIOS) Authenticate Windows domain logins Provide Windows Internet Name Service (WINS) name server resolution Act as a Windows NT (R)-style Primary Domain Controller (PDC) Act as a Backup Domain Controller (BDC) for a Samba-based PDC Act as an Active Directory domain member server Join a Windows NT/2000/2003 PDC What Samba cannot do: Act as a BDC for a Windows PDC (and vice versa) Act as an Active Directory domain controller
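For example, a Linux client can list the file and printer shares exported by a Samba server with the standard smbclient utility; the server name and user below are placeholders:

smbclient -L sambaserver -U username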
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-samba
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 4. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OpenShift Container Platform 4.2 to 4.5 source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.10 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster . Install the Migration Toolkit for Containers Operator on the source cluster: OpenShift Container Platform 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager. OpenShift Container Platform 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface. Configure object storage to use as a replication repository. Note To install MTC on OpenShift Container Platform 3, see Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 . To uninstall MTC, see Uninstalling MTC and deleting resources . 4.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. Table 4.1. MTC compatibility: Migrating from a legacy platform OpenShift Container Platform 4.5 or earlier OpenShift Container Platform 4.6 or later Stable MTC version MTC 1.7. z Legacy 1.7 operator: Install manually with the operator.yml file. Important This cluster cannot be the control cluster. MTC 1.7. z Install with OLM, release channel release-v1.7 Note Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 4.2. 
Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.10 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.10 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 4.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 4.2 to 4.5 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform versions 4.2 to 4.5. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.10. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: USD podman cp USD(podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: USD podman cp USD(podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your source cluster. 
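For example, logging in to the source cluster might look like the following; the token and API server URL are placeholders for your environment:

oc login --token=<token> --server=https://api.source-cluster.example.com:6443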
Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 4.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.10, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 4.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 4.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 4.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. 
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 4.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 4.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 4.4.2.1. NetworkPolicy configuration 4.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 4.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 4.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. 
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 4.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 4.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 4.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 4.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 4.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 4.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. 
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 4.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Data Foundation. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate OpenShift Data Foundation deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. 4.5.3. Additional resources Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 4.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
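The cluster-scoped deletions above repeat the same pattern: list a resource type by name, filter for migration.openshift.io or velero , and delete the matches. As a convenience only, and not part of the documented procedure, a shell sketch such as the following performs the grep-based deletions in one pass. Review the matched resources before deleting them, and still delete the migration-operator cluster role and cluster role binding separately as shown above:
USD oc get crds -o name | grep -E 'migration.openshift.io|velero' | xargs -r oc delete
USD oc get clusterroles -o name | grep -E 'migration.openshift.io|velero' | xargs -r oc delete
USD oc get clusterrolebindings -o name | grep -E 'migration.openshift.io|velero' | xargs -r oc delete
The -r flag prevents xargs from running oc delete when a pattern matches nothing.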
[ "podman login registry.redhat.io", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migration_toolkit_for_containers/installing-mtc-restricted
Chapter 1. Recommended host practices
Chapter 1. Recommended host practices This topic provides recommended host practices for OpenShift Container Platform. Important These guidelines apply to OpenShift Container Platform with software-defined networking (SDN), not Open Virtual Network (OVN). 1.1. Recommended node host practices The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . podsPerCore cannot exceed maxPods . maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 1.2. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools. Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. 
For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-max-pods 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=large-pods Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-max-pods 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-max-pods -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 1.3. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Set maxUnavailable to the value that you want: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 1.4. Control plane node sizing The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts: 12 image streams 3 build configurations 6 builds 1 deployment with 2 pod replicas mounting two secrets each 2 deployments with 1 pod replica mounting two secrets 3 services pointing to the deployments 3 routes pointing to the deployments 10 secrets, 2 of which are mounted by the deployments 10 config maps, 2 of which are mounted by the deployments Number of worker nodes Cluster load (namespaces) CPU cores Memory (GB) 25 500 4 16 100 1000 8 32 250 4000 16 96 On a large and dense cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails. The failures can be due to unexpected issues with power, network or underlying infrastructure in addition to intentional cases where the cluster is restarted after shutting it down to save costs. 
The remaining two control plane nodes must handle the load in order to remain highly available, which leads to an increase in resource usage. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operator updates. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. Operator Lifecycle Manager (OLM) runs on the control plane nodes, and its memory footprint depends on the number of namespaces and user-installed Operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. The following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.9 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin. Note In OpenShift Container Platform 4.9, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration. 1.4.1. Selecting a larger Amazon Web Services instance type for control plane machines If the control plane machines in an Amazon Web Services (AWS) cluster require more resources, you can select a larger AWS instance type for the control plane machines to use. 1.4.1.1. Changing the Amazon Web Services instance type by using the AWS console You can change the Amazon Web Services (AWS) instance type that your control plane machines use by updating the instance type in the AWS console. Prerequisites You have access to the AWS console with the permissions required to modify the EC2 Instance for your cluster. You have access to the OpenShift Container Platform cluster as a user with the cluster-admin role. Procedure Open the AWS console and fetch the instances for the control plane machines. Choose one control plane machine instance. For the selected control plane machine, back up the etcd data by creating an etcd snapshot. For more information, see "Backing up etcd". In the AWS console, stop the control plane machine instance. Select the stopped instance, and click Actions Instance Settings Change instance type . 
Change the instance to a larger type, ensuring that the type is the same base as the selection, and apply changes. For example, you can change m6i.xlarge to m6i.2xlarge or m6i.4xlarge . Start the instance. If your OpenShift Container Platform cluster has a corresponding Machine object for the instance, update the instance type of the object to match the instance type set in the AWS console. Repeat this process for each control plane machine. Additional resources Backing up etcd 1.5. Recommended etcd practices Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd's consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. Because of these reasons, avoid colocating other workloads on the control-plane nodes. In terms of latency, run etcd on top of a block device that can write at least 50 sequential IOPS of 8000 bytes, that is, with a latency of 20 ms. Keep in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio. To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads. The following hard disk features provide optimal etcd performance: Low latency to support fast read operations. High-bandwidth writes for faster compactions and defragmentation. High-bandwidth reads for faster recovery from failures. Solid state drives as a minimum selection; however, NVMe drives are preferred. Server-grade hardware from various manufacturers for increased reliability. RAID 0 technology for increased performance. Dedicated etcd drives. Do not place log files or other heavy workloads on etcd drives. Avoid NAS or SAN setups and spinning drives. Always benchmark by using utilities such as fio. Continuously monitor the cluster performance as it increases. Note Avoid using the Network File System (NFS) protocol or other network-based file systems. Some key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics. To validate the hardware for etcd before or after you create the OpenShift Container Platform cluster, you can use fio. Prerequisites Container runtimes such as Podman or Docker are installed on the machine that you're testing. Data is written to the /var/lib/etcd path. 
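The etcd-perf image used in the following procedure wraps fio with a write pattern similar to the etcd WAL workload: small sequential writes, each followed by fdatasync. If you prefer to run fio directly, the following job is an illustrative sketch only; it approximates the 8000-byte sequential writes described above and is not the exact job definition used by the etcd-perf image:
USD sudo fio --name=etcd-wal-check --directory=/var/lib/etcd --rw=write --bs=8000 --size=100m --ioengine=sync --fdatasync=1
In the resulting output, the fsync/fdatasync latency percentiles correspond to the fsync metric discussed in the procedure; the 99th percentile should stay well below 20 ms.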
Procedure Run fio and analyze the results: If you use Podman, run this command: USD sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf If you use Docker, run this command: USD sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 20 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows: etcd_disk_wal_fsync_duration_seconds_bucket metric reports etcd's WAL fsync duration etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration etcd_server_leader_changes_seen_total metric reports the leader changes Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OpenShift Container Platform cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric. The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms. 1.6. Moving etcd to a different disk You can move etcd from a shared disk to a separate disk to prevent or resolve performance issues. Prerequisites The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role] . This applies to a controller, worker, or a custom pool. The node's auxiliary storage device, such as /dev/sdb , must match the sdb device name used in this file. Change this reference in all places in the file if your device name differs. Note This procedure does not move parts of the root file system, such as /var/ , to another disk or partition on an installed node. The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OpenShift Container Platform 4.9 container storage. Use the following steps to move etcd to a different device: Procedure Create a machineconfig YAML file named etcd-mc.yml and add the following information: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Make File System on /dev/sdb DefaultDependencies=no BindsTo=dev-sdb.device After=dev-sdb.device var.mount [email protected] [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/sdb TimeoutSec=0 [Install] WantedBy=var-lib-containers.mount enabled: true name: [email protected] - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test !
-d /var/lib/etcd/member ExecStart=/usr/sbin/setenforce 0 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/setenforce 1 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: sync-var-lib-etcd-to-etcd.service - contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: restorecon-var-lib-etcd.service Create the machine configuration by entering the following commands: USD oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} ... output omitted ... USD oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created USD oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} [... output omitted ...] USD oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created The nodes are updated and rebooted. After the reboot completes, the following events occur: An XFS file system is created on the specified disk. The disk mounts to /var/lib/etcd . The content from /sysroot/ostree/deploy/rhcos/var/lib/etcd syncs to /var/lib/etcd . A restore of SELinux labels is forced for /var/lib/etcd . The old content is not removed. After the nodes are on a separate disk, update the machine configuration file, etcd-mc.yml , with the following information: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount Apply the modified version that removes the logic for creating and syncing the device by entering the following command: USD oc replace -f etcd-mc.yml This step prevents the nodes from rebooting. Additional resources Red Hat Enterprise Linux CoreOS (RHCOS) 1.7. Defragmenting etcd data For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes. Monitor these key metrics: etcd_server_quota_backend_bytes , which is the current quota limit etcd_mvcc_db_total_size_in_use_in_bytes , which indicates the actual database usage after a history compaction etcd_mvcc_db_total_size_in_bytes , which shows the database size, including free space waiting for defragmentation Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction. History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system. Defragmentation occurs automatically, but you can also trigger it manually. 
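To watch fragmentation as it accumulates, you can combine the metrics listed above in a Prometheus query. The following PromQL expression is a sketch that reports the fragmented share of the etcd backend database per member, using only the metrics already described in this section:
(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes) / etcd_mvcc_db_total_size_in_bytes
A result approaching the fragmented percentage reported in the defragcontroller log output shown below indicates that defragmentation is about to reclaim a significant amount of space.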
Note Automatic defragmentation is good for most cases, because the etcd operator uses cluster information to determine the most efficient operation for the user. 1.7.1. Automatic defragmentation The etcd Operator automatically defragments disks. No manual intervention is needed. Verify that the defragmentation process is successful by viewing one of these logs: etcd logs cluster-etcd-operator pod operator status error log Warning Automatic defragmentation can cause leader election failure in various OpenShift core components, such as the Kubernetes controller manager, which triggers a restart of the failing component. The restart is harmless and either triggers failover to the running instance or the component resumes work again after the restart. Example log output I0907 08:43:12.171919 1 defragcontroller.go:198] etcd member "ip- 10-0-191-150.example.redhat.com" backend store fragmented: 39.33 %, dbSize: 349138944 1.7.2. Manual defragmentation You can monitor the etcd_db_total_size_in_bytes metric to determine whether manual defragmentation is necessary. You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024 Warning Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover. Follow this procedure to defragment etcd data on each etcd member. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Determine which etcd member is the leader, because the leader should be defragmented last. Get the list of etcd pods: USD oc -n openshift-etcd get pods -l k8s-app=etcd -o wide Example output etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none> Choose a pod and run the following command to determine which etcd member is the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table Example output Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com . Defragment an etcd member. Connect to the running etcd container, passing in the name of a pod that is not the leader: USD oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com Unset the ETCDCTL_ENDPOINTS environment variable: sh-4.4# unset ETCDCTL_ENDPOINTS Defragment the etcd member: sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag Example output Finished defragmenting etcd member[https://localhost:2379] If a timeout error occurs, increase the value for --command-timeout until the command succeeds. Verify that the database size was reduced: sh-4.4# etcdctl endpoint status -w table --cluster Example output +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last. Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them. Check if there are any NOSPACE alarms: sh-4.4# etcdctl alarm list Example output memberID:12345678912345678912 alarm:NOSPACE Clear the alarms: sh-4.4# etcdctl alarm disarm 1.8. 
OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Container Storage Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information on infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. 1.9. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Grafana, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: 
NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 1.10. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 1.11. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. 
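Before you edit the IngressController resource in the following procedure, you can confirm that schedulable nodes carrying the infra label exist. This is a quick check only; the label is the same one used by the nodeSelector examples in this section:
USD oc get nodes -l node-role.kubernetes.io/infra -o wide
If the command returns no nodes, label or provision infrastructure nodes before moving the router.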
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1 Because the role list includes infra , the pod is running on the correct node. 1.12. Infrastructure node sizing Infrastructure nodes are nodes that are labeled to run pieces of the OpenShift Container Platform environment. The infrastructure node resource requirements depend on the cluster age, nodes, and objects in the cluster, as these factors can lead to an increase in the number of metrics or time series in Prometheus. The following infrastructure node size recommendations are based on the results of cluster maximums and control plane density focused testing. Number of worker nodes CPU cores Memory (GB) 25 4 16 100 8 32 250 16 128 500 32 128 In general, three infrastructure nodes are recommended per cluster. Important These sizing recommendations are based on scale tests, which create a large number of objects across the cluster. These tests include reaching some of the cluster maximums. In the case of 250 and 500 node counts on an OpenShift Container Platform 4.9 cluster, these maximums are 10000 namespaces with 61000 pods, 10000 deployments, 181000 secrets, 400 config maps, and so on. 
Prometheus is a highly memory intensive application; the resource usage depends on various factors including the number of nodes, objects, the Prometheus metrics scraping interval, metrics or time series, and the age of the cluster. The disk size also depends on the retention period. You must take these factors into consideration and size them accordingly. These sizing recommendations are only applicable for the Prometheus, Router, and Registry infrastructure components, which are installed during cluster installation. Logging is a day-two operation and is not included in these recommendations. Note In OpenShift Container Platform 4.9, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. This influences the stated sizing recommendations. 1.13. Additional resources OpenShift Container Platform cluster maximums Creating infrastructure machine sets
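To compare the infrastructure node sizing guidance above with the actual load in your cluster, you can spot-check the current CPU and memory consumption of your infrastructure nodes. This is a quick check, not a capacity-planning method, and it assumes that cluster metrics are available; if your oc version does not accept a label selector for top nodes, run oc adm top nodes and read the rows for your infrastructure nodes:
USD oc adm top nodes -l node-role.kubernetes.io/infra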
[ "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1", "oc label machineconfigpool worker custom-kubelet=set-max-pods", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=large-pods", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-max-pods 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-max-pods -o yaml", "spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf", "sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Make File System on /dev/sdb DefaultDependencies=no BindsTo=dev-sdb.device After=dev-sdb.device var.mount [email protected] [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/sdb TimeoutSec=0 [Install] WantedBy=var-lib-containers.mount enabled: true name: [email protected] - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! 
-d /var/lib/etcd/member ExecStart=/usr/sbin/setenforce 0 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/setenforce 1 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: sync-var-lib-etcd-to-etcd.service - contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: restorecon-var-lib-etcd.service", "oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} ... output omitted", "oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created", "oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} [... output omitted ...]", "oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount", "oc replace -f etcd-mc.yml", "I0907 08:43:12.171919 1 defragcontroller.go:198] etcd member \"ip- 10-0-191-150.example.redhat.com\" backend store fragmented: 39.33 %, dbSize: 349138944", "oc -n openshift-etcd get pods -l k8s-app=etcd -o wide", "etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table", "Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. 
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com", "sh-4.4# unset ETCDCTL_ENDPOINTS", "sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag", "Finished defragmenting etcd member[https://localhost:2379]", "sh-4.4# etcdctl endpoint status -w table --cluster", "+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+", "sh-4.4# etcdctl alarm list", "memberID:12345678912345678912 alarm:NOSPACE", "sh-4.4# etcdctl alarm disarm", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" 
tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> 
router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/recommended-host-practices
Chapter 1. Red Hat OpenShift Service on AWS storage overview
Chapter 1. Red Hat OpenShift Service on AWS storage overview Red Hat OpenShift Service on AWS supports Amazon Elastic Block Store (Amazon EBS) and Amazon Elastic File System (Amazon EFS) storage. You can manage container storage for persistent and non-persistent data in a Red Hat OpenShift Service on AWS cluster. 1.1. Glossary of common terms for Red Hat OpenShift Service on AWS storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. fsGroup The fsGroup defines a file system group ID of a pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. You can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition, or directory. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. Persistent storage Pods and containers can require permanent storage for their operation. Red Hat OpenShift Service on AWS uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) Red Hat OpenShift Service on AWS uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your Red Hat OpenShift Service on AWS cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete .
Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the next session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use persistent disk storage. You can use the StatefulSet object in Red Hat OpenShift Service on AWS to manage the deployment and scaling of a set of Pods, and it provides guarantees about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage Red Hat OpenShift Service on AWS supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in a Red Hat OpenShift Service on AWS cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, or arbitrary policies determined by the cluster administrators. 1.2. Storage types Red Hat OpenShift Service on AWS storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. Red Hat OpenShift Service on AWS uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning .
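As a small illustration of how the persistent volume claim, storage class, and dynamic provisioning concepts above fit together, the following sketch requests storage through a claim. The storage class name gp3-csi is only a placeholder for whatever class the cluster offers, and the example is not taken from this guide:
# List the storage classes available for dynamic provisioning
oc get storageclass
# Create a claim that the matching CSI driver can satisfy by provisioning a volume on demand
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: gp3-csi
EOF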
null
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/storage/storage-overview
Chapter 1. Introduction
Chapter 1. Introduction Security Enhanced Linux (SELinux) provides an additional layer of system security. SELinux fundamentally answers the question: "May <subject> do <action> to <object>", for example: "May a web server access files in users' home directories?". The standard access policy based on the user, group, and other permissions, known as Discretionary Access Control (DAC), does not enable system administrators to create comprehensive and fine-grained security policies, such as restricting specific applications to only viewing log files, while allowing other applications to append new data to the log files. SELinux implements Mandatory Access Control (MAC). Every process and system resource has a special security label called an SELinux context . An SELinux context, sometimes referred to as an SELinux label , is an identifier which abstracts away the system-level details and focuses on the security properties of the entity. Not only does this provide a consistent way of referencing objects in the SELinux policy, but it also removes any ambiguity that can be found in other identification methods; for example, a file can have multiple valid path names on a system that makes use of bind mounts. The SELinux policy uses these contexts in a series of rules which define how processes can interact with each other and the various system resources. By default, the policy does not allow any interaction unless a rule explicitly grants access. Note It is important to remember that SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first, which means that no SELinux denial is logged if the traditional DAC rules prevent the access. SELinux contexts have several fields: user, role, type, and security level. The SELinux type information is perhaps the most important when it comes to the SELinux policy, as the most common policy rule which defines the allowed interactions between processes and system resources uses SELinux types and not the full SELinux context. SELinux types usually end with _t . For example, the type name for the web server is httpd_t . The type context for files and directories normally found in /var/www/html/ is httpd_sys_content_t . The type context for files and directories normally found in /tmp and /var/tmp/ is tmp_t . The type context for web server ports is http_port_t . For example, there is a policy rule that permits Apache (the web server process running as httpd_t ) to access files and directories with a context normally found in /var/www/html/ and other web server directories ( httpd_sys_content_t ). There is no allow rule in the policy for files normally found in /tmp and /var/tmp/ , so access is not permitted. With SELinux, even if Apache is compromised, and a malicious script gains access, it is still not able to access the /tmp directory. Figure 1.1. SELinux allows the Apache process running as httpd_t to access the /var/www/html/ directory and it denies the same process access to the /data/mysql/ directory because there is no allow rule for the httpd_t and mysqld_db_t type contexts. On the other hand, the MariaDB process running as mysqld_t is able to access the /data/mysql/ directory and SELinux also correctly denies the process with the mysqld_t type access to the /var/www/html/ directory labeled as httpd_sys_content_t. Additional Resources For more information, see the following documentation: The selinux(8) man page and man pages listed by the apropos selinux command.
Man pages listed by the man -k _selinux command when the selinux-policy-doc package is installed. See Section 11.3.3, "Manual Pages for Services" for more information. The SELinux Coloring Book SELinux Wiki FAQ 1.1. Benefits of running SELinux SELinux provides the following benefits: All processes and files are labeled. SELinux policy rules define how processes interact with files, as well as how processes interact with each other. Access is only allowed if an SELinux policy rule exists that specifically allows it. Fine-grained access control. Stepping beyond traditional UNIX permissions that are controlled at user discretion and based on Linux user and group IDs, SELinux access decisions are based on all available information, such as an SELinux user, role, type, and, optionally, a security level. SELinux policy is administratively-defined and enforced system-wide. Improved mitigation for privilege escalation attacks. Processes run in domains, and are therefore separated from each other. SELinux policy rules define how processes access files and other processes. If a process is compromised, the attacker only has access to the normal functions of that process, and to files the process has been configured to have access to. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories, unless a specific SELinux policy rule was added or configured to allow such access. SELinux can be used to enforce data confidentiality and integrity, as well as protecting processes from untrusted inputs. However, SELinux is not antivirus software, a replacement for passwords, firewalls, or other security systems, or an all-in-one security solution. SELinux is designed to enhance existing security solutions, not replace them. Even when running SELinux, it is important to continue to follow good security practices, such as keeping software up-to-date, using hard-to-guess passwords, and using firewalls.
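To connect these concepts to a running system, you can display the SELinux contexts that the policy evaluates. The commands below are a hedged illustration; the exact labels shown depend on the system:
# Show the current SELinux mode (Enforcing, Permissive, or Disabled)
getenforce
# Show the type context (for example, httpd_sys_content_t) on web content
ls -Z /var/www/html
# Show the domain (for example, httpd_t) of running web server processes
ps -eZ | grep httpd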
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-Security-Enhanced_Linux-Introduction
Chapter 14. Web Servers
Chapter 14. Web Servers A web server is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well. Web servers are also known as HTTP servers, as they use the hypertext transfer protocol ( HTTP ). The web servers available in Red Hat Enterprise Linux 7 are: Apache HTTP Server nginx Important Note that the nginx web server is available only as a Software Collection for Red Hat Enterprise Linux 7. See the Red Hat Software Collections Release Notes for information regarding getting access to nginx, usage of Software Collections, and other information. 14.1. The Apache HTTP Server This section focuses on the Apache HTTP Server 2.4 , httpd , an open source web server developed by the Apache Software Foundation . If you are upgrading from a previous release of Red Hat Enterprise Linux, you will need to update the httpd service configuration accordingly. This section reviews some of the newly added features, outlines important changes between Apache HTTP Server 2.4 and version 2.2, and guides you through the update of older configuration files. 14.1.1. Notable Changes The Apache HTTP Server in Red Hat Enterprise Linux 7 has the following changes compared to Red Hat Enterprise Linux 6: httpd Service Control With the migration away from SysV init scripts, server administrators should switch to using the apachectl and systemctl commands to control the service, in place of the service command. The following examples are specific to the httpd service. The command: is replaced by The systemd unit file for httpd has different behavior from the init script as follows: A graceful restart is used by default when the service is reloaded. A graceful stop is used by default when the service is stopped. The command: is replaced by Private /tmp To enhance system security, the systemd unit file runs the httpd daemon using a private /tmp directory, separate to the system /tmp directory. Configuration Layout Configuration files which load modules are now placed in the /etc/httpd/conf.modules.d/ directory. Packages that provide additional loadable modules for httpd , such as php , will place a file in this directory. An Include directive before the main section of the /etc/httpd/conf/httpd.conf file is used to include files within the /etc/httpd/conf.modules.d/ directory. This means any configuration files within conf.modules.d/ are processed before the main body of httpd.conf . An IncludeOptional directive for files within the /etc/httpd/conf.d/ directory is placed at the end of the httpd.conf file. This means the files within /etc/httpd/conf.d/ are now processed after the main body of httpd.conf . Some additional configuration files are provided by the httpd package itself: /etc/httpd/conf.d/autoindex.conf - This configures mod_autoindex directory indexing. /etc/httpd/conf.d/userdir.conf - This configures access to user directories, for example http://example.com/~username/ ; such access is disabled by default for security reasons. /etc/httpd/conf.d/welcome.conf - As in previous releases, this configures the welcome page displayed for http://localhost/ when no content is present. Default Configuration A minimal httpd.conf file is now provided by default. Many common configuration settings, such as Timeout or KeepAlive , are no longer explicitly configured in the default configuration; hard-coded settings will be used instead, by default. The hard-coded default settings for all configuration directives are specified in the manual. See the section called "Installable Documentation" for more information.
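The command snippets for the service-control changes described above were not carried over into this extract. As a hedged illustration only, the SysV-style invocations are typically replaced by apachectl and systemctl calls such as the following:
# Graceful restart and configuration test, formerly requested through the service command
apachectl graceful
apachectl configtest
# Day-to-day control of the unit through systemd
systemctl start httpd.service
systemctl reload httpd.service
systemctl status httpd.service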
See the section called "Installable Documentation" for more information. Incompatible Syntax Changes If migrating an existing configuration from httpd 2.2 to httpd 2.4 , a number of backwards-incompatible changes to the httpd configuration syntax were made which will require changes. See the following Apache document for more information on upgrading http://httpd.apache.org/docs/2.4/upgrading.html Processing Model In releases of Red Hat Enterprise Linux, different multi-processing models ( MPM ) were made available as different httpd binaries: the forked model, "prefork", as /usr/sbin/httpd , and the thread-based model "worker" as /usr/sbin/httpd.worker . In Red Hat Enterprise Linux 7, only a single httpd binary is used, and three MPMs are available as loadable modules: worker, prefork (default), and event. Edit the configuration file /etc/httpd/conf.modules.d/00-mpm.conf as required, by adding and removing the comment character # so that only one of the three MPM modules is loaded. Packaging Changes The LDAP authentication and authorization modules are now provided in a separate sub-package, mod_ldap . The new module mod_session and associated helper modules are provided in a new sub-package, mod_session . The new modules mod_proxy_html and mod_xml2enc are provided in a new sub-package, mod_proxy_html . These packages are all in the Optional channel. Note Before subscribing to the Optional and Supplementary channels see the Scope of Coverage Details . If you decide to install packages from these channels, follow the steps documented in the article called How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Manager (RHSM)? on the Red Hat Customer Portal. Packaging Filesystem Layout The /var/cache/mod_proxy/ directory is no longer provided; instead, the /var/cache/httpd/ directory is packaged with a proxy and ssl subdirectory. Packaged content provided with httpd has been moved from /var/www/ to /usr/share/httpd/ : /usr/share/httpd/icons/ - The directory containing a set of icons used with directory indices, previously contained in /var/www/icons/ , has moved to /usr/share/httpd/icons/ . Available at http://localhost/icons/ in the default configuration; the location and the availability of the icons is configurable in the /etc/httpd/conf.d/autoindex.conf file. /usr/share/httpd/manual/ - The /var/www/manual/ has moved to /usr/share/httpd/manual/ . This directory, contained in the httpd-manual package, contains the HTML version of the manual for httpd . Available at http://localhost/manual/ if the package is installed, the location and the availability of the manual is configurable in the /etc/httpd/conf.d/manual.conf file. /usr/share/httpd/error/ - The /var/www/error/ has moved to /usr/share/httpd/error/ . Custom multi-language HTTP error pages. Not configured by default, the example configuration file is provided at /usr/share/doc/httpd- VERSION /httpd-multilang-errordoc.conf . Authentication, Authorization and Access Control The configuration directives used to control authentication, authorization and access control have changed significantly. Existing configuration files using the Order , Deny and Allow directives should be adapted to use the new Require syntax. 
See the following Apache document for more information http://httpd.apache.org/docs/2.4/howto/auth.html suexec To improve system security, the suexec binary is no longer installed as if by the root user; instead, it has file system capability bits set which allow a more restrictive set of permissions. In conjunction with this change, the suexec binary no longer uses the /var/log/httpd/suexec.log logfile. Instead, log messages are sent to syslog ; by default these will appear in the /var/log/secure log file. Module Interface Third-party binary modules built against httpd 2.2 are not compatible with httpd 2.4 due to changes to the httpd module interface. Such modules will need to be adjusted as necessary for the httpd 2.4 module interface, and then rebuilt. A detailed list of the API changes in version 2.4 is available here: http://httpd.apache.org/docs/2.4/developer/new_api_2_4.html . The apxs binary used to build modules from source has moved from /usr/sbin/apxs to /usr/bin/apxs . Removed modules List of httpd modules removed in Red Hat Enterprise Linux 7: mod_auth_mysql, mod_auth_pgsql httpd 2.4 provides SQL database authentication support internally in the mod_authn_dbd module. mod_perl mod_perl is not officially supported with httpd 2.4 by upstream. mod_authz_ldap httpd 2.4 provides LDAP support in sub-package mod_ldap using mod_authnz_ldap . 14.1.2. Updating the Configuration To update the configuration files from the Apache HTTP Server version 2.2, take the following steps: Make sure all module names are correct, since they may have changed. Adjust the LoadModule directive for each module that has been renamed. Recompile all third party modules before attempting to load them. This typically means authentication and authorization modules. If you use the mod_userdir module, make sure the UserDir directive indicating a directory name (typically public_html ) is provided. If you use the Apache HTTP Secure Server, see Section 14.1.8, "Enabling the mod_ssl Module" for important information on enabling the Secure Sockets Layer (SSL) protocol. Note that you can check the configuration for possible errors by using the following command: For more information on upgrading the Apache HTTP Server configuration from version 2.2 to 2.4, see http://httpd.apache.org/docs/2.4/upgrading.html . 14.1.3. Running the httpd Service This section describes how to start, stop, restart, and check the current status of the Apache HTTP Server. To be able to use the httpd service, make sure you have the httpd installed. You can do so by using the following command: For more information on the concept of targets and how to manage system services in Red Hat Enterprise Linux in general, see Chapter 10, Managing Services with systemd . 14.1.3.1. Starting the Service To run the httpd service, type the following at a shell prompt as root : If you want the service to start automatically at boot time, use the following command: Note If running the Apache HTTP Server as a secure server, a password may be required after the machine boots if using an encrypted private SSL key. 14.1.3.2. Stopping the Service To stop the running httpd service, type the following at a shell prompt as root : To prevent the service from starting automatically at boot time, type: 14.1.3.3. Restarting the Service There are three different ways to restart a running httpd service: To restart the service completely, enter the following command as root : This stops the running httpd service and immediately starts it again. 
Use this command after installing or removing a dynamically loaded module such as PHP. To only reload the configuration, as root , type: This causes the running httpd service to reload its configuration file. Any requests currently being processed will be interrupted, which may cause a client browser to display an error message or render a partial page. To reload the configuration without affecting active requests, enter the following command as root : This causes the running httpd service to reload its configuration file. Any requests currently being processed will continue to use the old configuration. For more information on how to manage system services in Red Hat Enterprise Linux 7, see Chapter 10, Managing Services with systemd . 14.1.3.4. Verifying the Service Status To verify that the httpd service is running, type the following at a shell prompt: 14.1.4. Editing the Configuration Files When the httpd service is started, by default, it reads the configuration from locations that are listed in Table 14.1, "The httpd service configuration files" . Table 14.1. The httpd service configuration files Path Description /etc/httpd/conf/httpd.conf The main configuration file. /etc/httpd/conf.d/ An auxiliary directory for configuration files that are included in the main configuration file. Although the default configuration should be suitable for most situations, it is a good idea to become at least familiar with some of the more important configuration options. Note that for any changes to take effect, the web server has to be restarted first. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. To check the configuration for possible errors, type the following at a shell prompt: To make the recovery from mistakes easier, it is recommended that you make a copy of the original file before editing it. 14.1.5. Working with Modules Being a modular application, the httpd service is distributed along with a number of Dynamic Shared Objects ( DSO s), which can be dynamically loaded or unloaded at runtime as necessary. On Red Hat Enterprise Linux 7, these modules are located in /usr/lib64/httpd/modules/ . 14.1.5.1. Loading a Module To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.d/ directory. Example 14.1. Loading the mod_ssl DSO Once you are finished, restart the web server to reload the configuration. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. 14.1.5.2. Writing a Module If you intend to create a new DSO module, make sure you have the httpd-devel package installed. To do so, enter the following command as root : This package contains the include files, the header files, and the APache eXtenSion ( apxs ) utility required to compile a module. Once written, you can build the module with the following command: If the build was successful, you should be able to load the module the same way as any other module that is distributed with the Apache HTTP Server. 14.1.6. Setting Up Virtual Hosts The Apache HTTP Server's built in virtual hosting allows the server to provide different information based on which IP address, host name, or port is being requested. 
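The next subsection explains how to set up virtual hosts from the packaged example file. As a preview, a minimal name-based virtual host dropped into /etc/httpd/conf.d/ might look like the following sketch, where the server name, document root, and log paths are placeholders to adapt:
# Write a minimal name-based virtual host (all values are placeholders)
cat <<'EOF' > /etc/httpd/conf.d/example.com.conf
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example.com
    ErrorLog /var/log/httpd/example.com-error_log
    CustomLog /var/log/httpd/example.com-access_log combined
</VirtualHost>
EOF
# Check the syntax, then reload the configuration
apachectl configtest
systemctl reload httpd.service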
To create a name-based virtual host, copy the example configuration file /usr/share/doc/httpd- VERSION /httpd-vhosts.conf into the /etc/httpd/conf.d/ directory, and replace the @@Port@@ and @@ServerRoot@@ placeholder values. Customize the options according to your requirements as shown in Example 14.2, "Example virtual host configuration" . Example 14.2. Example virtual host configuration Note that ServerName must be a valid DNS name assigned to the machine. The <VirtualHost> container is highly customizable, and accepts most of the directives available within the main server configuration. Directives that are not supported within this container include User and Group , which were replaced by SuexecUserGroup . Note If you configure a virtual host to listen on a non-default port, make sure you update the Listen directive in the global settings section of the /etc/httpd/conf/httpd.conf file accordingly. To activate a newly created virtual host, the web server has to be restarted first. See Section 14.1.3.3, "Restarting the Service" for more information on how to restart the httpd service. 14.1.7. Setting Up an SSL Server Secure Sockets Layer ( SSL ) is a cryptographic protocol that allows a server and a client to communicate securely. Along with its extended and improved version called Transport Layer Security ( TLS ), it ensures both privacy and data integrity. The Apache HTTP Server in combination with mod_ssl , a module that uses the OpenSSL toolkit to provide the SSL/TLS support, is commonly referred to as the SSL server . Red Hat Enterprise Linux also supports the use of Mozilla NSS as the TLS implementation. Support for Mozilla NSS is provided by the mod_nss module. Unlike an HTTP connection that can be read and possibly modified by anybody who is able to intercept it, the use of SSL/TLS over HTTP, referred to as HTTPS, prevents any inspection or modification of the transmitted content. This section provides basic information on how to enable this module in the Apache HTTP Server configuration, and guides you through the process of generating private keys and self-signed certificates. 14.1.7.1. An Overview of Certificates and Security Secure communication is based on the use of keys. In conventional or symmetric cryptography , both ends of the transaction have the same key they can use to decode each other's transmissions. On the other hand, in public or asymmetric cryptography , two keys co-exist: a private key that is kept a secret, and a public key that is usually shared with the public. While the data encoded with the public key can only be decoded with the private key, data encoded with the private key can in turn only be decoded with the public key. To provide secure communications using SSL, an SSL server must use a digital certificate signed by a Certificate Authority ( CA ). The certificate lists various attributes of the server (that is, the server host name, the name of the company, its location, etc.), and the signature produced using the CA's private key. This signature ensures that a particular certificate authority has signed the certificate, and that the certificate has not been modified in any way. When a web browser establishes a new SSL connection, it checks the certificate provided by the web server. 
If the certificate does not have a signature from a trusted CA, or if the host name listed in the certificate does not match the host name used to establish the connection, it refuses to communicate with the server and usually presents a user with an appropriate error message. By default, most web browsers are configured to trust a set of widely used certificate authorities. Because of this, an appropriate CA should be chosen when setting up a secure server, so that target users can trust the connection, otherwise they will be presented with an error message, and will have to accept the certificate manually. Since encouraging users to override certificate errors can allow an attacker to intercept the connection, you should use a trusted CA whenever possible. For more information on this, see Table 14.2, "Information about CA lists used by common web browsers" . Table 14.2. Information about CA lists used by common web browsers Web Browser Link Mozilla Firefox Mozilla root CA list . Opera Information on root certificates used by Opera . Internet Explorer Information on root certificates used by Microsoft Windows . Chromium Information on root certificates used by the Chromium project . When setting up an SSL server, you need to generate a certificate request and a private key, and then send the certificate request, proof of the company's identity, and payment to a certificate authority. Once the CA verifies the certificate request and your identity, it will send you a signed certificate you can use with your server. Alternatively, you can create a self-signed certificate that does not contain a CA signature, and thus should be used for testing purposes only. 14.1.8. Enabling the mod_ssl Module If you intend to set up an SSL or HTTPS server using mod_ssl , you cannot have another application or module, such as mod_nss , configured to use the same port. Port 443 is the default port for HTTPS. To set up an SSL server using the mod_ssl module and the OpenSSL toolkit, install the mod_ssl and openssl packages. Enter the following command as root : This will create the mod_ssl configuration file at /etc/httpd/conf.d/ssl.conf , which is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, the use of SSLv2 or SSLv3 is now strongly recommended against. 14.1.8.1. Enabling and Disabling SSL and TLS in mod_ssl To disable and enable specific versions of the SSL and TLS protocol, either do it globally by adding the SSLProtocol directive in the " # SSL Global Context" section of the configuration file and removing it everywhere else, or edit the default entry under " SSL Protocol support" in all "VirtualHost" sections. If you do not specify it in the per-domain VirtualHost section then it will inherit the settings from the global section. To make sure that a protocol version is being disabled, the administrator should either only specify SSLProtocol in the "SSL Global Context" section, or specify it in all per-domain VirtualHost sections.
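A condensed, hedged sketch of the edit that the following procedure walks through step by step (the stock value of the directive may differ on your system):
# Locate the SSLProtocol entries that the procedure below edits
grep -n SSLProtocol /etc/httpd/conf.d/ssl.conf
# After changing each entry to, for example:  SSLProtocol all -SSLv2 -SSLv3
# validate the configuration, restart, and confirm that an SSLv3 handshake is refused
apachectl configtest
systemctl restart httpd.service
openssl s_client -connect localhost:443 -ssl3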
Disable SSLv2 and SSLv3 To disable SSL version 2 and SSL version 3, which implies enabling everything except SSL version 2 and SSL version 3, in all VirtualHost sections, proceed as follows: As root , open the /etc/httpd/conf.d/ssl.conf file and search for all instances of the SSLProtocol directive. By default, the configuration file contains one section that looks as follows: This section is within the VirtualHost section. Edit the SSLProtocol line as follows: Repeat this action for all VirtualHost sections. Save and close the file. Verify that all occurrences of the SSLProtocol directive have been changed as follows: This step is particularly important if you have more than the one default VirtualHost section. Restart the Apache daemon as follows: Note that any sessions will be interrupted. Disable All SSL and TLS Protocols Except TLS 1 and Up To disable all SSL and TLS protocol versions except TLS version 1 and higher, proceed as follows: As root , open the /etc/httpd/conf.d/ssl.conf file and search for all instances of SSLProtocol directive. By default the file contains one section that looks as follows: Edit the SSLProtocol line as follows: Save and close the file. Verify the change as follows: Restart the Apache daemon as follows: Note that any sessions will be interrupted. Testing the Status of SSL and TLS Protocols To check which versions of SSL and TLS are enabled or disabled, make use of the openssl s_client -connect command. The command has the following form: Where port is the port to test and protocol is the protocol version to test for. To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443 to see if SSLv3 is enabled, issue a command as follows: The above output indicates that the handshake failed and therefore no cipher was negotiated. The above output indicates that no failure of the handshake occurred and a set of ciphers was negotiated. The openssl s_client command options are documented in the s_client(1) manual page. For more information on the SSLv3 vulnerability and how to test for it, see the Red Hat Knowledgebase article POODLE: SSLv3 vulnerability (CVE-2014-3566) . 14.1.9. Enabling the mod_nss Module If you intend to set up an HTTPS server using mod_nss , you cannot have the mod_ssl package installed with its default settings as mod_ssl will use port 443 by default, however this is the default HTTPS port. If at all possible, remove the package. To remove mod_ssl , enter the following command as root : Note If mod_ssl is required for other purposes, modify the /etc/httpd/conf.d/ssl.conf file to use a port other than 443 to prevent mod_ssl conflicting with mod_nss when its port to listen on is changed to 443 . Only one module can own a port, therefore mod_nss and mod_ssl can only co-exist at the same time if they use unique ports. For this reason mod_nss by default uses 8443 , but the default port for HTTPS is port 443 . The port is specified by the Listen directive as well as in the VirtualHost name or address. Everything in NSS is associated with a "token". The software token exists in the NSS database but you can also have a physical token containing certificates. With OpenSSL, discrete certificates and private keys are held in PEM files. With NSS, these are stored in a database. Each certificate and key is associated with a token and each token can have a password protecting it. 
This password is optional, but if a password is used then the Apache HTTP server needs a copy of it in order to open the database without user intervention at system start. Configuring mod_nss Install mod_nss as root : This will create the mod_nss configuration file at /etc/httpd/conf.d/nss.conf . The /etc/httpd/conf.d/ directory is included in the main Apache HTTP Server configuration file by default. For the module to be loaded, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . As root , open the /etc/httpd/conf.d/nss.conf file and search for all instances of the Listen directive. Edit the Listen 8443 line as follows: Port 443 is the default port for HTTPS . Edit the default VirtualHost default :8443 line as follows: Edit any other non-default virtual host sections if they exist. Save and close the file. Mozilla NSS stores certificates in a server certificate database indicated by the NSSCertificateDatabase directive in the /etc/httpd/conf.d/nss.conf file. By default the path is set to /etc/httpd/alias , the NSS database created during installation. To view the default NSS database, issue a command as follows: In the above command output, Server-Cert is the default NSSNickname . The -L option lists all the certificates, or displays information about a named certificate, in a certificate database. The -d option specifies the database directory containing the certificate and key database files. See the certutil(1) man page for more command line options. To configure mod_nss to use another database, edit the NSSCertificateDatabase line in the /etc/httpd/conf.d/nss.conf file. The default file has the following lines within the VirtualHost section. In the above command output, alias is the default NSS database directory, /etc/httpd/alias/ . To apply a password to the default NSS certificate database, use the following command as root : Before deploying the HTTPS server, create a new certificate database using a certificate signed by a certificate authority (CA). Example 14.3. Adding a Certificate to the Mozilla NSS database The certutil command is used to add a CA certificate to the NSS database files: The above command adds a CA certificate stored in a PEM-formatted file named certificate.pem . The -d option specifies the NSS database directory containing the certificate and key database files, the -n option sets a name for the certificate, -t CT,, means that the certificate is trusted to be used in TLS clients and servers. The -A option adds an existing certificate to a certificate database. If the database does not exist it will be created. The -a option allows the use of ASCII format for input or output, and the -i option passes the certificate.pem input file to the command. See the certutil(1) man page for more command line options. The NSS database should be password protected to safeguard the private key. Example 14.4. Setting a Password for a Mozilla NSS database The certutil tool can be used set a password for an NSS database as follows: For example, for the default database, issue a command as root as follows: Configure mod_nss to use the NSS internal software token by changing the line with the NSSPassPhraseDialog directive as follows: This is to avoid manual password entry on system start. The software token exists in the NSS database but you can also have a physical token containing your certificates. 
If the SSL Server Certificate contained in the NSS database is an RSA certificate, make certain that the NSSNickname parameter is uncommented and matches the nickname displayed in step 4 above: If the SSL Server Certificate contained in the NSS database is an ECC certificate, make certain that the NSSECCNickname parameter is uncommented and matches the nickname displayed in step 4 above: Make certain that the NSSCertificateDatabase parameter is uncommented and points to the NSS database directory displayed in step 4 or configured in step 5 above: Replace /etc/httpd/alias with the path to the certificate database to be used. Create the /etc/httpd/password.conf file as root : Add a line with the following form: Replace password with the password that was applied to the NSS security databases in step 6 above. Apply the appropriate ownership and permissions to the /etc/httpd/password.conf file: To configure mod_nss to use the NSS software token in /etc/httpd/password.conf , edit /etc/httpd/conf.d/nss.conf as follows: Restart the Apache server for the changes to take effect as described in Section 14.1.3.3, "Restarting the Service" . Important Due to the vulnerability described in POODLE: SSLv3 vulnerability (CVE-2014-3566) , Red Hat recommends disabling SSL and using only TLSv1.1 or TLSv1.2 . Backwards compatibility can be achieved using TLSv1.0 . Many products Red Hat supports have the ability to use SSLv2 or SSLv3 protocols, or enable them by default. However, using SSLv2 or SSLv3 is now strongly discouraged. 14.1.9.1. Enabling and Disabling SSL and TLS in mod_nss To disable and enable specific versions of the SSL and TLS protocol, either do it globally by adding the NSSProtocol directive in the " # SSL Global Context" section of the configuration file and removing it everywhere else, or edit the default entry under " SSL Protocol" in all "VirtualHost" sections. If you do not specify it in the per-domain VirtualHost section, then it will inherit the settings from the global section. To make sure that a protocol version is being disabled, the administrator should either only specify NSSProtocol in the "SSL Global Context" section, or specify it in all per-domain VirtualHost sections. Disable All SSL and TLS Protocols Except TLS 1 and Up in mod_nss To disable all SSL and TLS protocol versions except TLS version 1 and higher, proceed as follows: As root , open the /etc/httpd/conf.d/nss.conf file and search for all instances of the NSSProtocol directive. By default, the configuration file contains one section that looks as follows: This section is within the VirtualHost section. Edit the NSSProtocol line as follows: Repeat this action for all VirtualHost sections. Edit the Listen 8443 line as follows: Edit the default VirtualHost default :8443 line as follows: Edit any other non-default virtual host sections if they exist. Save and close the file. Verify that all occurrences of the NSSProtocol directive have been changed as follows: This step is particularly important if you have more than one VirtualHost section. Restart the Apache daemon as follows: Note that any sessions will be interrupted. Testing the Status of SSL and TLS Protocols in mod_nss To check which versions of SSL and TLS are enabled or disabled in mod_nss , make use of the openssl s_client -connect command. Install the openssl package as root : The openssl s_client -connect command has the following form: Where port is the port to test and protocol is the protocol version to test for. 
To test the SSL server running locally, use localhost as the host name. For example, to test the default port for secure HTTPS connections, port 443, to see if SSLv3 is enabled, issue a command as follows: The first output above, from the SSLv3 test, indicates that the handshake failed and therefore no cipher was negotiated. The second output, from the TLS test, indicates that the handshake succeeded and a set of ciphers was negotiated. The openssl s_client command options are documented in the s_client(1) manual page. For more information on the SSLv3 vulnerability and how to test for it, see the Red Hat Knowledgebase article POODLE: SSLv3 vulnerability (CVE-2014-3566) . 14.1.10. Using an Existing Key and Certificate If you have a previously created key and certificate, you can configure the SSL server to use these files instead of generating new ones. There are only two situations where this is not possible: You are changing the IP address or domain name. Certificates are issued for a particular IP address and domain name pair. If one of these values changes, the certificate becomes invalid. You have a certificate from VeriSign, and you are changing the server software. VeriSign, a widely used certificate authority, issues certificates for a particular software product, IP address, and domain name. Changing the software product renders the certificate invalid. In either of the above cases, you will need to obtain a new certificate. For more information on this topic, see Section 14.1.11, "Generating a New Key and Certificate" . If you want to use an existing key and certificate, move the relevant files to the /etc/pki/tls/private/ and /etc/pki/tls/certs/ directories respectively. You can do so by issuing the following commands as root : Then add the following lines to the /etc/httpd/conf.d/ssl.conf configuration file: To load the updated configuration, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" . Example 14.5. Using a key and certificate from the Red Hat Secure Web Server 14.1.11. Generating a New Key and Certificate In order to generate a new key and certificate pair, the crypto-utils package must be installed on the system. To install it, enter the following command as root : This package provides a set of tools to generate and manage SSL certificates and private keys, and includes genkey , the Red Hat Keypair Generation utility that will guide you through the key generation process. Important If the server already has a valid certificate and you are replacing it with a new one, specify a different serial number. This ensures that client browsers are notified of this change, update to this new certificate as expected, and do not fail to access the page. To create a new certificate with a custom serial number, as root , use the following command instead of genkey : Note If there is already a key file for a particular host name in your system, genkey will refuse to start. In this case, remove the existing file using the following command as root : To run the utility, enter the genkey command as root , followed by the appropriate host name (for example, penguin.example.com ): To complete the key and certificate creation, take the following steps: Review the target locations in which the key and certificate will be stored. Figure 14.1. Running the genkey utility Use the Tab key to select the Next button, and press Enter to proceed to the next screen. Using the up and down arrow keys, select a suitable key size. 
Note that while a larger key increases the security, it also increases the response time of your server. NIST recommends using 2048 bits . See NIST Special Publication 800-131A . Figure 14.2. Selecting the key size Once finished, use the Tab key to select the Next button, and press Enter to initiate the random bits generation process. Depending on the selected key size, this may take some time. Decide whether you want to send a certificate request to a certificate authority. Figure 14.3. Generating a certificate request Use the Tab key to select Yes to compose a certificate request, or No to generate a self-signed certificate. Then press Enter to confirm your choice. Using the Spacebar key, enable ( [*] ) or disable ( [ ] ) the encryption of the private key. Figure 14.4. Encrypting the private key Use the Tab key to select the Next button, and press Enter to proceed to the next screen. If you have enabled the private key encryption, enter an adequate passphrase. Note that for security reasons, it is not displayed as you type, and it must be at least five characters long. Figure 14.5. Entering a passphrase Use the Tab key to select the Next button, and press Enter to proceed to the next screen. Important Entering the correct passphrase is required in order for the server to start. If you lose it, you will need to generate a new key and certificate. Customize the certificate details. Figure 14.6. Specifying certificate information Use the Tab key to select the Next button, and press Enter to finish the key generation. If you have previously enabled the certificate request generation, you will be prompted to send it to a certificate authority. Figure 14.7. Instructions on how to send a certificate request Press Enter to return to a shell prompt. Once generated, add the key and certificate locations to the /etc/httpd/conf.d/ssl.conf configuration file: Finally, restart the httpd service as described in Section 14.1.3.3, "Restarting the Service" , so that the updated configuration is loaded. 14.1.12. Configure the Firewall for HTTP and HTTPS Using the Command Line Red Hat Enterprise Linux does not allow HTTP and HTTPS traffic by default. To enable the system to act as a web server, make use of firewalld 's supported services to enable HTTP and HTTPS traffic to pass through the firewall as required. To enable HTTP using the command line, issue the following command as root : To enable HTTPS using the command line, issue the following command as root : Note that these changes will not persist after the next system start. To make permanent changes to the firewall, repeat the commands adding the --permanent option; a hedged example is shown after the resource listings at the end of this section. 14.1.12.1. Checking Network Access for Incoming HTTP and HTTPS Using the Command Line To check what services the firewall is configured to allow, using the command line, issue the following command as root : In this example taken from a default installation, the firewall is enabled but HTTP and HTTPS have not been allowed to pass through. Once the HTTP and HTTPS firewall services are enabled, the services line will appear similar to the following: For more information on enabling firewall services, or opening and closing ports with firewalld , see the Red Hat Enterprise Linux 7 Security Guide . 14.1.13. Additional Resources To learn more about the Apache HTTP Server, see the following resources. Installed Documentation httpd(8) - The manual page for the httpd service containing the complete list of its command-line options. genkey(1) - The manual page for the genkey utility, provided by the crypto-utils package. 
apachectl(8) - The manual page for the Apache HTTP Server Control Interface. Installable Documentation http://localhost/manual/ - The official documentation for the Apache HTTP Server with the full description of its directives and available modules. Note that in order to access this documentation, you must have the httpd-manual package installed, and the web server must be running. Before accessing the documentation, issue the following commands as root : Online Documentation http://httpd.apache.org/ - The official website for the Apache HTTP Server with documentation on all the directives and default modules. http://www.openssl.org/ - The OpenSSL home page containing further documentation, frequently asked questions, links to the mailing lists, and other useful resources.
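Returning to the firewall configuration described in Section 14.1.12 above, the following is a hedged illustration only of the permanent variants of the firewall commands and the reload that applies them; it uses standard firewalld syntax with the same service names used earlier in this section:
~]# firewall-cmd --permanent --add-service http
~]# firewall-cmd --permanent --add-service https
~]# firewall-cmd --reload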
[ "service httpd graceful", "apachectl graceful", "service httpd configtest", "apachectl configtest", "~]# apachectl configtest Syntax OK", "~]# yum install httpd", "~]# systemctl start httpd.service", "~]# systemctl enable httpd.service Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.", "~]# systemctl stop httpd.service", "~]# systemctl disable httpd.service Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service.", "~]# systemctl restart httpd.service", "~]# systemctl reload httpd.service", "~]# apachectl graceful", "~]# systemctl is-active httpd.service active", "~]# apachectl configtest Syntax OK", "LoadModule ssl_module modules/mod_ssl.so", "~]# yum install httpd-devel", "~]# apxs -i -a -c module_name.c", "<VirtualHost *:80> ServerAdmin [email protected] DocumentRoot \"/www/docs/penguin.example.com\" ServerName penguin.example.com ServerAlias www.penguin.example.com ErrorLog \"/var/log/httpd/dummy-host.example.com-error_log\" CustomLog \"/var/log/httpd/dummy-host.example.com-access_log\" common </VirtualHost>", "~]# yum install mod_ssl openssl", "~]# vi /etc/httpd/conf.d/ssl.conf SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2", "SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2 -SSLv3", "~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf SSLProtocol all -SSLv2 -SSLv3", "~]# systemctl restart httpd", "~]# vi /etc/httpd/conf.d/ssl.conf SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol all -SSLv2", "SSL Protocol support: List the enable protocol levels with which clients will be able to connect. Disable SSLv2 access by default: SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2", "~]# grep SSLProtocol /etc/httpd/conf.d/ssl.conf SSLProtocol -all +TLSv1 +TLSv1.1 +TLSv1.2", "~]# systemctl restart httpd", "openssl s_client -connect hostname : port - protocol", "~]# openssl s_client -connect localhost:443 -ssl3 CONNECTED(00000003) 139809943877536:error:14094410:SSL routines:SSL3_READ_BYTES: sslv3 alert handshake failure :s3_pkt.c:1257:SSL alert number 40 139809943877536:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES: ssl handshake failure :s3_pkt.c:596: output omitted New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 output truncated", "~]USD openssl s_client -connect localhost:443 -tls1_2 CONNECTED(00000003) depth=0 C = --, ST = SomeState, L = SomeCity, O = SomeOrganization, OU = SomeOrganizationalUnit, CN = localhost.localdomain, emailAddress = [email protected] output omitted New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384 Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1.2 output truncated", "~]# yum remove mod_ssl", "~]# yum install mod_nss", "Listen 443", "VirtualHost default :443", "~]# certutil -L -d /etc/httpd/alias Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI cacert CTu,Cu,Cu Server-Cert u,u,u alpha u,pu,u", "Server Certificate Database: The NSS security database directory that holds the certificates and keys. The database consists of 3 files: cert8.db, key3.db and secmod.db. Provide the directory that these files exist. 
NSSCertificateDatabase /etc/httpd/alias", "~]# certutil -W -d /etc/httpd/alias Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password: Password changed successfully.", "certutil -d /etc/httpd/nss-db-directory/ -A -n \" CA_certificate \" -t CT,, -a -i certificate.pem", "certutil -W -d /etc/httpd/ nss-db-directory /", "~]# certutil -W -d /etc/httpd/alias Enter Password or Pin for \"NSS Certificate DB\": Enter a password which will be used to encrypt your keys. The password should be at least 8 characters long, and should contain at least one non-alphabetic character. Enter new password: Re-enter password: Password changed successfully.", "~]# vi /etc/httpd/conf.d/nss.conf NSSPassPhraseDialog file:/etc/httpd/password.conf", "~]# vi /etc/httpd/conf.d/nss.conf NSSNickname Server-Cert", "~]# vi /etc/httpd/conf.d/nss.conf NSSECCNickname Server-Cert", "~]# vi /etc/httpd/conf.d/nss.conf NSSCertificateDatabase /etc/httpd/alias", "~]# vi /etc/httpd/password.conf", "internal: password", "~]# chgrp apache /etc/httpd/password.conf ~]# chmod 640 /etc/httpd/password.conf ~]# ls -l /etc/httpd/password.conf -rw-r-----. 1 root apache 10 Dec 4 17:13 /etc/httpd/password.conf", "~]# vi /etc/httpd/conf.d/nss.conf", "~]# vi /etc/httpd/conf.d/nss.conf SSL Protocol: output omitted Since all protocol ranges are completely inclusive, and no protocol in the middle of a range may be excluded, the entry \"NSSProtocol SSLv3,TLSv1.1\" is identical to the entry \"NSSProtocol SSLv3,TLSv1.0,TLSv1.1\". NSSProtocol SSLv3,TLSv1.0,TLSv1.1", "SSL Protocol: NSSProtocol TLSv1.0,TLSv1.1", "Listen 443", "VirtualHost default :443", "~]# grep NSSProtocol /etc/httpd/conf.d/nss.conf middle of a range may be excluded, the entry \" NSSProtocol SSLv3,TLSv1.1\" is identical to the entry \" NSSProtocol SSLv3,TLSv1.0,TLSv1.1\". 
NSSProtocol TLSv1.0,TLSv1.1", "~]# service httpd restart", "~]# yum install openssl", "openssl s_client -connect hostname : port - protocol", "~]# openssl s_client -connect localhost:443 -ssl3 CONNECTED(00000003) 3077773036:error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number:s3_pkt.c:337: output omitted New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE SSL-Session: Protocol : SSLv3 output truncated", "~]USD openssl s_client -connect localhost:443 -tls1 CONNECTED(00000003) depth=1 C = US, O = example.com, CN = Certificate Shack output omitted New, TLSv1/SSLv3, Cipher is AES128-SHA Server public key is 1024 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 output truncated", "~]# mv key_file.key /etc/pki/tls/private/hostname.key ~]# mv certificate.crt /etc/pki/tls/certs/hostname.crt", "SSLCertificateFile /etc/pki/tls/certs/ hostname .crt SSLCertificateKeyFile /etc/pki/tls/private/ hostname .key", "~]# mv /etc/httpd/conf/httpsd.key /etc/pki/tls/private/penguin.example.com.key ~]# mv /etc/httpd/conf/httpsd.crt /etc/pki/tls/certs/penguin.example.com.crt", "~]# yum install crypto-utils", "~]# openssl req -x509 -new -set_serial number -key hostname.key -out hostname.crt", "~]# rm /etc/pki/tls/private/hostname.key", "~]# genkey hostname", "SSLCertificateFile /etc/pki/tls/certs/ hostname .crt SSLCertificateKeyFile /etc/pki/tls/private/ hostname .key", "~]# firewall-cmd --add-service http success", "~]# firewall-cmd --add-service https success", "~]# firewall-cmd --list-all public (default, active) interfaces: em1 sources: services: dhcpv6-client ssh output truncated", "services: dhcpv6-client http https ssh", "~] yum install httpd-manual ~] apachectl graceful" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-Web_Servers
Chapter 10. Modifying printer settings
Chapter 10. Modifying printer settings In GNOME, you can modify printer settings using the Settings application. Prerequisites You have started Settings for setting up printing by following the procedure Accessing printer settings in GNOME 10.1. Displaying and modifying printer's details To maintain a configuration of a printer, use the Settings application: Procedure Click the settings (⚙) button on the right to display a settings menu for the selected printer: Click Printer Details to display and modify the selected printer's settings: In this menu, you can select the following actions: Search for Drivers GNOME Control Center communicates with PackageKit, which searches for a suitable driver in available repositories. Select from Database This option enables you to select a suitable driver from databases that have already been installed on the system. Install PPD File This option enables you to select from a list of available postscript printer description (PPD) files that can be used as a driver for your printer. 10.2. Setting the default printer You can set the selected printer as the default printer. Procedure Click the settings (⚙) button on the right to display a settings menu for the selected printer: Click Use Printer by Default to set the selected printer as the default printer: 10.3. Setting printing options Procedure Click the settings (⚙) button on the right to display a settings menu for the selected printer: Click Printing Options . 10.4. Removing a printer You can remove a printer using the Settings application. Procedure Click the settings (⚙) button on the right to display a settings menu for the selected printer: Click Remove Printer to remove the selected printer:
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/administering_the_system_using_the_gnome_desktop_environment/assembly_modifying-printer-settings_administering-the-system-using-the-gnome-desktop-environment
14.13.5. Configuring Virtual CPU Affinity
14.13.5. Configuring Virtual CPU Affinity To configure the affinity of virtual CPUs with physical CPUs: The domain-id parameter is the guest virtual machine's ID number or name. The vcpu parameter denotes the number of virtualized CPUs allocated to the guest virtual machine. The vcpu parameter must be provided. The cpulist parameter is a list of physical CPU identifier numbers separated by commas. The cpulist parameter determines which physical CPUs the VCPUs can run on. Additional parameters such as --config affect the next boot, whereas --live affects the running domain, and --current affects the current domain.
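As an illustrative sketch only (the guest name guest1 and the CPU numbers are hypothetical and not taken from this guide), pinning virtual CPU 0 of a running guest to physical CPUs 0 and 1 for both the live and persistent configuration could look like this:
# virsh vcpupin guest1 0 0,1 --live --config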
[ "virsh vcpupin domain-id vcpu cpulist" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-displaying_per_guest_virtual_machine_information-configuring_virtual_cpu_affinity
10.5.9. MPM Specific Server-Pool Directives
10.5.9. MPM Specific Server-Pool Directives As explained in Section 10.2.1.2, "Server-Pool Size Regulation" , the responsibility for managing characteristics of the server-pool falls to a module group called MPMs under Apache HTTP Server 2.0. The characteristics of the server-pool differ depending upon which MPM is used. For this reason, an IfModule container is necessary to define the server-pool for the MPM in use. By default, Apache HTTP Server 2.0 defines the server-pool for both the prefork and worker MPMs. The following sections list directives found within the MPM-specific server-pool containers. 10.5.9.1. StartServers The StartServers directive sets how many server processes are created upon startup. Since the Web server dynamically kills and creates server processes based on traffic load, it is not necessary to change this parameter. The Web server is set to start 8 server processes at startup for the prefork MPM and 2 for the worker MPM.
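As a hedged illustration that simply restates the defaults described above (these are not tuning recommendations), the corresponding server-pool containers in httpd.conf would resemble the following:
<IfModule prefork.c>
StartServers 8
</IfModule>
<IfModule worker.c>
StartServers 2
</IfModule>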
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-mpm-containers
Chapter 5. Serving and chatting with your new model
Chapter 5. Serving and chatting with your new model You must deploy the model to your machine by serving the model. This deploys the model and makes the model available for interacting and chatting. 5.1. Serving the new model To interact with your new model, you must activate the model in a machine through serving. The ilab model serve command starts a vLLM server that allows you to chat with the model. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You customized your taxonomy tree, ran synthetic data generation, trained, and evaluated your new model. You need root user access on your machine. Procedure You can serve the model by running the following command: $ ilab model serve --model-path <path-to-best-performed-checkpoint> where: <path-to-best-performed-checkpoint> Specify the full path to the checkpoint you built after training. Your new model is the best performed checkpoint with its file path displayed after training. Example command: $ ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/ Important Ensure you have a slash / at the end of your model path. Example output of the ilab model serve command $ ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server. 5.2. Chatting with the new model You can chat with your model that has been trained with your data. Prerequisites You installed RHEL AI with the bootable container image. You initialized InstructLab. You customized your taxonomy tree, ran synthetic data generation, trained, and evaluated your new model. You served your checkpoint model. You need root user access on your machine. Procedure Since you are serving the model in one terminal window, you must open a new terminal window to chat with the model. To chat with your new model, run the following command: $ ilab model chat --model <path-to-best-performed-checkpoint-file> where: <path-to-best-performed-checkpoint-file> Specify the new model checkpoint file you built after training. Your new model is the best performed checkpoint with its file path displayed after training. Example command: $ ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945 Example output of the InstructLab chatbot $ ilab model chat ╭────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default] Type exit to leave the chatbot.
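As a rough sketch for verifying the served model from another terminal, you could query the server's model listing; note that the /v1/models path assumes the vLLM server exposes an OpenAI-compatible API at the address shown in the example output above, which is an assumption and not taken from this guide:
$ curl http://127.0.0.1:8000/v1/models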
[ "ilab model serve --model-path <path-to-best-performed-checkpoint>", "ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/", "ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.", "ilab model chat --model <path-to-best-performed-checkpoint-file>", "ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945", "ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/creating_a_custom_llm_using_rhel_ai/serving_chatting_new_model
Chapter 3. Converting virtualization hosts to hyperconverged hosts
Chapter 3. Converting virtualization hosts to hyperconverged hosts Follow this process to convert virtualization hosts to hyperconverged hosts. This lets you use and manage the host's local storage as Red Hat Gluster Storage volumes. Log in to the Administration Portal. Move all hosts except the self-hosted engine node into maintenance mode. Click Compute Hosts . For each host except the self-hosted engine node: Select the host to move to maintenance. Click Management Maintenance and click OK . Enable the gluster service in the cluster. Click Compute Clusters and select the cluster. The Edit Cluster window appears. Check the Enable Gluster service checkbox. Click OK . Reinstall all hosts except the self-hosted engine node. Click Compute Hosts . For each host except the self-hosted engine node: Select the host to reinstall. Click Management Reinstall and click OK . Wait for the reinstall to complete and for the hosts to become active again. Move the self-hosted engine node into maintenance mode. Select the self-hosted engine node. Click Management Maintenance and click OK . The hosted engine migrates to one of the freshly installed hosts. Reinstall the self-hosted engine node. Select the self-hosted engine node. Click Management Reinstall and click OK . Wait for the reinstall to complete and for the host to become active again. Your hosts are now able to use and manage storage as Red Hat Gluster Storage volumes.
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/converting_a_virtualization_cluster_to_a_hyperconverged_cluster/task-convert-vhost-hchost
Chapter 144. Validator
Chapter 144. Validator Only producer is supported The Validation component performs XML validation of the message body using the JAXP Validation API and based on any of the supported XML schema languages, which defaults to XML Schema Note that the component also supports the following useful schema languages: RelaxNG Compact Syntax RelaxNG XML Syntax The MSV component also supports RelaxNG XML Syntax. 144.1. Dependencies When using validator with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-validator-starter</artifactId> </dependency> 144.2. URI format Where someLocalOrRemoteResource is some URL to a local resource on the classpath or a full URL to a remote resource or resource on the file system which contains the XSD to validate against. For example: msv:org/foo/bar.xsd msv:file:../foo/bar.xsd msv:http://acme.com/cheese.xsd validator:com/mypackage/myschema.xsd The Validation component is provided directly in the camel-core. 144.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 144.3.1. Configuring Component Options At the component level, you set general and shared configurations that are, then, inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 144.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded urls, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 144.4. Component Options The Validator component supports 3 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean resourceResolverFactory (advanced) To use a custom LSResourceResolver which depends on a dynamic endpoint resource URI. ValidatorResourceResolverFactory 144.5. Endpoint Options The Validator endpoint is configured using URI syntax: with the following path and query parameters: 144.5.1. Path Parameters (1 parameter) Name Description Default Type resourceUri (producer) Required URL to a local resource on the classpath, or a reference to lookup a bean in the Registry, or a full URL to a remote resource or resource on the file system which contains the XSD to validate against. String 144.5.2. Query Parameters (10 parameters) Name Description Default Type failOnNullBody (producer) Whether to fail if no body exists. true boolean failOnNullHeader (producer) Whether to fail if no header exists when validating against a header. true boolean headerName (producer) To validate against a header instead of the message body. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean errorHandler (advanced) To use a custom org.apache.camel.processor.validation.ValidatorErrorHandler. The default error handler captures the errors and throws an exception. ValidatorErrorHandler resourceResolver (advanced) To use a custom LSResourceResolver. Do not use together with resourceResolverFactory. LSResourceResolver resourceResolverFactory (advanced) To use a custom LSResourceResolver which depends on a dynamic endpoint resource URI. The default resource resolver factory returns a resource resolver which can read files from the class path and file system. Do not use together with resourceResolver. ValidatorResourceResolverFactory schemaFactory (advanced) To use a custom javax.xml.validation.SchemaFactory. SchemaFactory schemaLanguage (advanced) Configures the W3C XML Schema Namespace URI. http://www.w3.org/2001/XMLSchema String useSharedSchema (advanced) Whether the Schema instance should be shared or not. This option is introduced to work around a JDK 1.6.x bug. Xerces should not have this issue. true boolean 144.6. Example The following example shows how to configure a route from endpoint direct:start which then goes to one of two endpoints, either mock:valid or mock:invalid based on whether or not the XML matches the given schema (which is supplied on the classpath). 144.7. 
Advanced: JMX method clearCachedSchema You can force the cached schema in the validator endpoint to be cleared and reread with the next process call by using the JMX operation clearCachedSchema . You can also use this method to programmatically clear the cache. This method is available on the ValidatorEndpoint class. 144.8. Spring Boot Auto-Configuration The component supports 4 options, which are listed below. Name Description Default Type camel.component.validator.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.validator.enabled Whether to enable auto configuration of the validator component. This is enabled by default. Boolean camel.component.validator.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.validator.resource-resolver-factory To use a custom LSResourceResolver which depends on a dynamic endpoint resource URI. The option is a org.apache.camel.component.validator.ValidatorResourceResolverFactory type. ValidatorResourceResolverFactory
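As a minimal, hedged sketch of how the auto-configuration options listed above might be set in a Spring Boot application.properties file (the values are chosen purely for illustration):
camel.component.validator.enabled=true
camel.component.validator.lazy-start-producer=false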
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-validator-starter</artifactId> </dependency>", "validator:someLocalOrRemoteResource", "validator:resourceUri" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-validator-component-starter
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.372/making-open-source-more-inclusive
Appendix A. The Device Mapper
Appendix A. The Device Mapper The Device Mapper is a kernel driver that provides a framework for volume management. It provides a generic way of creating mapped devices, which may be used as logical volumes. It does not specifically know about volume groups or metadata formats. The Device Mapper provides the foundation for a number of higher-level technologies. In addition to LVM, Device-Mapper multipath and the dmraid command use the Device Mapper. The application interface to the Device Mapper is the ioctl system call. The user interface is the dmsetup command. LVM logical volumes are activated using the Device Mapper. Each logical volume is translated into a mapped device. Each segment translates into a line in the mapping table that describes the device. The Device Mapper supports a variety of mapping targets, including linear mapping, striped mapping, and error mapping. For example, two disks may be concatenated into one logical volume with a pair of linear mappings, one for each disk. When LVM creates a volume, it creates an underlying device-mapper device that can be queried with the dmsetup command. For information about the format of devices in a mapping table, see Section A.1, "Device Table Mappings" . For information about using the dmsetup command to query a device, see Section A.2, "The dmsetup Command" . A.1. Device Table Mappings A mapped device is defined by a table that specifies how to map each range of logical sectors of the device using a supported Device Table mapping. The table for a mapped device is constructed from a list of lines of the form: In the first line of a Device Mapper table, the start parameter must equal 0. The start + length parameters on one line must equal the start on the next line. Which mapping parameters are specified in a line of the mapping table depends on which mapping type is specified on the line. Sizes in the Device Mapper are always specified in sectors (512 bytes). When a device is specified as a mapping parameter in the Device Mapper, it can be referenced by the device name in the filesystem (for example, /dev/hda ) or by the major and minor numbers in the format major : minor . The major:minor format is preferred because it avoids pathname lookups. The following shows a sample mapping table for a device. In this table there are four linear targets: The first 2 parameters of each line are the segment starting block and the length of the segment. The next keyword is the mapping target, which in all of the cases in this example is linear . The rest of the line consists of the parameters for a linear target. The following subsections describe the format of the following mappings: linear striped mirror snapshot and snapshot-origin error zero multipath crypt A.1.1. The linear Mapping Target A linear mapping target maps a continuous range of blocks onto another block device. The format of a linear target is as follows: start starting block in virtual device length length of this segment device block device, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor offset starting offset of the mapping on the device The following example shows a linear target with a starting block in the virtual device of 0, a segment length of 1638400, a major:minor number pair of 8:2, and a starting offset for the device of 41146992. The following example shows a linear target with the device parameter specified as the device /dev/hda . A.1.2. 
The striped Mapping Target The striped mapping target supports striping across physical devices. It takes as arguments the number of stripes and the striping chunk size followed by a list of pairs of device name and sector. The format of a striped target is as follows: There is one set of device and offset parameters for each stripe. start starting block in virtual device length length of this segment #stripes number of stripes for the virtual device chunk_size number of sectors written to each stripe before switching to the next; must be a power of 2 at least as big as the kernel page size device block device, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor . offset starting offset of the mapping on the device The following example shows a striped target with three stripes and a chunk size of 128: 0 starting block in virtual device 73728 length of this segment striped 3 128 stripe across three devices with chunk size of 128 blocks 8:9 major:minor numbers of first device 384 starting offset of the mapping on the first device 8:8 major:minor numbers of second device 384 starting offset of the mapping on the second device 8:7 major:minor numbers of third device 9789824 starting offset of the mapping on the third device The following example shows a striped target for 2 stripes with 256 KiB chunks, with the device parameters specified by the device names in the file system rather than by the major and minor numbers. A.1.3. The mirror Mapping Target The mirror mapping target supports the mapping of a mirrored logical device. The format of a mirrored target is as follows: start starting block in virtual device length length of this segment log_type The possible log types and their arguments are as follows: core The mirror is local and the mirror log is kept in core memory. This log type takes 1 - 3 arguments: regionsize [[ no ] sync ] [ block_on_error ] disk The mirror is local and the mirror log is kept on disk. This log type takes 2 - 4 arguments: logdevice regionsize [[ no ] sync ] [ block_on_error ] clustered_core The mirror is clustered and the mirror log is kept in core memory. This log type takes 2 - 4 arguments: regionsize UUID [[ no ] sync ] [ block_on_error ] clustered_disk The mirror is clustered and the mirror log is kept on disk. This log type takes 3 - 5 arguments: logdevice regionsize UUID [[ no ] sync ] [ block_on_error ] LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. The regionsize argument specifies the size of these regions. In a clustered environment, the UUID argument is a unique identifier associated with the mirror log device so that the log state can be maintained throughout the cluster. The optional [no]sync argument can be used to specify the mirror as "in-sync" or "out-of-sync". The block_on_error argument is used to tell the mirror to respond to errors rather than ignoring them. #log_args number of log arguments that will be specified in the mapping logargs the log arguments for the mirror; the number of log arguments provided is specified by the #log-args parameter and the valid log arguments are determined by the log_type parameter. #devs the number of legs in the mirror; a device and an offset is specified for each leg device block device for each mirror leg, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor . 
A block device and offset is specified for each mirror leg, as indicated by the #devs parameter. offset starting offset of the mapping on the device. A block device and offset is specified for each mirror leg, as indicated by the #devs parameter. The following example shows a mirror mapping target for a clustered mirror with a mirror log kept on disk. 0 starting block in virtual device 52428800 length of this segment mirror clustered_disk mirror target with a log type specifying that mirror is clustered and the mirror log is maintained on disk 4 4 mirror log arguments will follow 253:2 major:minor numbers of log device 1024 region size the mirror log uses to keep track of what is in sync UUID UUID of mirror log device to maintain log information throughout a cluster block_on_error mirror should respond to errors 3 number of legs in mirror 253:3 0 253:4 0 253:5 0 major:minor numbers and offset for devices constituting each leg of mirror A.1.4. The snapshot and snapshot-origin Mapping Targets When you create the first LVM snapshot of a volume, four Device Mapper devices are used: A device with a linear mapping containing the original mapping table of the source volume. A device with a linear mapping used as the copy-on-write (COW) device for the source volume; for each write, the original data is saved in the COW device of each snapshot to keep its visible content unchanged (until the COW device fills up). A device with a snapshot mapping combining #1 and #2, which is the visible snapshot volume. The "original" volume (which uses the device number used by the original source volume), whose table is replaced by a "snapshot-origin" mapping from device #1. A fixed naming scheme is used to create these devices. For example, you might use the following commands to create an LVM volume named base and a snapshot volume named snap based on that volume. This yields four devices, which you can view with the following commands: The format for the snapshot-origin target is as follows: start starting block in virtual device length length of this segment origin base volume of snapshot The snapshot-origin will normally have one or more snapshots based on it. Reads will be mapped directly to the backing device. For each write, the original data will be saved in the COW device of each snapshot to keep its visible content unchanged until the COW device fills up. The format for the snapshot target is as follows: start starting block in virtual device length length of this segment origin base volume of snapshot COW-device device on which changed chunks of data are stored P|N P (Persistent) or N (Not persistent); indicates whether the snapshot will survive after reboot. For transient snapshots (N) less metadata must be saved on disk; they can be kept in memory by the kernel. chunksize size in sectors of changed chunks of data that will be stored on the COW device The following example shows a snapshot-origin target with an origin device of 254:11. The following example shows a snapshot target with an origin device of 254:11 and a COW device of 254:12. This snapshot device is persistent across reboots and the chunk size for the data stored on the COW device is 16 sectors. A.1.5. The error Mapping Target With an error mapping target, any I/O operation to the mapped sector fails. An error mapping target can be used for testing. 
To test how a device behaves in failure, you can create a device mapping with a bad sector in the middle of a device, or you can swap out the leg of a mirror and replace the leg with an error target. An error target can be used in place of a failing device, as a way of avoiding timeouts and retries on the actual device. It can serve as an intermediate target while you rearrange LVM metadata during failures. The error mapping target takes no additional parameters besides the start and length parameters. The following example shows an error target. A.1.6. The zero Mapping Target The zero mapping target is a block device equivalent of /dev/zero . A read operation to this mapping returns blocks of zeros. Data written to this mapping is discarded, but the write succeeds. The zero mapping target takes no additional parameters besides the start and length parameters. The following example shows a zero target for a 16Tb device. A.1.7. The multipath Mapping Target The multipath mapping target supports the mapping of a multipathed device. The format for the multipath target is as follows: There is one set of pathgroupargs parameters for each path group. start starting block in virtual device length length of this segment #features The number of multipath features, followed by those features. If this parameter is zero, then there is no feature parameter and the next device mapping parameter is #handlerargs . Currently there is one supported feature that can be set with the features attribute in the multipath.conf file, queue_if_no_path . This indicates that this multipathed device is currently set to queue I/O operations if there is no path available. In the following example, the no_path_retry attribute in the multipath.conf file has been set to queue I/O operations only until all paths have been marked as failed after a set number of attempts have been made to use the paths. In this case, the mapping appears as follows until all the path checkers have failed the specified number of checks. After all the path checkers have failed the specified number of checks, the mapping would appear as follows. #handlerargs The number of hardware handler arguments, followed by those arguments. A hardware handler specifies a module that will be used to perform hardware-specific actions when switching path groups or handling I/O errors. If this is set to 0, then the next parameter is #pathgroups . #pathgroups The number of path groups. A path group is the set of paths over which a multipathed device will load balance. There is one set of pathgroupargs parameters for each path group. pathgroup The next path group to try. pathgroupsargs Each path group consists of the following arguments: There is one set of path arguments for each path in the path group. pathselector Specifies the algorithm in use to determine what path in this path group to use for the I/O operation. #selectorargs The number of path selector arguments which follow this argument in the multipath mapping. Currently, the value of this argument is always 0. #paths The number of paths in this path group. #pathargs The number of path arguments specified for each path in this group. Currently this number is always 1, the ioreqs argument. device The block device number of the path, referenced by the major and minor numbers in the format major : minor ioreqs The number of I/O requests to route to this path before switching to the next path in the current group. Figure A.1, "Multipath Mapping Target" shows the format of a multipath target with two path groups. Figure A.1. 
Multipath Mapping Target The following example shows a pure failover target definition for the same multipath device. In this target there are four path groups, with only one open path per path group so that the multipathed device will use only one path at a time. The following example shows a full spread (multibus) target definition for the same multipathed device. In this target there is only one path group, which includes all of the paths. In this setup, multipath spreads the load evenly to all of the paths. For further information about multipathing, see the DM Multipath manual. A.1.8. The crypt Mapping Target The crypt target encrypts the data passing through the specified device. It uses the kernel Crypto API. The format for the crypt target is as follows: start starting block in virtual device length length of this segment cipher Cipher consists of cipher[-chainmode]-ivmode[:iv options] . cipher Ciphers available are listed in /proc/crypto (for example, aes ). chainmode Always use cbc . Do not use ecb ; it does not use an initial vector (IV). ivmode[:iv options] IV is an initial vector used to vary the encryption. The IV mode is plain or essiv:hash . An ivmode of -plain uses the sector number (plus IV offset) as the IV. An ivmode of -essiv is an enhancement avoiding a watermark weakness. key Encryption key, supplied in hex IV-offset Initial Vector (IV) offset device block device, referenced by the device name in the filesystem or by the major and minor numbers in the format major : minor offset starting offset of the mapping on the device The following is an example of a crypt target.
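As an illustrative sketch only (the device name zero_test and the table values are hypothetical), a mapped device using one of the targets described above can be created and then inspected with the dmsetup command:
# dmsetup create zero_test --table "0 65536 zero"
# dmsetup table zero_test
0 65536 zero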
[ "start length mapping [ mapping_parameters... ]", "0 35258368 linear 8:48 65920 35258368 35258368 linear 8:32 65920 70516736 17694720 linear 8:16 17694976 88211456 17694720 linear 8:16 256", "start length linear device offset", "0 16384000 linear 8:2 41156992", "0 20971520 linear /dev/hda 384", "start length striped #stripes chunk_size device1 offset1 ... deviceN offsetN", "0 73728 striped 3 128 8:9 384 8:8 384 8:7 9789824", "0 65536 striped 2 512 /dev/hda 0 /dev/hdb 0", "start length mirror log_type #logargs logarg1 ... logargN #devs device1 offset1 ... deviceN offsetN", "0 52428800 mirror clustered_disk 4 253:2 1024 UUID block_on_error 3 253:3 0 253:4 0 253:5 0", "lvcreate -L 1G -n base volumeGroup lvcreate -L 100M --snapshot -n snap volumeGroup/base", "dmsetup table|grep volumeGroup volumeGroup-base-real: 0 2097152 linear 8:19 384 volumeGroup-snap-cow: 0 204800 linear 8:19 2097536 volumeGroup-snap: 0 2097152 snapshot 254:11 254:12 P 16 volumeGroup-base: 0 2097152 snapshot-origin 254:11 ls -lL /dev/mapper/volumeGroup-* brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap-cow brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base", "start length snapshot-origin origin", "start length snapshot origin COW-device P|N chunksize", "0 2097152 snapshot-origin 254:11", "0 2097152 snapshot 254:11 254:12 P 16", "0 65536 error", "0 65536 zero", "start length multipath #features [feature1 ... featureN] #handlerargs [handlerarg1 ... handlerargN] #pathgroups pathgroup pathgroupargs1 ... pathgroupargsN", "0 71014400 multipath 1 queue_if_no_path 0 2 1 round-robin 0 2 1 66:128 1000 65:64 1000 round-robin 0 2 1 8:0 1000 67:192 1000", "0 71014400 multipath 0 0 2 1 round-robin 0 2 1 66:128 1000 65:64 1000 round-robin 0 2 1 8:0 1000 67:192 1000", "pathselector #selectorargs #paths #pathargs device1 ioreqs1 ... deviceN ioreqsN", "0 71014400 multipath 0 0 4 1 round-robin 0 1 1 66:112 1000 round-robin 0 1 1 67:176 1000 round-robin 0 1 1 68:240 1000 round-robin 0 1 1 65:48 1000", "0 71014400 multipath 0 0 1 1 round-robin 0 4 1 66:112 1000 67:176 1000 68:240 1000 65:48 1000", "start length crypt cipher key IV-offset device offset", "0 2097152 crypt aes-plain 0123456789abcdef0123456789abcdef 0 /dev/hda 0" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/device_mapper
8.6 Release Notes
8.6 Release Notes Red Hat Enterprise Linux 8.6 Release Notes for Red Hat Enterprise Linux 8.6 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/index
Chapter 9. Data Centers
Chapter 9. Data Centers 9.1. Data Center Elements The datacenters collection provides information about the data centers in a Red Hat Virtualization environment. An API user accesses this information through the rel="datacenters" link obtained from the entry point URI. The following table shows specific elements contained in a data center resource representation. Table 9.1. Data center elements Element Type Description Properties name string A plain text, human-readable name for the data center. The name is unique across all data center resources. description string A plain text, human-readable description of the data center link rel="storagedomains" relationship A link to the sub-collection for storage domains attached to this data center. link rel="clusters" relationship A link to the sub-collection for clusters attached to this data center. link rel="networks" relationship A link to the sub-collection for networks available to this data center. link rel="permissions" relationship A link to the sub-collection for data center permissions. link rel="quotas" relationship A link to the sub-collection for quotas associated with this data center. local Boolean: true or false Specifies whether the data center is a local data center, such as created in all-in-one instances. storage_format enumerated Describes the storage format version for the data center. A list of enumerated values are available in capabilities . version major= minor= complex The compatibility level of the data center. supported_versions complex A list of possible version levels for the data center, including version major= minor= . mac_pool string The MAC address pool associated with the data center. If no MAC address pool is specified the default MAC address pool is used. status see below The data center status. The status contains one of the following enumerated values: uninitialized , up , maintenance , not_operational , problematic and contend . These states are listed in data_center_states under capabilities .
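As a hedged example only (the host name, credentials, and certificate handling are placeholders, and the exact entry point path depends on how the REST API is exposed in your environment), the datacenters collection could be retrieved with an HTTP GET request such as:
curl -X GET -H "Accept: application/xml" -u "admin@internal:password" --cacert ca.crt "https://rhvm.example.com/ovirt-engine/api/datacenters"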
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/chap-data_centers
Chapter 1. The Ceph Object Gateway
Chapter 1. The Ceph Object Gateway Ceph Object Gateway, also known as RADOS Gateway (RGW), is an object storage interface built on top of the librados library to provide applications with a RESTful gateway to Ceph storage clusters. Ceph Object Gateway supports three interfaces: S3-compatibility: Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API. You can run S3 select to accelerate throughput. Users can run S3 select queries directly without a mediator. There are two S3 select workflows, one for CSV and one for Apache Parquet (Parquet), that provide S3 select operations with CSV and Parquet objects. For more details about these S3 select operations, see section S3 select operations in the Red Hat Ceph Storage Developer Guide . Swift-compatibility: Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API. The Ceph Object Gateway is a service interacting with a Ceph storage cluster. Since it provides interfaces compatible with OpenStack Swift and Amazon S3, the Ceph Object Gateway has its own user management system. Ceph Object Gateway can store data in the same Ceph storage cluster used to store data from Ceph block device clients; however, it would involve separate pools and likely a different CRUSH hierarchy. The S3 and Swift APIs share a common namespace, so you can write data with one API and retrieve it with the other. Administrative API: Provides an administrative interface for managing the Ceph Object Gateways. Administrative API requests are done on a URI that starts with the admin resource end point. Authorization for the administrative API mimics the S3 authorization convention. Some operations require the user to have special administrative capabilities. The response type can be either XML or JSON by specifying the format option in the request, but defaults to the JSON format. Introduction to WORM Write-Once-Read-Many (WORM) is a secured data storage model that is used to guarantee data protection and data retrieval even in cases where objects and buckets are compromised in production zones. In Red Hat Ceph Storage, data security is achieved through the use of S3 Object Lock with read-only capability that is used to store objects and buckets using a Write-Once-Read-Many (WORM) model, preventing them from being deleted or overwritten. They cannot be deleted even by the Red Hat Ceph Storage administrator. S3 Object Lock provides two retention modes: GOVERNANCE COMPLIANCE These retention modes apply different levels of protection to your objects. You can apply either retention mode to any object version that is protected by Object Lock. In GOVERNANCE, users cannot overwrite or delete an object version or alter its lock settings unless they have special permissions. With GOVERNANCE mode, you can protect objects against deletion by most users, although you can still grant some users permission to alter the retention settings or delete the object if necessary. In COMPLIANCE mode, a protected object version cannot be overwritten or deleted by any user. When an object is locked in COMPLIANCE mode, its retention mode cannot be changed or shortened. Additional Resources See Enabling object lock for S3 in the Red Hat Ceph Storage Object Gateway Guide for more details.
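The following is a rough sketch using the generic AWS CLI against a Ceph Object Gateway S3 endpoint (the endpoint URL, bucket name, and retention period shown here are assumptions for illustration; see the linked section for the supported procedure). It creates a bucket with Object Lock enabled and applies a default COMPLIANCE retention rule:
aws --endpoint-url http://rgw.example.com:8080 s3api create-bucket --bucket worm-bucket --object-lock-enabled-for-bucket
aws --endpoint-url http://rgw.example.com:8080 s3api put-object-lock-configuration --bucket worm-bucket --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":30}}}'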
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/the-ceph-object-gateway
8.151. nmap
8.151. nmap 8.151.1. RHBA-2014:0683 - nmap bug fix update Updated nmap packages that fix one bug are now available for Red Hat Enterprise Linux 6. The nmap packages provide a network exploration utility and a security scanner. Bug Fix BZ# 1000770 Previously, the ncat utility printed debug messages even when verbose mode was not enabled. As a consequence, after connecting through an HTTP proxy, a debug message was displayed together with the received data, which could interfere with the automated processing of standard output. With this update, ncat prints debug messages only in verbose mode as expected. Users of nmap are advised to upgrade to these updated packages, which fix this bug.
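As a hedged illustration of the behavior described above (the proxy and target host names are placeholders), connecting through an HTTP proxy without and then with the -v (verbose) option shows that debug messages now appear only in the verbose case:
ncat --proxy proxy.example.com:8080 --proxy-type http www.example.com 80
ncat -v --proxy proxy.example.com:8080 --proxy-type http www.example.com 80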
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/nmap
Chapter 80. KafkaConnect schema reference
Chapter 80. KafkaConnect schema reference Property Property type Description spec KafkaConnectSpec The specification of the Kafka Connect cluster. status KafkaConnectStatus The status of the Kafka Connect cluster.
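As a minimal, hedged sketch (the API version and field values are assumptions based on typical Streams for Apache Kafka examples, not taken from this reference), a KafkaConnect resource supplies the spec property, while status is populated by the Cluster Operator:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093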
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaConnect-reference
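The spec and status properties in the KafkaConnect schema reference above correspond to what you set and what the operator reports on a KafkaConnect custom resource. The manifest below is a hedged sketch only; the API version, namespace, name, replica count, and bootstrap address are assumptions to check against the schema reference for your Streams for Apache Kafka release.

# Create (or update) the resource; its .spec comes from this manifest.
cat <<'EOF' | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
  namespace: kafka
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9093
EOF

# The operator fills in .status; inspect it to see the reported conditions.
oc get kafkaconnect my-connect -n kafka -o jsonpath='{.status.conditions}'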
13.3. Creating New Indexes to Existing Databases
13.3. Creating New Indexes to Existing Databases Learn how to initiate indexing operations on Directory Server. Directory Server does not automatically index existing data for a newly added index definition, so you must initiate the indexing operation manually. Important Until the index is regenerated, searches still proceed but can return incorrect or inconsistent results. 13.3.1. Creating an Index While the Instance is Running 13.3.1.1. Creating an Index Using the dsconf backend index reindex Command To recreate the index of a database while the instance is running: 13.3.1.2. Creating an Index Using a cn=tasks Entry The cn=tasks,cn=config entry in the Directory Server configuration is a container entry for temporary entries the server uses to manage tasks. To initiate an index operation, create a task in the cn=index,cn=tasks,cn=config entry. Use the ldapadd utility to add a new index task. For example, to add a task that creates the presence index for the cn attribute in the userRoot database: When the task is completed, the entry is removed from the directory configuration. For further details about the cn=index,cn=tasks,cn=config entry, see the cn=index section in the Red Hat Directory Server Configuration, Command, and File Reference . 13.3.2. Creating an Index While the Instance is Offline After creating an indexing entry or adding additional index types to an existing indexing entry, use the dsconf db2index command: Shut down the instance: Recreate the index: Start the instance:
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend index reindex database_name", "ldapadd -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x dn: cn= example_presence_index ,cn=index,cn=tasks,cn=config objectclass: top objectclass: extensibleObject cn: example presence index nsInstance: userRoot nsIndexAttribute: \" cn:pres \"", "dsctl instance_name stop", "dsctl instance_name db2index userRoot [13/Aug/2019:15:25:37.277426483 +0200] - INFO - ldbm_instance_config_cachememsize_set - force a minimal value 512000 [13/Aug/2019:15:25:37.289257996 +0200] - INFO - check_and_set_import_cache - pagesize: 4096, available bytes 1704378368, process usage 22212608 [13/Aug/2019:15:25:37.291738104 +0200] - INFO - check_and_set_import_cache - Import allocates 665772KB import cache. db2index successful", "dsctl instance_name start" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/creating_new_indexes_to_existing_databases
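The index definitions that the reindex operations above regenerate live under the backend's cn=index subtree in the Directory Server configuration. As a quick check before or after reindexing, you can list them with an ordinary ldapsearch. The bind DN, host, and userRoot backend name below mirror the examples in this entry; treat the exact DN layout as an assumption to verify on your own instance.

# List the index definitions configured for the userRoot backend.
ldapsearch -x -D "cn=Directory Manager" -W -H ldap://server.example.com \
    -b "cn=index,cn=userRoot,cn=ldbm database,cn=plugins,cn=config" \
    "(objectClass=nsIndex)" cn nsIndexType nsSystemIndex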
C.2.3. Choosing a Good Passphrase
C.2.3. Choosing a Good Passphrase While dm-crypt/LUKS supports both keys and passphrases, the anaconda installer only supports the use of passphrases for creating and accessing encrypted block devices during installation. LUKS does provide passphrase strengthening, but it is still a good idea to choose a good (meaning "difficult to guess") passphrase. Note the use of the term "passphrase", as opposed to the term "password". This is intentional: providing a phrase that contains multiple words increases the security of your data.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/apcs02s03
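If you later decide that an existing passphrase is too weak, LUKS lets you rotate it without reinstalling or re-encrypting. The commands below are a hedged sketch; /dev/sda2 is a placeholder for your encrypted partition, and you should keep at least one working passphrase enrolled at all times.

# Add the new, stronger passphrase to a free key slot (you are prompted for an existing passphrase first).
cryptsetup luksAddKey /dev/sda2

# After confirming the new passphrase unlocks the device, remove the old one.
cryptsetup luksRemoveKey /dev/sda2

# Review which key slots are in use.
cryptsetup luksDump /dev/sda2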
Index
Index Symbols 802.11x, Wireless Networks and security, Wireless Networks A Apache HTTP Server cgi security, Restrict Permissions for Executable Directories directives, Securing the Apache HTTP Server introducing, Securing the Apache HTTP Server attackers and risks, Attackers and Vulnerabilities B basic input output system (see BIOS) BIOS non-x86 equivalents passwords, Securing Non-x86 Platforms security, BIOS and Boot Loader Security passwords, BIOS Passwords black hat hacker (see crackers) boot loaders GRUB password protecting, Password Protecting GRUB security, Boot Loader Passwords C co-location services, Hardware Security collecting evidence (see incident response) file auditing tools, Gathering Post-Breach Information dd, Gathering Post-Breach Information file, Gathering Post-Breach Information find, Gathering Post-Breach Information grep, Gathering Post-Breach Information md5sum, Gathering Post-Breach Information script, Investigating the Incident stat, Gathering Post-Breach Information strings, Gathering Post-Breach Information common exploits and attacks, Common Exploits and Attacks table, Common Exploits and Attacks common ports table, Common Ports communication ports, Common Ports communication tools secure, Security Enhanced Communication Tools GPG, Security Enhanced Communication Tools OpenSSH, Security Enhanced Communication Tools computer emergency response team, The Computer Emergency Response Team (CERT) controls, Security Controls administrative, Administrative Controls physical, Physical Controls technical, Technical Controls cracker black hat hacker, Shades of Grey crackers definition, A Quick History of Hackers cupsd, Identifying and Configuring Services D dd collecting evidence with, Collecting an Evidential Image file auditing using, Gathering Post-Breach Information Demilitarized Zone, DMZs and iptables Denial of Service (DoS) distributed, Security Today DMZ (see Demilitarized Zone) (see networks) E EFI Shell security passwords, Securing Non-x86 Platforms F file file auditing using, Gathering Post-Breach Information file auditing tools, Gathering Post-Breach Information find file auditing using, Gathering Post-Breach Information firewall types, Firewalls network address translation (NAT), Firewalls packet filter, Firewalls proxy, Firewalls firewalls, Firewalls additional resources, Additional Resources and connection tracking, iptables and Connection Tracking and viruses, Viruses and Spoofed IP Addresses personal, Personal Firewalls policies, Basic Firewall Policies stateful, iptables and Connection Tracking types, Firewalls Firewalls iptables, Netfilter and iptables FTP anonymous access, Anonymous Access anonymous upload, Anonymous Upload greeting banner, FTP Greeting Banner introducing, Securing FTP TCP wrappers and, Use TCP Wrappers To Control Access user accounts, User Accounts vsftpd, Securing FTP G grep file auditing using, Gathering Post-Breach Information grey hat hacker (see hackers) H hacker ethic, A Quick History of Hackers hackers black hat (see cracker) definition, A Quick History of Hackers grey hat, Shades of Grey white hat, Shades of Grey hardware, Hardware and Network Protection and security, Hardware Security laptops, Hardware Security servers, Hardware Security workstations, Hardware Security I IDS (see intrusion detection systems) incident response and legal issues, Legal Considerations collecting evidence using dd, Collecting an Evidential Image computer emergency response team (CERT), The Computer Emergency Response Team (CERT) creating a plan, 
Creating an Incident Response Plan definition of, Defining Incident Response gathering post-breach information, Gathering Post-Breach Information implementation, Implementing the Incident Response Plan introducing, Incident Response investigation, Investigating the Incident post-mortem, Investigating the Incident reporting the incident, Reporting the Incident restoring and recovering resources, Restoring and Recovering Resources incident response plan, Creating an Incident Response Plan insecure services, Insecure Services rsh, Insecure Services Telnet, Insecure Services vsftpd, Insecure Services introduction, Introduction categories, using this manual, Introduction other Red Hat Enterprise Linux manuals, Introduction topics, Introduction intrusion detection systems, Intrusion Detection and log files, Host-based IDS defining, Defining Intrusion Detection Systems host-based, Host-based IDS network-based, Network-based IDS Snort, Snort RPM Package Manager (RPM), RPM as an IDS Tripwire, Tripwire types, IDS Types ip6tables, ip6tables IPsec, IPsec configuration, IPsec Network-to-Network configuration host-to-host, IPsec Host-to-Host Configuration host-to-host, IPsec Host-to-Host Configuration installing, IPsec Installation network-to-network, IPsec Network-to-Network configuration phases, IPsec iptables, Netfilter and iptables additional resources, Additional Resources and DMZs, DMZs and iptables and viruses, Viruses and Spoofed IP Addresses chains, Using iptables FORWARD, FORWARD and NAT Rules INPUT, Common iptables Filtering OUTPUT, Common iptables Filtering POSTROUTING, FORWARD and NAT Rules PREROUTING, FORWARD and NAT Rules , DMZs and iptables connection tracking, iptables and Connection Tracking states, iptables and Connection Tracking policies, Basic Firewall Policies rules, Saving and Restoring iptables Rules common, Common iptables Filtering forwarding, FORWARD and NAT Rules NAT, FORWARD and NAT Rules , DMZs and iptables restoring, Saving and Restoring iptables Rules saving, Saving and Restoring iptables Rules stateful inspection, iptables and Connection Tracking states, iptables and Connection Tracking using, Using iptables K Kerberos NIS, Use Kerberos Authentication L legal issues, Legal Considerations lpd, Identifying and Configuring Services lsof, Verifying Which Ports Are Listening M md5sum file auditing using, Gathering Post-Breach Information N NAT (see Network Address Translation) Nessus, Nessus Netfilter, Netfilter and iptables additional resources, Additional Resources Netfilter 6, ip6tables netstat, Verifying Which Ports Are Listening Network Address Translation, FORWARD and NAT Rules with iptables, FORWARD and NAT Rules network services, Available Network Services buffer overflow ExecShield, Risks To Services identifying and configuring, Identifying and Configuring Services risks, Risks To Services buffer overflow, Risks To Services denial-of-service, Risks To Services script vulnerability, Risks To Services network topologies, Secure Network Topologies linear bus, Physical Topologies ring, Physical Topologies star, Physical Topologies networks, Hardware and Network Protection and security, Secure Network Topologies de-militarized zones (DMZs), Network Segmentation and DMZs hubs, Transmission Considerations segmentation, Network Segmentation and DMZs switches, Transmission Considerations wireless, Wireless Networks NFS, Securing NFS and Sendmail, NFS and Sendmail network design, Carefully Plan the Network syntax errors, Beware of Syntax Errors Nikto, Nikto NIS introducing, 
Securing NIS IPTables, Assign Static Ports and Use IPTables Rules Kerberos, Use Kerberos Authentication NIS domain name, Use a Password-like NIS Domain Name and Hostname planning network, Carefully Plan the Network securenets, Edit the /var/yp/securenets File static ports, Assign Static Ports and Use IPTables Rules nmap, Verifying Which Ports Are Listening Nmap, Scanning Hosts with Nmap command line version, Using Nmap O OpenSSH, Security Enhanced Communication Tools scp, Security Enhanced Communication Tools sftp, Security Enhanced Communication Tools ssh, Security Enhanced Communication Tools overview, Security Overview P password aging, Password Aging password security, Password Security aging, Password Aging and PAM, Forcing Strong Passwords auditing tools, Forcing Strong Passwords Crack, Forcing Strong Passwords John the Ripper, Forcing Strong Passwords Slurpie, Forcing Strong Passwords enforcement, Forcing Strong Passwords in an organization, Creating User Passwords Within an Organization methodology, Secure Password Creation Methodology strong passwords, Creating Strong Passwords passwords within an organization, Creating User Passwords Within an Organization pluggable authentication modules (PAM) strong password enforcement, Forcing Strong Passwords portmap, Identifying and Configuring Services and IPTables, Protect portmap With IPTables and TCP wrappers, Protect portmap With TCP Wrappers ports common, Common Ports monitoring, Verifying Which Ports Are Listening post-mortem, Investigating the Incident R reporting the incident, Reporting the Incident restoring and recovering resources, Restoring and Recovering Resources patching the system, Patching the System reinstalling the system, Reinstalling the System risks insecure services, Inherently Insecure Services networks, Threats to Network Security architectures, Insecure Architectures open ports, Unused Services and Open Ports patches and errata, Unpatched Services servers, Threats to Server Security inattentive administration, Inattentive Administration workstations and PCs, Threats to Workstation and Home PC Security , Bad Passwords applications, Vulnerable Client Applications root, Allowing Root Access allowing access, Allowing Root Access disallowing access, Disallowing Root Access limiting access, Limiting Root Access and su, The su Command and sudo, The sudo Command with User Manager, The su Command methods of disabling, Disallowing Root Access changing the root shell, Disallowing Root Access disabling access via tty, Disallowing Root Access disabling SSH logins, Disallowing Root Access with PAM, Disallowing Root Access root user (see root) RPM and intrusion detection, RPM as an IDS importing GPG key, Using the Red Hat Errata Website verifying signed packages, Verifying Signed Packages , Installing Signed Packages S security considerations hardware, Hardware and Network Protection network transmission, Transmission Considerations physical networks, Hardware and Network Protection wireless, Wireless Networks security errata, Security Updates applying changes, Applying the Changes via Red Hat errata website, Using the Red Hat Errata Website via Red Hat Network, Using Red Hat Network when to reboot, Applying the Changes security overview, Security Overview conclusion, Conclusion controls (see controls) defining computer security, What is Computer Security? Denial of Service (DoS), Security Today evolution of computer security, How did Computer Security Come about? 
viruses, Security Today sendmail, Identifying and Configuring Services Sendmail and NFS, NFS and Sendmail introducing, Securing Sendmail limiting DoS, Limiting a Denial of Service Attack server security Apache HTTP Server, Securing the Apache HTTP Server cgi security, Restrict Permissions for Executable Directories directives, Securing the Apache HTTP Server FTP, Securing FTP anonymous access, Anonymous Access anonymous upload, Anonymous Upload greeting banner, FTP Greeting Banner TCP wrappers and, Use TCP Wrappers To Control Access user accounts, User Accounts vsftpd, Securing FTP NFS, Securing NFS network design, Carefully Plan the Network syntax errors, Beware of Syntax Errors NIS, Securing NIS IPTables, Assign Static Ports and Use IPTables Rules Kerberos, Use Kerberos Authentication NIS domain name, Use a Password-like NIS Domain Name and Hostname planning network, Carefully Plan the Network securenets, Edit the /var/yp/securenets File static ports, Assign Static Ports and Use IPTables Rules overview of, Server Security portmap, Securing Portmap ports monitoring, Verifying Which Ports Are Listening Sendmail, Securing Sendmail and NFS, NFS and Sendmail limiting DoS, Limiting a Denial of Service Attack TCP wrappers, Enhancing Security With TCP Wrappers attack warnings, TCP Wrappers and Attack Warnings banners, TCP Wrappers and Connection Banners logging, TCP Wrappers and Enhanced Logging xinetd, Enhancing Security With xinetd managing resources with, Controlling Server Resources preventing DoS with, Controlling Server Resources SENSOR trap, Setting a Trap services, Verifying Which Ports Are Listening Services Configuration Tool, Identifying and Configuring Services Snort, Snort sshd, Identifying and Configuring Services stat file auditing using, Gathering Post-Breach Information strings file auditing using, Gathering Post-Breach Information su and root, The su Command sudo and root, The sudo Command T TCP wrappers and FTP, Use TCP Wrappers To Control Access and portmap, Protect portmap With TCP Wrappers attack warnings, TCP Wrappers and Attack Warnings banners, TCP Wrappers and Connection Banners logging, TCP Wrappers and Enhanced Logging Tripwire, Tripwire U updates (see security errata) V Virtual Private Networks, Virtual Private Networks IPsec, IPsec configuration, IPsec Network-to-Network configuration host-to-host, IPsec Host-to-Host Configuration installing, IPsec Installation viruses trojans, Security Today VLAD the Scanner, VLAD the Scanner VPN, Virtual Private Networks vulnerabilities assessing with Nessus, Nessus assessing with Nikto, Nikto assessing with Nmap, Scanning Hosts with Nmap assessing with VLAD the Scanner, VLAD the Scanner assessment, Vulnerability Assessment defining, Defining Assessment and Testing establishing a methodology, Establishing a Methodology testing, Defining Assessment and Testing W white hat hacker (see hackers) Wi-Fi networks (see 802.11x) wireless security, Wireless Networks 802.11x, Wireless Networks workstation security, Workstation Security BIOS, BIOS and Boot Loader Security boot loaders passwords, Boot Loader Passwords evaluating administrative control, Evaluating Workstation Security BIOS, Evaluating Workstation Security boot loaders, Evaluating Workstation Security communications, Evaluating Workstation Security passwords, Evaluating Workstation Security personal firewalls, Evaluating Workstation Security X xinetd, Identifying and Configuring Services managing resources with, Controlling Server Resources preventing DoS with, Controlling 
Server Resources SENSOR trap, Setting a Trap
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ix01
Chapter 70. KafkaConnect schema reference
Chapter 70. KafkaConnect schema reference The KafkaConnect resource has two properties: spec (property type KafkaConnectSpec ), the specification of the Kafka Connect cluster, and status (property type KafkaConnectStatus ), the status of the Kafka Connect cluster.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaconnect-reference
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To submit your feedback, create a Bugzilla ticket: Go to the Bugzilla website. As the Component, use Documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/upgrading_from_rhel_6_to_rhel_7/proc_providing-feedback-on-red-hat-documentation_upgrading-from-rhel-6-to-rhel-7
17.5. Routed Mode
17.5. Routed Mode When using Routed mode, the virtual switch connects to the physical LAN connected to the host physical machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, all of the virtual machines are in their own subnet, routed through a virtual switch. This situation is not always ideal, because other host physical machines on the physical network are not aware of the virtual machines without manual physical router configuration, and cannot access them. Routed mode operates at Layer 3 of the OSI networking model. Figure 17.5. Virtual network switch in routed mode
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-networking_protocols-routed_mode
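In libvirt-based deployments such as the one this section describes, routed mode is selected in the virtual network definition. The XML and commands below are a sketch under assumptions: the network name, bridge name, forwarding device, and addressing are placeholders, and the physical router must still be configured with a route back to the 192.168.100.0/24 subnet for other machines to reach the guests.

# routed-net.xml -- a virtual network that routes (rather than NATs) guest traffic through eth0.
cat > routed-net.xml <<'EOF'
<network>
  <name>routed-net</name>
  <forward mode='route' dev='eth0'/>
  <bridge name='virbr1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.100' end='192.168.100.200'/>
    </dhcp>
  </ip>
</network>
EOF

# Register the network with libvirt, start it now, and start it automatically on host boot.
virsh net-define routed-net.xml
virsh net-start routed-net
virsh net-autostart routed-net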
Chapter 5. Pipelines CLI (tkn)
Chapter 5. Pipelines CLI (tkn) 5.1. Installing tkn Use the CLI tool to manage Red Hat OpenShift Pipelines from a terminal. The following section describes how to install the CLI tool on different platforms. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Important Running Red Hat OpenShift Pipelines on ARM hardware is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Both the archives and the RPMs contain the following executables: tkn tkn-pac opc Important Running Red Hat OpenShift Pipelines with the opc CLI tool is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1.1. Installing the Red Hat OpenShift Pipelines CLI on Linux For Linux distributions, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. Linux (x86_64, amd64) Linux on IBM Z and IBM(R) LinuxONE (s390x) Linux on IBM Power (ppc64le) Linux on ARM (aarch64, arm64) Unpack the archive: USD tar xvzf <file> Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 5.1.2. Installing the Red Hat OpenShift Pipelines CLI on Linux using an RPM For Red Hat Enterprise Linux (RHEL) version 8, you can install the Red Hat OpenShift Pipelines CLI as an RPM. Prerequisites You have an active OpenShift Container Platform subscription on your Red Hat account. You have root or sudo privileges on your local system. 
Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*pipelines*' In the output for the command, find the pool ID for your OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by Red Hat OpenShift Pipelines: Linux (x86_64, amd64) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-x86_64-rpms" Linux on IBM Z and IBM(R) LinuxONE (s390x) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-s390x-rpms" Linux on IBM Power (ppc64le) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-ppc64le-rpms" Linux on ARM (aarch64, arm64) # subscription-manager repos --enable="pipelines-1.13-for-rhel-8-aarch64-rpms" Install the openshift-pipelines-client package: # yum install openshift-pipelines-client After you install the CLI, it is available using the tkn command: USD tkn version 5.1.3. Installing the Red Hat OpenShift Pipelines CLI on Windows For Windows, you can download the CLI as a zip archive. Procedure Download the CLI tool . Extract the archive with a ZIP program. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: C:\> path 5.1.4. Installing the Red Hat OpenShift Pipelines CLI on macOS For macOS, you can download the CLI as a tar.gz archive. Procedure Download the relevant CLI tool. macOS macOS on ARM Unpack and extract the archive. Add the location of your tkn , tkn-pac , and opc files to your PATH environment variable. To check your PATH , run the following command: USD echo USDPATH 5.2. Configuring the OpenShift Pipelines tkn CLI Configure the Red Hat OpenShift Pipelines tkn CLI to enable tab completion. 5.2.1. Enabling tab completion After you install the tkn CLI, you can enable tab completion to automatically complete tkn commands or suggest options when you press Tab. Prerequisites You must have the tkn CLI tool installed. You must have bash-completion installed on your local system. Procedure The following procedure enables tab completion for Bash. Save the Bash completion code to a file: USD tkn completion bash > tkn_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp tkn_bash_completion /etc/bash_completion.d/ Alternatively, you can save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 5.3. OpenShift Pipelines tkn reference This section lists the basic tkn CLI commands. 5.3.1. Basic syntax tkn [command or options] [arguments... ] 5.3.2. Global options --help, -h 5.3.3. Utility commands 5.3.3.1. tkn Parent command for tkn CLI. Example: Display all options USD tkn 5.3.3.2. completion [shell] Print shell completion code which must be evaluated to provide interactive completion. Supported shells are bash and zsh . Example: Completion code for bash shell USD tkn completion bash 5.3.3.3. version Print version information of the tkn CLI. Example: Check the tkn version USD tkn version 5.3.4. Pipelines management commands 5.3.4.1. pipeline Manage pipelines. Example: Display help USD tkn pipeline --help 5.3.4.2. pipeline delete Delete a pipeline. Example: Delete the mypipeline pipeline from a namespace USD tkn pipeline delete mypipeline -n myspace 5.3.4.3. 
pipeline describe Describe a pipeline. Example: Describe the mypipeline pipeline USD tkn pipeline describe mypipeline 5.3.4.4. pipeline list Display a list of pipelines. Example: Display a list of pipelines USD tkn pipeline list 5.3.4.5. pipeline logs Display the logs for a specific pipeline. Example: Stream the live logs for the mypipeline pipeline USD tkn pipeline logs -f mypipeline 5.3.4.6. pipeline start Start a pipeline. Example: Start the mypipeline pipeline USD tkn pipeline start mypipeline 5.3.5. Pipeline run commands 5.3.5.1. pipelinerun Manage pipeline runs. Example: Display help USD tkn pipelinerun -h 5.3.5.2. pipelinerun cancel Cancel a pipeline run. Example: Cancel the mypipelinerun pipeline run from a namespace USD tkn pipelinerun cancel mypipelinerun -n myspace 5.3.5.3. pipelinerun delete Delete a pipeline run. Example: Delete pipeline runs from a namespace USD tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace Example: Delete all pipeline runs from a namespace, except the five most recently executed pipeline runs USD tkn pipelinerun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed pipeline runs you want to retain. Example: Delete all pipelines USD tkn pipelinerun delete --all Note Starting with Red Hat OpenShift Pipelines 1.6, the tkn pipelinerun delete --all command does not delete any resources that are in the running state. 5.3.5.4. pipelinerun describe Describe a pipeline run. Example: Describe the mypipelinerun pipeline run in a namespace USD tkn pipelinerun describe mypipelinerun -n myspace 5.3.5.5. pipelinerun list List pipeline runs. Example: Display a list of pipeline runs in a namespace USD tkn pipelinerun list -n myspace 5.3.5.6. pipelinerun logs Display the logs of a pipeline run. Example: Display the logs of the mypipelinerun pipeline run with all tasks and steps in a namespace USD tkn pipelinerun logs mypipelinerun -a -n myspace 5.3.6. Task management commands 5.3.6.1. task Manage tasks. Example: Display help USD tkn task -h 5.3.6.2. task delete Delete a task. Example: Delete mytask1 and mytask2 tasks from a namespace USD tkn task delete mytask1 mytask2 -n myspace 5.3.6.3. task describe Describe a task. Example: Describe the mytask task in a namespace USD tkn task describe mytask -n myspace 5.3.6.4. task list List tasks. Example: List all the tasks in a namespace USD tkn task list -n myspace 5.3.6.5. task logs Display task logs. Example: Display logs for the mytaskrun task run of the mytask task USD tkn task logs mytask mytaskrun -n myspace 5.3.6.6. task start Start a task. Example: Start the mytask task in a namespace USD tkn task start mytask -s <ServiceAccountName> -n myspace 5.3.7. Task run commands 5.3.7.1. taskrun Manage task runs. Example: Display help USD tkn taskrun -h 5.3.7.2. taskrun cancel Cancel a task run. Example: Cancel the mytaskrun task run from a namespace USD tkn taskrun cancel mytaskrun -n myspace 5.3.7.3. taskrun delete Delete a TaskRun. Example: Delete the mytaskrun1 and mytaskrun2 task runs from a namespace USD tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace Example: Delete all but the five most recently executed task runs from a namespace USD tkn taskrun delete -n myspace --keep 5 1 1 Replace 5 with the number of most recently executed task runs you want to retain. 5.3.7.4. taskrun describe Describe a task run. Example: Describe the mytaskrun task run in a namespace USD tkn taskrun describe mytaskrun -n myspace 5.3.7.5. taskrun list List task runs. 
Example: List all the task runs in a namespace USD tkn taskrun list -n myspace 5.3.7.6. taskrun logs Display task run logs. Example: Display live logs for the mytaskrun task run in a namespace USD tkn taskrun logs -f mytaskrun -n myspace 5.3.8. Condition management commands 5.3.8.1. condition Manage Conditions. Example: Display help USD tkn condition --help 5.3.8.2. condition delete Delete a Condition. Example: Delete the mycondition1 Condition from a namespace USD tkn condition delete mycondition1 -n myspace 5.3.8.3. condition describe Describe a Condition. Example: Describe the mycondition1 Condition in a namespace USD tkn condition describe mycondition1 -n myspace 5.3.8.4. condition list List Conditions. Example: List Conditions in a namespace USD tkn condition list -n myspace 5.3.9. Pipeline Resource management commands 5.3.9.1. resource Manage Pipeline Resources. Example: Display help USD tkn resource -h 5.3.9.2. resource create Create a Pipeline Resource. Example: Create a Pipeline Resource in a namespace USD tkn resource create -n myspace This is an interactive command that asks for input on the name of the Resource, type of the Resource, and the values based on the type of the Resource. 5.3.9.3. resource delete Delete a Pipeline Resource. Example: Delete the myresource Pipeline Resource from a namespace USD tkn resource delete myresource -n myspace 5.3.9.4. resource describe Describe a Pipeline Resource. Example: Describe the myresource Pipeline Resource USD tkn resource describe myresource -n myspace 5.3.9.5. resource list List Pipeline Resources. Example: List all Pipeline Resources in a namespace USD tkn resource list -n myspace 5.3.10. ClusterTask management commands Important In Red Hat OpenShift Pipelines 1.10, ClusterTask functionality of the tkn command line utility is deprecated and is planned to be removed in a future release. 5.3.10.1. clustertask Manage ClusterTasks. Example: Display help USD tkn clustertask --help 5.3.10.2. clustertask delete Delete a ClusterTask resource in a cluster. Example: Delete mytask1 and mytask2 ClusterTasks USD tkn clustertask delete mytask1 mytask2 5.3.10.3. clustertask describe Describe a ClusterTask. Example: Describe the mytask ClusterTask USD tkn clustertask describe mytask1 5.3.10.4. clustertask list List ClusterTasks. Example: List ClusterTasks USD tkn clustertask list 5.3.10.5. clustertask start Start ClusterTasks. Example: Start the mytask ClusterTask USD tkn clustertask start mytask 5.3.11. Trigger management commands 5.3.11.1. eventlistener Manage EventListeners. Example: Display help USD tkn eventlistener -h 5.3.11.2. eventlistener delete Delete an EventListener. Example: Delete mylistener1 and mylistener2 EventListeners in a namespace USD tkn eventlistener delete mylistener1 mylistener2 -n myspace 5.3.11.3. eventlistener describe Describe an EventListener. Example: Describe the mylistener EventListener in a namespace USD tkn eventlistener describe mylistener -n myspace 5.3.11.4. eventlistener list List EventListeners. Example: List all the EventListeners in a namespace USD tkn eventlistener list -n myspace 5.3.11.5. eventlistener logs Display logs of an EventListener. Example: Display the logs of the mylistener EventListener in a namespace USD tkn eventlistener logs mylistener -n myspace 5.3.11.6. triggerbinding Manage TriggerBindings. Example: Display TriggerBindings help USD tkn triggerbinding -h 5.3.11.7. triggerbinding delete Delete a TriggerBinding. 
Example: Delete mybinding1 and mybinding2 TriggerBindings in a namespace USD tkn triggerbinding delete mybinding1 mybinding2 -n myspace 5.3.11.8. triggerbinding describe Describe a TriggerBinding. Example: Describe the mybinding TriggerBinding in a namespace USD tkn triggerbinding describe mybinding -n myspace 5.3.11.9. triggerbinding list List TriggerBindings. Example: List all the TriggerBindings in a namespace USD tkn triggerbinding list -n myspace 5.3.11.10. triggertemplate Manage TriggerTemplates. Example: Display TriggerTemplate help USD tkn triggertemplate -h 5.3.11.11. triggertemplate delete Delete a TriggerTemplate. Example: Delete mytemplate1 and mytemplate2 TriggerTemplates in a namespace USD tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace` 5.3.11.12. triggertemplate describe Describe a TriggerTemplate. Example: Describe the mytemplate TriggerTemplate in a namespace USD tkn triggertemplate describe mytemplate -n `myspace` 5.3.11.13. triggertemplate list List TriggerTemplates. Example: List all the TriggerTemplates in a namespace USD tkn triggertemplate list -n myspace 5.3.11.14. clustertriggerbinding Manage ClusterTriggerBindings. Example: Display ClusterTriggerBindings help USD tkn clustertriggerbinding -h 5.3.11.15. clustertriggerbinding delete Delete a ClusterTriggerBinding. Example: Delete myclusterbinding1 and myclusterbinding2 ClusterTriggerBindings USD tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2 5.3.11.16. clustertriggerbinding describe Describe a ClusterTriggerBinding. Example: Describe the myclusterbinding ClusterTriggerBinding USD tkn clustertriggerbinding describe myclusterbinding 5.3.11.17. clustertriggerbinding list List ClusterTriggerBindings. Example: List all ClusterTriggerBindings USD tkn clustertriggerbinding list 5.3.12. Hub interaction commands Interact with Tekton Hub for resources such as tasks and pipelines. 5.3.12.1. hub Interact with hub. Example: Display help USD tkn hub -h Example: Interact with a hub API server USD tkn hub --api-server https://api.hub.tekton.dev Note For each example, to get the corresponding sub-commands and flags, run tkn hub <command> --help . 5.3.12.2. hub downgrade Downgrade an installed resource. Example: Downgrade the mytask task in the mynamespace namespace to it's older version USD tkn hub downgrade task mytask --to version -n mynamespace 5.3.12.3. hub get Get a resource manifest by its name, kind, catalog, and version. Example: Get the manifest for a specific version of the myresource pipeline or task from the tekton catalog USD tkn hub get [pipeline | task] myresource --from tekton --version version 5.3.12.4. hub info Display information about a resource by its name, kind, catalog, and version. Example: Display information about a specific version of the mytask task from the tekton catalog USD tkn hub info task mytask --from tekton --version version 5.3.12.5. hub install Install a resource from a catalog by its kind, name, and version. Example: Install a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub install task mytask --from tekton --version version -n mynamespace 5.3.12.6. hub reinstall Reinstall a resource by its kind and name. Example: Reinstall a specific version of the mytask task from the tekton catalog in the mynamespace namespace USD tkn hub reinstall task mytask --from tekton --version version -n mynamespace 5.3.12.7. hub search Search a resource by a combination of name, kind, and tags. 
Example: Search a resource with a tag cli USD tkn hub search --tags cli 5.3.12.8. hub upgrade Upgrade an installed resource. Example: Upgrade the installed mytask task in the mynamespace namespace to a new version USD tkn hub upgrade task mytask --to version -n mynamespace
[ "tar xvzf <file>", "echo USDPATH", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*pipelines*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-ppc64le-rpms\"", "subscription-manager repos --enable=\"pipelines-1.13-for-rhel-8-aarch64-rpms\"", "yum install openshift-pipelines-client", "tkn version", "C:\\> path", "echo USDPATH", "tkn completion bash > tkn_bash_completion", "sudo cp tkn_bash_completion /etc/bash_completion.d/", "tkn", "tkn completion bash", "tkn version", "tkn pipeline --help", "tkn pipeline delete mypipeline -n myspace", "tkn pipeline describe mypipeline", "tkn pipeline list", "tkn pipeline logs -f mypipeline", "tkn pipeline start mypipeline", "tkn pipelinerun -h", "tkn pipelinerun cancel mypipelinerun -n myspace", "tkn pipelinerun delete mypipelinerun1 mypipelinerun2 -n myspace", "tkn pipelinerun delete -n myspace --keep 5 1", "tkn pipelinerun delete --all", "tkn pipelinerun describe mypipelinerun -n myspace", "tkn pipelinerun list -n myspace", "tkn pipelinerun logs mypipelinerun -a -n myspace", "tkn task -h", "tkn task delete mytask1 mytask2 -n myspace", "tkn task describe mytask -n myspace", "tkn task list -n myspace", "tkn task logs mytask mytaskrun -n myspace", "tkn task start mytask -s <ServiceAccountName> -n myspace", "tkn taskrun -h", "tkn taskrun cancel mytaskrun -n myspace", "tkn taskrun delete mytaskrun1 mytaskrun2 -n myspace", "tkn taskrun delete -n myspace --keep 5 1", "tkn taskrun describe mytaskrun -n myspace", "tkn taskrun list -n myspace", "tkn taskrun logs -f mytaskrun -n myspace", "tkn condition --help", "tkn condition delete mycondition1 -n myspace", "tkn condition describe mycondition1 -n myspace", "tkn condition list -n myspace", "tkn resource -h", "tkn resource create -n myspace", "tkn resource delete myresource -n myspace", "tkn resource describe myresource -n myspace", "tkn resource list -n myspace", "tkn clustertask --help", "tkn clustertask delete mytask1 mytask2", "tkn clustertask describe mytask1", "tkn clustertask list", "tkn clustertask start mytask", "tkn eventlistener -h", "tkn eventlistener delete mylistener1 mylistener2 -n myspace", "tkn eventlistener describe mylistener -n myspace", "tkn eventlistener list -n myspace", "tkn eventlistener logs mylistener -n myspace", "tkn triggerbinding -h", "tkn triggerbinding delete mybinding1 mybinding2 -n myspace", "tkn triggerbinding describe mybinding -n myspace", "tkn triggerbinding list -n myspace", "tkn triggertemplate -h", "tkn triggertemplate delete mytemplate1 mytemplate2 -n `myspace`", "tkn triggertemplate describe mytemplate -n `myspace`", "tkn triggertemplate list -n myspace", "tkn clustertriggerbinding -h", "tkn clustertriggerbinding delete myclusterbinding1 myclusterbinding2", "tkn clustertriggerbinding describe myclusterbinding", "tkn clustertriggerbinding list", "tkn hub -h", "tkn hub --api-server https://api.hub.tekton.dev", "tkn hub downgrade task mytask --to version -n mynamespace", "tkn hub get [pipeline | task] myresource --from tekton --version version", "tkn hub info task mytask --from tekton --version version", "tkn hub install task mytask --from tekton --version version -n mynamespace", "tkn hub reinstall task mytask --from tekton --version version -n mynamespace", "tkn hub search --tags 
cli", "tkn hub upgrade task mytask --to version -n mynamespace" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cli_tools/pipelines-cli-tkn
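Tying several of the tkn commands above together, a typical working loop is to start a pipeline, find the run it created, and follow that run's logs. This is a usage sketch only; mypipeline and myspace are the same placeholder names used throughout the entry, and <pipelinerun-name> stands for the run name reported by the list command.

# Kick off a new PipelineRun from the pipeline definition.
tkn pipeline start mypipeline -n myspace

# Note the name of the run that was just created.
tkn pipelinerun list -n myspace

# Stream its logs until it finishes.
tkn pipelinerun logs <pipelinerun-name> -f -n myspace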
2.6. Storage
2.6. Storage 2.6.1. About Red Hat Virtualization storage Red Hat Virtualization uses a centralized storage system for virtual disks, ISO files and snapshots. Storage networking can be implemented using: Network File System (NFS) Other POSIX compliant file systems Internet Small Computer System Interface (iSCSI) Local storage attached directly to the virtualization hosts Fibre Channel Protocol (FCP) Parallel NFS (pNFS) Setting up storage is a prerequisite for a new data center because a data center cannot be initialized unless storage domains are attached and activated. As a Red Hat Virtualization system administrator, you create, configure, attach and maintain storage for the virtualized enterprise. You must be familiar with the storage types and their use. Read your storage array vendor's guides, and see Red Hat Enterprise Linux Managing storage devices for more information on the concepts, protocols, requirements, and general usage of storage. To add storage domains you must be able to successfully access the Administration Portal, and there must be at least one host connected with a status of Up . Red Hat Virtualization has three types of storage domains: Data Domain: A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center. In addition, snapshots of the virtual machines are also stored in the data domain. The data domain cannot be shared across data centers. Data domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, provided they are all shared, rather than local, domains. You must attach a data domain to a data center before you can attach domains of other types to it. ISO Domain: ISO domains store ISO files (or logical CDs) used to install and boot operating systems and applications for the virtual machines. An ISO domain removes the data center's need for physical media. An ISO domain can be shared across different data centers. ISO domains can only be NFS-based. Only one ISO domain can be added to a data center. Export Domain: Export domains are temporary storage repositories that are used to copy and move images between data centers and Red Hat Virtualization environments. Export domains can be used to backup virtual machines. An export domain can be moved between data centers, however, it can only be active in one data center at a time. Export domains can only be NFS-based. Only one export domain can be added to a data center. Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. See Importing Existing Storage Domains for information on importing storage domains. Important Only commence configuring and attaching storage for your Red Hat Virtualization environment once you have determined the storage needs of your data center(s). 2.6.2. Understanding Storage Domains A storage domain is a collection of images that have a common storage interface. A storage domain contains complete images of templates and virtual machines (including snapshots), or ISO files. A storage domain can be made of block devices (SAN - iSCSI or FCP) or a file system (NAS - NFS, GlusterFS, or other POSIX compliant file systems). By default, GlusterFS domains and local storage domains support 4K block size. 
4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. Note GlusterFS Storage is deprecated, and will no longer be supported in future releases. On NFS, all virtual disks, templates, and snapshots are files. On SAN (iSCSI/FCP), each virtual disk, template or snapshot is a logical volume. Block devices are aggregated into a logical entity called a volume group, and then divided by LVM (Logical Volume Manager) into logical volumes for use as virtual hard disks. See Red Hat Enterprise Linux Configuring and managing logical volumes for more information on LVM. Virtual disks can have one of two formats, either QCOW2 or raw. The type of storage can be sparse or preallocated. Snapshots are always sparse but can be taken for disks of either format. Virtual machines that share the same storage domain can be migrated between hosts that belong to the same cluster. 2.6.3. Preparing and Adding NFS Storage 2.6.3.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. # vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which export are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable. 2.6.3.2. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. 
The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 2.6.3.3. Increasing NFS Storage To increase the amount of NFS storage, you can either create a new storage domain and add it to an existing data center, or increase the available free space on the NFS server. For the former option, see Adding NFS Storage . The following procedure explains how to increase the available free space on the existing NFS server. Procedure Click Storage Domains . Click the NFS storage domain's name. This opens the details view. Click the Data Center tab and click Maintenance to place the storage domain into maintenance mode. This unmounts the existing share and makes it possible to resize the storage domain. On the NFS server, resize the storage. For Red Hat Enterprise Linux 6 systems, see Red Hat Enterprise Linux 6 Storage Administration Guide . For Red Hat Enterprise Linux 7 systems, see Red Hat Enterprise Linux 7 Storage Administration Guide . For Red Hat Enterprise Linux 8 systems, see Resizing a partition . In the details view, click the Data Center tab and click Activate to mount the storage domain. 2.6.4. Preparing and adding local storage A virtual machine's disk that uses a storage device that is physically installed on the virtual machine's host is referred to as a local storage device. A storage device must be part of a storage domain. The storage domain type for local storage is referred to as a local storage domain. Configuring a host to use local storage automatically creates, and adds the host to, a new local storage domain, data center and cluster to which no other host can be added. Multiple-host clusters require that all hosts have access to all storage domains, which is not possible with local storage. Virtual machines created in a single-host cluster cannot be migrated, fenced, or scheduled. 2.6.4.1. Preparing local storage On Red Hat Virtualization Host (RHVH), local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk, to prevent possible loss of data during upgrades. 
Procedure for Red Hat Enterprise Linux hosts On the host, create the directory to be used for the local storage: # mkdir -p /data/images Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /data/images # chmod 0755 /data /data/images Procedure for Red Hat Virtualization Hosts Create the local storage on a logical volume: Create a local storage directory: # mkdir /data # lvcreate -L USDSIZE rhvh -n data # mkfs.ext4 /dev/mapper/rhvh-data # echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab # mount /data Mount the new local storage: # mount -a Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /rhvh-data # chmod 0755 /data /rhvh-data 2.6.4.2. Adding a local storage domain When adding a local storage domain to a host, setting the path to the local storage directory automatically creates and places the host in a local data center, local cluster, and local storage domain. Procedure Click Compute Hosts and select the host. Click Management Maintenance and OK . The host's status changes to Maintenance . Click Management Configure Local Storage . Click the Edit buttons to the Data Center , Cluster , and Storage fields to configure and name the local storage domain. Set the path to your local storage in the text entry field. If applicable, click the Optimization tab to configure the memory optimization policy for the new local storage cluster. Click OK . The Manager sets up the local data center with a local cluster, local storage domain. It also changes the host's status to Up . Verification Click Storage Domains . Locate the local storage domain you just added. The domain's status should be Active ( ), and the value in the Storage Type column should be Local on Host . You can now upload a disk image in the new local storage domain. 2.6.5. Preparing and Adding POSIX-compliant File System Storage 2.6.5.1. Preparing POSIX-compliant File System Storage POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP. Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2 . Important Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. 2.6.5.2. Adding POSIX-compliant File System Storage This procedure shows you how to attach existing POSIX-compliant file system storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name for the storage domain. Select the Data Center to be associated with the storage domain. The data center selected must be of type POSIX (POSIX compliant FS) . Alternatively, select (none) . Select Data from the Domain Function drop-down list, and POSIX compliant FS from the Storage Type drop-down list. 
If applicable, select the Format from the drop-down menu. Select a host from the Host drop-down list. Enter the Path to the POSIX file system, as you would normally provide it to the mount command. Enter the VFS Type , as you would normally provide it to the mount command using the -t argument. See man mount for a list of valid VFS types. Enter additional Mount Options , as you would normally provide them to the mount command using the -o argument. The mount options should be provided in a comma-separated list. See man mount for a list of valid mount options. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value in the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value in the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . 2.6.6. Preparing and Adding Block Storage 2.6.6.1. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } 2.6.6.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. 
The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally for the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Important If you use the REST API method discoveriscsi to discover the iscsi targets, you can use an FQDN or an IP address, but you must use the iscsi details from the discovered targets results to log in using the REST API method iscsilogin . See discoveriscsi in the REST API Guide for more information. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Important When using the REST API iscsilogin method to log in, you must use the iscsi details from the discovered targets results in the discoveriscsi method. See iscsilogin in the REST API Guide for more information. Click the + button to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 2.6.6.3. 
Configuring iSCSI Multipathing iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. Multiple network paths between the hosts and iSCSI storage prevent host downtime caused by network path failure. The Manager connects each host in the data center to each target, using the NICs or VLANs that are assigned to the logical networks in the iSCSI bond. You can create an iSCSI bond with multiple targets and logical networks for redundancy. Prerequisites One or more iSCSI targets One or more logical networks that meet the following requirements: Not defined as Required or VM Network Assigned to a host interface Assigned a static IP address in the same VLAN and subnet as the other logical networks in the iSCSI bond Note Multipath is not supported for Self-Hosted Engine deployments. Procedure Click Compute Data Centers . Click the data center name. This opens the details view. In the iSCSI Multipathing tab, click Add . In the Add iSCSI Bond window, enter a Name and a Description . Select a logical network from Logical Networks and a storage domain from Storage Targets . You must select all the paths to the same target. Click OK . The hosts in the data center are connected to the iSCSI targets through the logical networks in the iSCSI bond. 2.6.6.4. Migrating a Logical Network to an iSCSI Bond If you have a logical network that you created for iSCSI traffic and configured on top of an existing network bond , you can migrate it to an iSCSI bond on the same subnet without disruption or downtime. Procedure Modify the current logical network so that it is not Required : Click Compute Clusters . Click the cluster name. This opens the details view. In the Logical Networks tab, select the current logical network ( net-1 ) and click Manage Networks . Clear the Require check box and click OK . Create a new logical network that is not Required and not VM network : Click Add Network . This opens the New Logical Network window. In the General tab, enter the Name ( net-2 ) and clear the VM network check box. In the Cluster tab, clear the Require check box and click OK . Remove the current network bond and reassign the logical networks: Click Compute Hosts . Click the host name. This opens the details view. In the Network Interfaces tab, click Setup Host Networks . Drag net-1 to the right to unassign it. Drag the current bond to the right to remove it. Drag net-1 and net-2 to the left to assign them to physical interfaces. Click the pencil icon of net-2 . This opens the Edit Network window. In the IPV4 tab, select Static . Enter the IP and Netmask/Routing Prefix of the subnet and click OK . Create the iSCSI bond: Click Compute Data Centers . Click the data center name. This opens the details view. In the iSCSI Multipathing tab, click Add . In the Add iSCSI Bond window, enter a Name , select the networks, net-1 and net-2 , and click OK . Your data center has an iSCSI bond containing the old and new logical networks. 2.6.6.5. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. 
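Before creating an FCP storage domain, it can be useful to confirm from one of the hosts that the pre-existing LUNs are visible and that multipath devices have been created for them. The following commands are a sketch only; device names, sizes, and WWIDs will differ in your environment:
# cat /sys/class/fc_host/host*/port_name
# multipath -ll
# lsblk -o NAME,SIZE,TYPE,WWN
The first command prints the WWPNs of the host's HBAs, which your storage administrator can use for zoning and LUN masking; the other two show the multipath maps and block devices that the host currently sees.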
For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 2.6.6.6. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. 
When ready, it is automatically attached to the data center. 2.6.6.7. Increasing iSCSI or FCP Storage There are several ways to increase iSCSI or FCP storage size: Add an existing LUN to the current storage domain. Create a new storage domain with new LUNs and add it to an existing data center. See Adding iSCSI Storage . Expand the storage domain by resizing the underlying LUNs. For information about configuring or resizing FCP storage, see Using Fibre Channel Devices in Managing storage devices for Red Hat Enterprise Linux 8. The following procedure explains how to expand storage area network (SAN) storage by adding a new LUN to an existing storage domain. Prerequisites The storage domain's status must be UP . The LUN must be accessible to all the hosts whose status is UP , or else the operation will fail and the LUN will not be added to the domain. The hosts themselves, however, will not be affected. If a newly added host, or a host that is coming out of maintenance or a Non Operational state, cannot access the LUN, the host's state will be Non Operational . Increasing an Existing iSCSI or FCP Storage Domain Click Storage Domains and select an iSCSI or FCP domain. Click Manage Domain . Click Targets LUNs and click the Discover Targets expansion button. Enter the connection information for the storage server and click Discover to initiate the connection. Click LUNs Targets and select the check box of the newly available LUN. Click OK to add the LUN to the selected storage domain. This will increase the storage domain by the size of the added LUN. When expanding the storage domain by resizing the underlying LUNs, the LUNs must also be refreshed in the Administration Portal. Refreshing the LUN Size Click Storage Domains and select an iSCSI or FCP domain. Click Manage Domain . Click LUNs Targets . In the Additional Size column, click Add Additional_Storage_Size button of the LUN to refresh. Click OK to refresh the LUN to indicate the new storage size. 2.6.6.8. Reusing LUNs LUNs cannot be reused, as is, to create a storage domain or virtual disk. If you try to reuse the LUNs, the Administration Portal displays the following error message: Physical device initialization failed. Please check that the device is empty and accessible by the host. A self-hosted engine shows the following error during installation: [ ERROR ] Error creating Volume Group: Failed to initialize physical device: ("[u'/dev/mapper/000000000000000000000000000000000']",) [ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: ("[u'/dev/mapper/000000000000000000000000000000000']",) Before the LUN can be reused, the old partitioning table must be cleared. Procedure You must run this procedure on the correct LUN so that you do not inadvertently destroy data. Delete the partition mappings in <LUN_ID> : kpartx -dv /dev/mapper/<LUN_ID> Erase the fileystem or raid signatures in <LUN_ID> : wipefs -a /dev/mapper/<LUN_ID> Inform the operating system about the partition table changes on <LUN_ID> : partprobe 2.6.6.9. Removing stale LUNs When a storage domain is removed, stale LUN links can remain on the storage server. This can lead to slow multipath scans, cluttered log files, and LUN ID conflicts. Red Hat Virtualization does not manage the iSCSI servers and, therefore, cannot automatically remove LUNs when a storage domain is removed. The administrator can manually remove stale LUN links with the remove_stale_lun.yml Ansible role. 
This role removes stale LUN links from all hosts that belong to given data center. For more information about this role and its variables, see the Remove Stale LUN role in the oVirt Ansible collection . Note It is assumed that you are running remove_stale_lun.yml from the engine machine as the engine ssh key is already added on all the hosts. If the playbook is not running on the engine machine, a user's SSH key must be added to all hosts that belong to the data center, or the user must provide an appropriate inventory file. Procedure Click Storage Domains . Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detatch , then click OK . Click Remove . Click OK to remove the storage domain from the source environment. Remove the LUN from the storage server. Remove the stale LUNs from the host using Ansible: # ansible-playbook --extra-vars "lun=<LUN>" /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/remove_stale_lun/examples/remove_stale_lun.yml where LUN is the LUN removed from the storage server in the steps above. Note If you remove the stale LUN from the host using Ansible without first removing the LUN from the storage server, the stale LUN will reappear on the host the time VDSM performs an iSCSI rescan. 2.6.6.10. Creating an LVM filter An LVM filter is a capability that can be set in /etc/lvm/lvm.conf to accept devices into or reject devices from the list of volumes based on a regex query. For example, to ignore /dev/cdrom you can use filter=["r|^/dev/cdromUSD|"] , or add the following parameter to the lvm command: lvs --config 'devices{filter=["r|cdrom|"]}' . This provides a simple way to prevent a host from scanning and activating logical volumes that are not required directly by the host. In particular, the solution addresses logical volumes on shared storage managed by RHV, and logical volumes created by a guest in RHV raw volumes. This solution is needed because scanning and activating other logical volumes may cause data corruption, slow boot, or other issues. The solution is to configure an LVM filter on each host, which allows the LVM on a host to scan only the logical volumes that are required by the host. You can use the command vdsm-tool config-lvm-filter to analyze the current LVM configuration and decide if a filter needs to be configured. If the LVM filter has not yet been configured, the command generates an LVM filter option for the host, and adds the option to the LVM configuration. Scenario 1: An Unconfigured Host On a host yet to be configured, the command automatically configures the LVM once the user confirms the operation: Scenario 2: A Configured Host If the host is already configured, the command simply informs the user that the LVM filter is already configured: Scenario 3: Manual Configuration Required If the host configuration does not match the configuration required by VDSM, the LVM filter will need to be configured manually: 2.6.7. Preparing and Adding Red Hat Gluster Storage 2.6.7.1. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 2.6.7.2. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . 
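As a quick check before adding a Gluster volume as a storage domain, you can verify that a host can mount the volume with the native client. This is only a sketch; gluster1.example.com and data_vol are placeholders, and the glusterfs-fuse package must be installed on the host:
# mkdir -p /mnt/gluster-test
# mount -t glusterfs gluster1.example.com:/data_vol /mnt/gluster-test
# umount /mnt/gluster-test
If the mount fails, resolve the connectivity or volume configuration problem before adding the storage domain in the Administration Portal.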
For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 2.6.8. Importing Existing Storage Domains 2.6.8.1. Overview of Importing Existing Storage Domains Aside from adding new storage domains, which contain no data, you can import existing storage domains and access the data they contain. By importing storage domains, you can recover data in the event of a failure in the Manager database, and migrate data from one data center or environment to another. The following is an overview of importing each storage domain type: Data Importing an existing data storage domain allows you to access all of the virtual machines and templates that the data storage domain contains. After you import the storage domain, you must manually import virtual machines, floating disk images, and templates into the destination data center. The process for importing the virtual machines and templates that a data storage domain contains is similar to that for an export storage domain. However, because data storage domains contain all the virtual machines and templates in a given data center, importing data storage domains is recommended for data recovery or large-scale migration of virtual machines between data centers or environments. Important You can import existing data storage domains that were attached to data centers with the correct supported compatibility level. See Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions for more information. ISO Importing an existing ISO storage domain allows you to access all of the ISO files and virtual diskettes that the ISO storage domain contains. No additional action is required after importing the storage domain to access these resources; you can attach them to virtual machines as required. Export Importing an existing export storage domain allows you to access all of the virtual machine images and templates that the export storage domain contains. Because export domains are designed for exporting and importing virtual machine images and templates, importing export storage domains is recommended method of migrating small numbers of virtual machines and templates inside an environment or between environments. For information on exporting and importing virtual machines and templates to and from export storage domains, see Exporting and Importing Virtual Machines and Templates in the Virtual Machine Management Guide . Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disks, and templates can then be uploaded from the imported storage domain to the attached data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. 2.6.8.2. Importing storage domains Import a storage domain that was previously attached to a data center in the same environment or in a different environment. This procedure assumes the storage domain is no longer attached to any data center in any environment, to avoid data corruption. To import and attach an existing data storage domain to a data center, the target data center must be initialized. Procedure Click Storage Domains . 
Click Import Domain . Select the Data Center you want to import the storage domain to. Enter a Name for the storage domain. Select the Domain Function and Storage Type from the drop-down lists. Select a host from the Host drop-down list. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. Enter the details of the storage domain. Note The fields for specifying the details of the storage domain change depending on the values you select in the Domain Function and Storage Type lists. These fields are the same as those available for adding a new storage domain. Select the Activate Domain in Data Center check box to activate the storage domain after attaching it to the selected data center. Click OK . You can now import virtual machines and templates from the storage domain to the data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. Related information Importing Virtual Machines from a Data Domain Importing Templates from Imported Data Storage Domains 2.6.8.3. Migrating Storage Domains between Data Centers in the Same Environment Migrate a storage domain from one data center to another in the same Red Hat Virtualization environment to allow the destination data center to access the data contained in the storage domain. This procedure involves detaching the storage domain from one data center, and attaching it to a different data center. Warning Migrating a data storage domain to a data center that has a higher compatibility level than the original data center upgrades the storage domain's storage format version. If you want to move the storage domain back to the original data center for any reason, such as to migrate virtual machines to the new data center, be aware that the higher version prevents reattaching the data storage domain to the original data center. The Administration Portal prompts you to confirm that you want to update the storage domain format, for example, from V3 to V5 . It also warns that you will not be able to attach it back to an older data center with a lower DC level. To work around this issue, you can create a target data center that has the same compatibility version as the source data center. When you no longer need to maintain the lower compatibility version, you can increase the target data center's compatibility version. For details, see Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions . Procedure Shut down all virtual machines running on the required storage domain. Click Storage Domains . Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Attach . Select the destination data center and click OK . The storage domain is attached to the destination data center and is automatically activated. You can now import virtual machines and templates from the storage domain to the destination data center. 2.6.8.4. 
Migrating Storage Domains between Data Centers in Different Environments Migrate a storage domain from one Red Hat Virtualization environment to another to allow the destination environment to access the data contained in the storage domain. This procedure involves removing the storage domain from one Red Hat Virtualization environment, and importing it into a different environment. To import and attach an existing data storage domain to a Red Hat Virtualization data center, the storage domain's source data center must have the correct supported compatibility level. Warning Migrating a data storage domain to a data center that has a higher compatibility level than the original data center upgrades the storage domain's storage format version. If you want to move the storage domain back to the original data center for any reason, such as to migrate virtual machines to the new data center, be aware that the higher version prevents reattaching the data storage domain to the original data center. The Administration Portal prompts you to confirm that you want to update the storage domain format, for example, from V3 to V5 . It also warns that you will not be able to attach it back to an older data center with a lower DC level. To work around this issue, you can create a target data center that has the same compatibility version as the source data center. When you no longer need to maintain the lower compatibility version, you can increase the target data center's compatibility version. For details, see Supportability and constraints regarding importing Storage Domains and Virtual Machines from older RHV versions . Procedure Log in to the Administration Portal of the source environment. Shut down all virtual machines running on the required storage domain. Click Storage Domains . Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . In the Remove Storage(s) window, ensure the Format Domain, i.e. Storage Content will be lost! check box is not selected. This step preserves the data in the storage domain for later use. Click OK to remove the storage domain from the source environment. Log in to the Administration Portal of the destination environment. Click Storage Domains . Click Import Domain . Select the destination data center from the Data Center drop-down list. Enter a name for the storage domain. Select the Domain Function and Storage Type from the appropriate drop-down lists. Select a host from the Host drop-down list. Enter the details of the storage domain. Note The fields for specifying the details of the storage domain change depending on the value you select in the Storage Type drop-down list. These fields are the same as those available for adding a new storage domain. Select the Activate Domain in Data Center check box to automatically activate the storage domain when it is attached. Click OK . The storage domain is attached to the destination data center in the new Red Hat Virtualization environment and is automatically activated. You can now import virtual machines and templates from the imported storage domain to the destination data center. Warning Upon attaching a Storage Domain to the destination Data-Center, it may be upgraded to a newer Storage Domain format and may not re-attach to the source Data-Center. This breaks the use of a Data-Domain as a replacement for Export Domains. 2.6.8.5. 
Importing Templates from Imported Data Storage Domains Import a template from a data storage domain you have imported into your Red Hat Virtualization environment. This procedure assumes that the imported data storage domain has been attached to a data center and has been activated. Procedure Click Storage Domains . Click the imported storage domain's name. This opens the details view. Click the Template Import tab. Select one or more templates to import. Click Import . For each template in the Import Templates(s) window, ensure the correct target cluster is selected in the Cluster list. Map external virtual machine vNIC profiles to profiles that are present on the target cluster(s): Click vNic Profiles Mapping . Select the vNIC profile to use from the Target vNic Profile drop-down list. If multiple target clusters are selected in the Import Templates window, select each target cluster in the Target Cluster drop-down list and ensure the mappings are correct. Click OK . Click OK . The imported templates no longer appear in the list under the Template Import tab. 2.6.9. Storage Tasks 2.6.9.1. Uploading Images to a Data Storage Domain You can upload virtual disk images and ISO images to your data storage domain in the Administration Portal or with the REST API. Note To upload images with the REST API, see IMAGETRANSFERS and IMAGETRANSFER in the REST API Guide . QEMU-compatible virtual disks can be attached to virtual machines. Virtual disk types must be either QCOW2 or raw. Disks created from a QCOW2 virtual disk cannot be shareable, and the QCOW2 virtual disk file must not have a backing file. ISO images can be attached to virtual machines as CDROMs or used to boot virtual machines. Prerequisites The upload function uses HTML 5 APIs, which requires your environment to have the following: Certificate authority, imported into the web browser used to access the Administration Portal. To import the certificate authority, browse to https:// engine_address /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA and enable all the trust settings. Refer to the instructions to install the certificate authority in Firefox , Internet Explorer , or Google Chrome . Browser that supports HTML 5, such as Firefox 35, Internet Explorer 10, Chrome 13, or later. Procedure Click Storage Disks . Select Start from the Upload menu. Click Choose File and select the image to upload. Fill in the Disk Options fields. See Explanation of Settings in the New Virtual Disk Window for descriptions of the relevant fields. Click OK . A progress bar indicates the status of the upload. You can pause, cancel, or resume uploads from the Upload menu. Tip If the upload times out with the message, Reason: timeout due to transfer inactivity , increase the timeout value and restart the ovirt-engine service: # engine-config -s TransferImageClientInactivityTimeoutInSeconds=6000 # systemctl restart ovirt-engine 2.6.9.2. 
Uploading the VirtIO image files to a storage domain The virtio-win _version .iso image contains the following for Windows virtual machines to improve performance and usability: VirtIO drivers an installer for the guest agents an installer for the drivers To install and upload the most recent version of virtio-win _version .iso : Install the image files on the Manager machine: # dnf -y install virtio-win After you install it on the Manager machine, the image file is /usr/share/virtio-win/virtio-win _version .iso Upload the image file to a data storage domain that was not created locally during installation. For more information, see Uploading Images to a Data Storage Domain in the Administration Guide . Attach the image file to virtual machines. The virtual machines can now use the virtio drivers and agents. For information on attaching the image files to a virtual machine, see Installing the Guest Agents, Tools, and Drivers on Windows in the Virtual Machine Management Guide . 2.6.9.3. Uploading images to an ISO domain Note The ISO domain is a deprecated storage domain type. The ISO Uploader tool, ovirt-iso-uploader , is removed in Red Hat Virtualization 4.4. You should upload ISO images to the data domain with the Administration Portal or with the REST API. See Uploading Images to a Data Storage Domain for details. Although the ISO domain is deprecated, this information is provided in case you must use an ISO domain. To upload an ISO image to an ISO storage domain in order to make it available from within the Manager, follow these steps. Procedure Login as root to the host that belongs to the Data Center where your ISO storage domain resides. Get a directory tree of /rhv/data-center : # tree /rhev/data-center . |-- 80dfacc7-52dd-4d75-ab82-4f9b8423dc8b | |-- 76d1ecba-b61d-45a4-8eb5-89ab710a6275 /rhev/data-center/mnt/10.10.10.10:_rhevnfssd/76d1ecba-b61d-45a4-8eb5-89ab710a6275 | |-- b835cd1c-111c-468d-ba70-fec5346af227 /rhev/data-center/mnt/10.10.10.10:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227 | |-- mastersd 76d1ecba-b61d-45a4-8eb5-89ab710a6275 | |-- tasks mastersd/master/tasks | `-- vms mastersd/master/vms |-- hsm-tasks `-- mnt |-- 10.10.10.10:_rhevisosd | |-- b835cd1c-111c-468d-ba70-fec5346af227 | | |-- dom_md | | | |-- ids | | | |-- inbox | | | |-- leases | | | |-- metadata | | | `-- outbox | | `-- images | | `-- 11111111-1111-1111-1111-111111111111 | `-- lost+found [error opening dir] (output trimmed) Securely copy the image from the source location into the full path of 11111111-1111-1111-1111-111111111111 : # scp root@isosource:/isos/example.iso /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111 File permissions for the newly copied ISO image should be 36:36 (vdsm:kvm). If they are not, change user and group ownership of the ISO file to 36:36 (vdsm's user and group): # cd /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111 # chown 36.36 example.iso The ISO image should now be available in the ISO domain in the data center. 2.6.9.4. Moving Storage Domains to Maintenance Mode A storage domain must be in maintenance mode before it can be detached and removed. This is required to redesignate another data domain as the master data domain. Important You cannot move a storage domain into maintenance mode if a virtual machine has a lease on the storage domain. 
The virtual machine needs to be shut down, or the lease needs to be to removed or moved to a different storage domain first. See the Virtual Machine Management Guide for information about virtual machine leases. Expanding iSCSI domains by adding more LUNs can only be done when the domain is active. Procedure Shut down all the virtual machines running on the storage domain. Click Storage Domains . Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance . Note The Ignore OVF update failure check box allows the storage domain to go into maintenance mode even if the OVF update fails. Click OK . The storage domain is deactivated and has an Inactive status in the results list. You can now edit, detach, remove, or reactivate the inactive storage domains from the data center. Note You can also activate, detach, and place domains into maintenance mode using the Storage tab in the details view of the data center it is associated with. 2.6.9.5. Editing Storage Domains You can edit storage domain parameters through the Administration Portal. Depending on the state of the storage domain, either active or inactive, different fields are available for editing. Fields such as Data Center , Domain Function , Storage Type , and Format cannot be changed. Active : When the storage domain is in an active state, the Name , Description , Comment , Warning Low Space Indicator (%) , Critical Space Action Blocker (GB) , Wipe After Delete , and Discard After Delete fields can be edited. The Name field can only be edited while the storage domain is active. All other fields can also be edited while the storage domain is inactive. Inactive : When the storage domain is in maintenance mode or unattached, thus in an inactive state, you can edit all fields except Name , Data Center , Domain Function , Storage Type , and Format . The storage domain must be inactive to edit storage connections, mount options, and other advanced parameters. This is only supported for NFS, POSIX, and Local storage types. Note iSCSI storage connections cannot be edited via the Administration Portal, but can be edited via the REST API. See Updating Storage Connections in the REST API Guide . Editing an Active Storage Domain* Click Storage Domains and select a storage domain. Click Manage Domain . Edit the available fields as required. Click OK . Editing an Inactive Storage Domain Click Storage Domains . If the storage domain is active, move it to maintenance mode: Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance . Click OK . Click Manage Domain . Edit the storage path and other details as required. The new connection details must be of the same storage type as the original connection. Click OK . Activate the storage domain: Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Activate . 2.6.9.6. Updating OVFs By default, OVFs are updated every 60 minutes. However, if you have imported an important virtual machine or made a critical update, you can update OVFs manually. Procedure Click Storage Domains . Select the storage domain and click More Actions ( ), then click Update OVFs . The OVFs are updated and a message appears in Events . 2.6.9.7. Activating Storage Domains from Maintenance Mode If you have been making changes to a data center's storage, you have to put storage domains into maintenance mode. Activate a storage domain to resume using it. Click Storage Domains . 
Click an inactive storage domain's name. This opens the details view. Click the Data Centers tab. Click Activate . Important If you attempt to activate the ISO domain before activating the data domain, an error message displays and the domain is not activated. 2.6.9.8. Detaching a Storage Domain from a Data Center Detach a storage domain from one data center to migrate it to another data center. Procedure Click Storage Domains . Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance . Click OK to initiate maintenance mode. Click Detach . Click OK to detach the storage domain. The storage domain has been detached from the data center, ready to be attached to another data center. 2.6.9.9. Attaching a Storage Domain to a Data Center Attach a storage domain to a data center. Procedure Click Storage Domains . Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Attach . Select the appropriate data center. Click OK . The storage domain is attached to the data center and is automatically activated. 2.6.9.10. Removing a Storage Domain You have a storage domain in your data center that you want to remove from the virtualized environment. Procedure Click Storage Domains . Move the storage domain to maintenance mode and detach it: Click the storage domain's name. This opens the details view. Click the Data Center tab. Click Maintenance , then click OK . Click Detach , then click OK . Click Remove . Optionally select the Format Domain, i.e. Storage Content will be lost! check box to erase the content of the domain. Click OK . The storage domain is permanently removed from the environment. 2.6.9.11. Destroying a Storage Domain A storage domain encountering errors may not be able to be removed through the normal procedure. Destroying a storage domain forcibly removes the storage domain from the virtualized environment. Procedure Click Storage Domains . Select the storage domain and click More Actions ( ), then click Destroy . Select the Approve operation check box. Click OK . 2.6.9.12. Creating a Disk Profile Disk profiles define the maximum level of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are created based on storage profiles defined under data centers, and must be manually assigned to individual virtual disks for the profile to take effect. This procedure assumes you have already defined one or more storage quality of service entries under the data center to which the storage domain belongs. Procedure Click Storage Domains . Click the data storage domain's name. This opens the details view. Click the Disk Profiles tab. Click New . Enter a Name and a Description for the disk profile. Select the quality of service to apply to the disk profile from the QoS list. Click OK . 2.6.9.13. Removing a Disk Profile Remove an existing disk profile from your Red Hat Virtualization environment. Procedure Click Storage Domains . Click the data storage domain's name. This opens the details view. Click the Disk Profiles tab. Select the disk profile to remove. Click Remove . Click OK . If the disk profile was assigned to any virtual disks, the disk profile is removed from those virtual disks. 2.6.9.14. Viewing the Health Status of a Storage Domain Storage domains have an external health status in addition to their regular Status . 
The external health status is reported by plug-ins or external systems, or set by an administrator, and appears to the left of the storage domain's Name as one of the following icons: OK : No icon Info : Warning : Error : Failure : To view further details about the storage domain's health status, click the storage domain's name. This opens the details view, and click the Events tab. The storage domain's health status can also be viewed using the REST API. A GET request on a storage domain will include the external_status element, which contains the health status. You can set a storage domain's health status in the REST API via the events collection. For more information, see Adding Events in the REST API Guide . 2.6.9.15. Setting Discard After Delete for a Storage Domain When the Discard After Delete check box is selected, a blkdiscard command is called on a logical volume when it is removed and the underlying storage is notified that the blocks are free. The storage array can use the freed space and allocate it when requested. Discard After Delete only works on block storage. The flag is not available on the Red Hat Virtualization Manager for file storage, for example NFS. Restrictions: Discard After Delete is only available on block storage domains, such as iSCSI or Fibre Channel. The underlying storage must support Discard . Discard After Delete can be enabled both when creating a block storage domain or when editing a block storage domain. See Preparing and Adding Block Storage and Editing Storage Domains . 2.6.9.16. Enabling 4K support on environments with more than 250 hosts By default, GlusterFS domains and local storage domains support 4K block size on Red Hat Virtualization environments with up to 250 hosts. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. Note GlusterFS Storage is deprecated, and will no longer be supported in future releases. The lockspace area that Sanlock allocates is 1 MB when the maximum number of hosts is the default 250. When you increase the maximum number of hosts when using 4K storage, the lockspace area is larger. For example, when using 2000 hosts, the lockspace area could be as large as 8 MB. You can enable 4K block support on environments with more than 250 hosts by setting the engine configuration parameter MaxNumberOfHostsInStoragePool . Procedure On the Manager machine enable the required maximum number of hosts: # engine-config -s MaxNumberOfHostsInStoragePool= NUMBER_OF_HOSTS Restart the JBoss Application Server: # service jboss-as restart For example, if you have a cluster with 300 hosts, enter: # engine-config -s MaxNumberOfHostsInStoragePool=300 # service jboss-as restart Verification View the value of the MaxNumberOfHostsInStoragePool parameter on the Manager: # engine-config --get=MaxNumberOfHostsInStoragePool MaxNumberOfHostsInStoragePool: 250 version: general 2.6.9.17. Disabling 4K support By default, GlusterFS domains and local storage domains support 4K block size. 4K block size can provide better performance, especially when using large files, and it is also necessary when you use tools that require 4K compatibility, such as VDO. Note GlusterFS Storage is deprecated, and will no longer be supported in future releases. You can disable 4K block support. Procedure Ensure that 4K block support is enabled. USD vdsm-client Host getCapabilities ... { "GLUSTERFS" : [ 0, 512, 4096, ] ... 
Edit /etc/vdsm/vdsm.conf.d/gluster.conf and set enable_4k_storage to false . For example: USD vi /etc/vdsm/vdsm.conf.d/gluster.conf [gluster] # Use to disable 4k support # if needed. enable_4k_storage = false 2.6.9.18. Monitoring available space in a storage domain You can monitor available space in a storage domain and create an alert to warn you when a storage domain is nearing capacity. You can also define a critical threshold at which point the domain shuts down. With Virtual Data Optimizer (VDO) and thin pool support, you might see more available space than is physically available. For VDO this behavior is expected, but the Manager cannot predict how much data you can actually write. The Warning Low Confirmed Space Indicator parameter notifies you when the domain is nearing physical space capacity and shows how much confirmed space remains. Confirmed space refers to the actual space available to write data. Procedure In the Administration Portal, click Storage Storage Domain and click the name of a storage domain. Click Manage Domain . The Manage Domains dialog box opens. Expand Advanced Parameters . For Warning Low Space Indicator (%) enter a percentage value. When the available space in the storage domain reaches this value, the Manager alerts you that the domain is nearing capacity. For Critical Space Action Blocker (GB) , enter a value in gigabytes. When the available space in the storage domain reaches this value, the Manager shuts down. For Warning Low Confirmed Space Indicator (%) enter a percentage value. When the available space in the storage domain reaches this value, the Manager alerts you that the actual space available to write data is nearing capacity.
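You can also read the same capacity information from the command line through the REST API: a GET request on a storage domain returns its available and used space in bytes. The following is a sketch only; the engine FQDN, the credentials, and the storage domain ID are placeholders for values from your own environment:
# curl -s -k -u admin@internal:PASSWORD -H "Accept: application/xml" https://engine.example.com/ovirt-engine/api/storagedomains/<storage_domain_id>
Check the available and used elements in the response against the thresholds you configured above.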
[ "dnf install nfs-utils -y", "cat /proc/fs/nfsd/versions", "systemctl enable nfs-server systemctl enable rpcbind", "groupadd kvm -g 36", "useradd vdsm -u 36 -g kvm", "mkdir /storage chmod 0755 /storage chown 36:36 /storage/", "vi /etc/exports cat /etc/exports /storage *(rw)", "systemctl restart rpcbind systemctl restart nfs-server", "exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>", "mkdir -p /data/images", "chown 36:36 /data /data/images chmod 0755 /data /data/images", "mkdir /data lvcreate -L USDSIZE rhvh -n data mkfs.ext4 /dev/mapper/rhvh-data echo \"/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2\" >> /etc/fstab mount /data", "mount -a", "chown 36:36 /data /rhvh-data chmod 0755 /data /rhvh-data", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }", "Physical device initialization failed. Please check that the device is empty and accessible by the host.", "[ ERROR ] Error creating Volume Group: Failed to initialize physical device: (\"[u'/dev/mapper/000000000000000000000000000000000']\",) [ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: (\"[u'/dev/mapper/000000000000000000000000000000000']\",)", "kpartx -dv /dev/mapper/<LUN_ID>", "wipefs -a /dev/mapper/<LUN_ID>", "partprobe", "ansible-playbook --extra-vars \"lun=<LUN>\" /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/remove_stale_lun/examples/remove_stale_lun.yml", "vdsm-tool config-lvm-filter", "Analyzing host Found these mounted logical volumes on this host:", "logical volume: /dev/mapper/vg0-lv_home mountpoint: /home devices: /dev/vda2", "logical volume: /dev/mapper/vg0-lv_root mountpoint: / devices: /dev/vda2", "logical volume: /dev/mapper/vg0-lv_swap mountpoint: [SWAP] devices: /dev/vda2", "This is the recommended LVM filter for this host:", "filter = [ \"a|^/dev/vda2USD|\", \"r|.*|\" ]", "This filter will allow LVM to access the local devices used by the hypervisor, but not shared storage owned by VDSM. If you add a new device to the volume group, you will need to edit the filter manually.", "Configure LVM filter? [yes,NO] ? [NO/yes] yes Configuration completed successfully!", "Please reboot to verify the LVM configuration.", "vdsm-tool config-lvm-filter", "Analyzing host LVM filter is already configured for Vdsm", "vdsm-tool config-lvm-filter", "Analyzing host Found these mounted logical volumes on this host:", "logical volume: /dev/mapper/vg0-lv_home mountpoint: /home devices: /dev/vda2", "logical volume: /dev/mapper/vg0-lv_root mountpoint: / devices: /dev/vda2", "logical volume: /dev/mapper/vg0-lv_swap mountpoint: [SWAP] devices: /dev/vda2", "This is the recommended LVM filter for this host:", "filter = [ \"a|^/dev/vda2USD|\", \"r|.*|\" ]", "This filter will allow LVM to access the local devices used by the hypervisor, but not shared storage owned by VDSM. 
If you add a new device to the volume group, you will need to edit the filter manually.", "This is the current LVM filter:", "filter = [ \"a|^/dev/vda2USD|\", \"a|^/dev/vdb1USD|\", \"r|.*|\" ]", "WARNING: The current LVM filter does not match the recommended filter, Vdsm cannot configure the filter automatically.", "Please edit /etc/lvm/lvm.conf and set the 'filter' option in the 'devices' section to the recommended value.", "It is recommended to reboot after changing LVM filter.", "engine-config -s TransferImageClientInactivityTimeoutInSeconds=6000 systemctl restart ovirt-engine", "dnf -y install virtio-win", "tree /rhev/data-center . |-- 80dfacc7-52dd-4d75-ab82-4f9b8423dc8b | |-- 76d1ecba-b61d-45a4-8eb5-89ab710a6275 /rhev/data-center/mnt/10.10.10.10:_rhevnfssd/76d1ecba-b61d-45a4-8eb5-89ab710a6275 | |-- b835cd1c-111c-468d-ba70-fec5346af227 /rhev/data-center/mnt/10.10.10.10:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227 | |-- mastersd 76d1ecba-b61d-45a4-8eb5-89ab710a6275 | |-- tasks mastersd/master/tasks | `-- vms mastersd/master/vms |-- hsm-tasks `-- mnt |-- 10.10.10.10:_rhevisosd | |-- b835cd1c-111c-468d-ba70-fec5346af227 | | |-- dom_md | | | |-- ids | | | |-- inbox | | | |-- leases | | | |-- metadata | | | `-- outbox | | `-- images | | `-- 11111111-1111-1111-1111-111111111111 | `-- lost+found [error opening dir] (output trimmed)", "scp root@isosource:/isos/example.iso /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111", "cd /rhev/data-center/mnt/10.96.4.50:_rhevisosd/b835cd1c-111c-468d-ba70-fec5346af227/images/11111111-1111-1111-1111-111111111111 chown 36.36 example.iso", "engine-config -s MaxNumberOfHostsInStoragePool= NUMBER_OF_HOSTS", "service jboss-as restart", "engine-config -s MaxNumberOfHostsInStoragePool=300 service jboss-as restart", "engine-config --get=MaxNumberOfHostsInStoragePool MaxNumberOfHostsInStoragePool: 250 version: general", "vdsm-client Host getCapabilities ... { \"GLUSTERFS\" : [ 0, 512, 4096, ] ...", "vi /etc/vdsm/vdsm.conf.d/gluster.conf Use to disable 4k support if needed. enable_4k_storage = false" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-storage
Chapter 4. Configuring persistent storage
Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports Amazon Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Container Platform version 4.10 and later use gp3 storage and the AWS EBS CSI driver . Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Block in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.1.4. Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. 
The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type. For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator . 4.1.5. Encrypting container persistent volumes on AWS with a KMS key Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS. Prerequisites Underlying infrastructure must contain storage. You must create a customer KMS key on AWS. Procedure Create a storage class: USD cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: "true" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF 1 Specifies the name of the storage class. 2 File system that is created on provisioned volumes. 3 Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true , then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation. Create a persistent volume claim (PVC) with the storage class specifying the KMS key: USD cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF Create workload containers to consume the PVC: USD cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. 
Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , PremiumV2_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Important The skuname PremiumV2_LRS is not supported in all regions, and in some supported regions, not all of the availability zones are supported. For more information, see Azure doc . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.2.4. Machine sets that deploy machines with ultra disks using PVCs You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks as data disks 4.2.4.1. 
Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. Procedure Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 These lines enable the use of ultra disks. Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Create a storage class that contains the following YAML definition: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: "2000" 2 diskMbpsReadWrite: "320" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5 1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value. 2 Specify the number of IOPS for the storage class. 3 Specify the throughput in MBps for the storage class. 4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com . For earlier versions of AKS, use kubernetes.io/azure-disk . 5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3 1 Specify the name of the PVC. This procedure uses ultra-disk for this value. 2 This PVC references the ultra-disk-sc storage class. 3 Specify the size for the storage class. The minimum value is 4Gi . Create a pod that contains the following YAML definition: apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - "sleep" - "infinity" volumeMounts: - mountPath: "/mnt/azure" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2 1 Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value. 2 This pod references the ultra-disk PVC. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. 
Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered. For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, describe the pod by running the following command: USD oc -n <stuck_pod_namespace> describe pod <stuck_pod_name> 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Important OpenShift Container Platform 4.13 and later provides automatic migration for the Azure File in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. 
Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. 
Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.5. Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. Important Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. 
Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume Important FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plugin. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. 
Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo .
4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, flags derived from the volume definition, such as the kubernetes.io/fsType and kubernetes.io/readwrite values shown in the JSON input example above, are also passed to the executable. Note Secrets are passed only to mount or unmount call-outs.
4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persistent Disk in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk
4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages.
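A minimal sketch of such a storage class, assuming the GCE Persistent Disk CSI driver is in use, might look like the following; the class name and disk type are illustrative assumptions rather than defaults:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-example 1
provisioner: pd.csi.storage.gke.io 2
parameters:
  type: pd-ssd 3
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
1 An illustrative class name.
2 The provisioner for the GCE Persistent Disk CSI driver.
3 An assumed disk type for this sketch; pd-standard is another common choice.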
By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.8.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.8.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.8.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. 
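As an illustration only, a claim that binds statically to the iscsi-pv volume defined in the provisioning example above might look like the following sketch; the claim name is an assumption:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-claim 1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" 2
  volumeName: iscsi-pv 3
1 An illustrative claim name.
2 An empty storage class name avoids dynamic provisioning so that the claim binds to an existing persistent volume.
3 The iscsi-pv persistent volume from the provisioning example above.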
This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. Each iSCSI LUN must be accessible by all nodes in the cluster. 4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.8.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.8.5. iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.9. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Mounting NFS shares 4.9.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 
3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plugin. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.9.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.9.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.9.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. 
In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs rather than user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admission and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
4.9.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage rather than user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification.
4.9.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only.
You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.9.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ). NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.9.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once claim to a PVC is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original. For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.9.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS. See this Red Hat Solution . Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.10. Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. 
As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . 4.11. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important For new installations, OpenShift Container Platform 4.13 and later provides automatic migration for the vSphere in-tree volume plugin to its equivalent CSI driver. Updating to OpenShift Container Platform 4.15 and later also provides automatic migration. For more information about updating and migration, see CSI automatic migration . CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. Additional resources VMware vSphere 4.11.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.11.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. 
Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume.
4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml
4.11.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes, you must create the virtual machine disks for reference by the persistent volume framework. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use the following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-vdiskmanager : USD vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastores. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the previous step.
Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.11.3.1. Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs. 4.12. Persistent storage using local storage 4.12.1. Local storage overview You can use any of the following solutions to provision local storage: HostPath Provisioner (HPP) Local Storage Operator (LSO) Logical Volume Manager (LVM) Storage Warning These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms. 4.12.1.1. Overview of HostPath Provisioner functionality You can perform the following actions using HostPath Provisioner (HPP): Map the host filesystem paths to storage classes for provisioning local storage. Statically create storage classes to configure filesystem paths on a node for storage consumption. Statically provision Persistent Volumes (PVs) based on the storage class. Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology. Note HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes. 4.12.1.2. Overview of Local Storage Operator functionality You can perform the following actions using Local Storage Operator (LSO): Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration. Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR). Create workloads and PVCs while being aware of the underlying storage topology. Note LSO is developed and delivered by Red Hat. 4.12.1.3. Overview of LVM Storage functionality You can perform the following actions using Logical Volume Manager (LVM) Storage: Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes. Create workloads and request storage by using PVCs without considering the node topology. LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs. Note LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm". 4.12.1.4. 
Comparison of LVM Storage, LSO, and HPP The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage. 4.12.1.4.1. Comparison of the support for storage types and filesystems The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage: Table 4.1. Comparison of the support for storage types and filesystems Functionality LVM Storage LSO HPP Support for block storage Yes Yes No Support for file storage Yes Yes Yes Support for object storage [1] No No No Available filesystems ext4 , xfs ext4 , xfs Any mounted system available on the node is supported. None of the solutions (LVM Storage, LSO, and HPP) provide support for object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as MultiClusterGateway from the Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for the S3 object storage solutions. 4.12.1.4.2. Comparison of the support for core functionalities The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage: Table 4.2. Comparison of the support for core functionalities Functionality LVM Storage LSO HPP Support for automatic file system formatting Yes Yes N/A Support for dynamic provisioning Yes No No Support for using software Redundant Array of Independent Disks (RAID) arrays Yes Supported on 4.15 and later. Yes Yes Support for transparent disk encryption Yes Supported on 4.16 and later. Yes Yes Support for volume based disk encryption No No No Support for disconnected installation Yes Yes Yes Support for PVC expansion Yes No No Support for volume snapshots and volume clones Yes No No Support for thin provisioning Yes Devices are thin-provisioned by default. Yes You can configure the devices to point to the thin-provisioned volumes Yes You can configure a path to point to the thin-provisioned volumes. Support for automatic disk discovery and setup Yes Automatic disk discovery is available during installation and runtime. You can also dynamically add the disks to the LVMCluster custom resource (CR) to increase the storage capacity of the existing storage classes. Technology Preview Automatic disk discovery is available during installation. No 4.12.1.4.3. Comparison of performance and isolation capabilities The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage. Table 4.3. Comparison of performance and isolation capabilities Functionality LVM Storage LSO HPP Performance I/O speed is shared for all workloads that use the same storage class. Block storage allows direct I/O operations. Thin provisioning can affect the performance. I/O depends on the LSO configuration. Block storage allows direct I/O operations. I/O speed is shared for all workloads that use the same storage class. The restrictions imposed by the underlying filesystem can affect the I/O speed. Isolation boundary [1] LVM Logical Volume (LV) It provides higher level of isolation compared to HPP. LVM Logical Volume (LV) It provides higher level of isolation compared to HPP Filesystem path It provides lower level of isolation compared to LSO and LVM Storage. 
Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources. 4.12.1.4.4. Comparison of the support for additional functionalities The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage: Table 4.4. Comparison of the support for additional functionalities Functionality LVM Storage LSO HPP Support for generic ephemeral volumes Yes No No Support for CSI inline ephemeral volumes No No No Support for storage topology Yes Supports CSI node topology Yes LSO provides partial support for storage topology through node tolerations. No Support for ReadWriteMany (RWX) access mode [1] No No No All of the solutions (LVM Storage, LSO, and HPP) have the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node. 4.12.2. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.12.2.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate namespace openshift-local-storage openshift.io/node-selector='' Optional: Allow local storage to run on the management pool of CPUs in single-node deployment. Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning. To allow Local Storage Operator to run on the management CPU pool, run following commands: USD oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . 
Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.12.2.2. Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Filesystem 5 fsType: xfs 6 devicePaths: 7 - /path/to/device 8 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 
3 The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 4 This setting defines whether or not to call wipefs , which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. The default is "false" ( wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times or when using OpenShift Data Foundation (ODF), where data can remain on the disks planned to be consumed as object storage devices (OSDs). 5 The volume mode, either Filesystem or Block , that defines the type of local volumes. Note A raw block volume ( volumeMode: Block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. 6 The file system that is created when the local volume is mounted for the first time. 7 The path containing a list of local storage devices to choose from. 8 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "local-sc" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 This setting defines whether or not to call wipefs , which removes partition table signatures (magic strings) making the disk ready to use for Local Storage Operator (LSO) provisioning. No other data besides signatures is erased. The default is "false" ( wipefs is not invoked). Setting forceWipeDevicesAndDestroyAllData to "true" can be useful in scenarios where data can remain on disks that need to be re-used. In these scenarios, setting this field to true eliminates the need for administrators to erase the disks manually. 
Such cases can include single-node OpenShift (SNO) cluster environments where a node can be redeployed multiple times or when using OpenShift Data Foundation (ODF), where data can remain on the disks planned to be consumed as object storage devices (OSDs). 5 The volume mode, either Filesystem or Block , that defines the type of local volumes. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with the actual filepath to your local disk, preferably the stable by-id path, such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk cannot be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Create the local volume resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.12.2.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs.
example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode . Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h 4.12.2.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.12.2.5. Attach the local claim After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: # ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3 # ... 1 The name of the volume to mount. 
2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.12.2.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment. Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: Click Operators Installed Operators . In the openshift-local-storage namespace, click Local Storage . Click the Local Volume Discovery tab. Click Create Local Volume Discovery and then select either Form view or YAML view . Configure the LocalVolumeDiscovery object parameters. Click Create . The Local Storage Operator creates a local volume discovery instance named auto-discover-devices . To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. 
Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.12.2.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. 
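The prerequisites above assume that a taint is already applied to the nodes that provide local storage. As a minimal sketch only, using the example taint key and value ( localstorage ) that appear in the procedure that follows, you could apply such a taint with a command similar to the one below; the node name, key, value, and effect are placeholders to adapt to your environment:
oc adm taint nodes <node_name> localstorage=localstorage:NoSchedule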
Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "local-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 3 Specify the value of the taint on the node, localstorage in this example. 4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.12.2.8. Local Storage Operator Metrics OpenShift Container Platform provides the following metrics for the Local Storage Operator: lso_discovery_disk_count : total number of discovered devices on each node lso_lvset_provisioned_PV_count : total number of PVs created by LocalVolumeSet objects lso_lvset_unmatched_disk_count : total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria lso_lvset_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolumeSet object criteria lso_lv_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolume object criteria lso_lv_provisioned_PV_count : total number of provisioned PVs for LocalVolume To use these metrics, be sure to: Enable support for monitoring when installing the Local Storage Operator. When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace. For more information about metrics, see Accessing metrics as an administrator . 4.12.2.9. Deleting the Local Storage Operator resources 4.12.2.9.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created.
USD oc delete pv <pv-name> Delete directory and included symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. USD oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. 4.12.2.9.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.12.3. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.12.3.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.12.3.2. 
Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods. 2 Used to bind persistent volume claim (PVC) requests to the PV. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. To avoid corrupting your host system, do not mount to the container root, / , or any path that is the same in the host and the container. You can safely mount the host by using /host Create the PV from the file: USD oc create -f pv.yaml Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.12.3.3. Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.12.4. Persistent storage using Logical Volume Manager Storage Logical Volume Manager (LVM) Storage uses LVM2 through the TopoLVM CSI driver to dynamically provision local storage on a cluster with limited resources. You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage. 4.12.4.1. Logical Volume Manager Storage installation You can install Logical Volume Manager (LVM) Storage on an OpenShift Container Platform cluster and configure it to dynamically provision storage for your workloads. You can install LVM Storage by using the OpenShift Container Platform CLI ( oc ), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM). Warning When using LVM Storage on multi-node clusters, LVM Storage only supports provisioning local storage. LVM Storage does not support storage data replication mechanisms across nodes. 
You must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure. 4.12.4.1.1. Prerequisites to install LVM Storage The prerequisites to install LVM Storage are as follows: Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM. Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention. Note You cannot wipe the disks that are in use. If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the "Installing LVM Storage using RHACM" section. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online 4.12.4.1.2. Installing LVM Storage by using the CLI As a cluster administrator, you can install LVM Storage by using the OpenShift CLI. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions. Procedure Create a YAML file with the configuration for creating a namespace: Example YAML configuration for creating a namespace apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage Create the namespace by running the following command: USD oc create -f <file_name> Create an OperatorGroup CR YAML file: Example OperatorGroup CR apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage Create the OperatorGroup CR by running the following command: USD oc create -f <file_name> Create a Subscription CR YAML file: Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f <file_name> Verification To verify that LVM Storage is installed, run the following command: USD oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase 4.13.0-202301261535 Succeeded 4.12.4.1.3. Installing LVM Storage by using the web console You can install LVM Storage by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster. You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions. Procedure Log in to the OpenShift Container Platform web console. Click Operators OperatorHub . Click LVM Storage on the OperatorHub page. Set the following options on the Operator Installation page: Update Channel as stable-4.17 . 
Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If the openshift-storage namespace does not exist, it is created during the operator installation. Update approval as Automatic or Manual . Note If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Click Install . Verification steps Verify that LVM Storage shows a green tick, indicating successful installation. 4.12.4.1.4. Installing LVM Storage in a disconnected environment You can install LVM Storage on OpenShift Container Platform in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section. Prerequisites You read the "About disconnected installation mirroring" section. You have access to the OpenShift Container Platform image repository. You created a mirror registry. Procedure Follow the steps in the "Creating the image set configuration" procedure. To create an ImageSetConfiguration custom resource (CR) for LVM Storage, you can use the following example ImageSetConfiguration CR configuration: Example ImageSetConfiguration CR for LVM Storage kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.17 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {} 1 Set the maximum size (in GiB) of each file within the image set. 2 Specify the location in which you want to save the image set. This location can be a registry or a local directory. You must configure the storageConfig field unless you are using the Technology Preview OCI feature. 3 Specify the storage URL for the image stream when using a registry. For more information, see Why use imagestreams . 4 Specify the channel from which you want to retrieve the OpenShift Container Platform images. 5 Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see About the OpenShift Update Service . 6 Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images. 7 Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved. 8 Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: USD oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 9 Specify any additional images to include in the image set. Follow the procedure in the "Mirroring an image set to a mirror registry" section. Follow the procedure in the "Configuring image registry repository mirroring" section. 
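For orientation only: the mirroring step referenced above is typically run with the oc-mirror plugin, passing the image set configuration through its --config flag. The file name imageset-config.yaml and the <mirror_registry> placeholder in the following sketch are assumptions; follow the linked "Mirroring an image set to a mirror registry" procedure for the authoritative steps:
oc mirror --config=imageset-config.yaml docker://<mirror_registry>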
Additional resources About disconnected installation mirroring Creating a mirror registry with mirror registry for Red Hat OpenShift Mirroring the OpenShift Container Platform image repository Creating the image set configuration Mirroring an image set to a mirror registry Configuring image registry repository mirroring Why use imagestreams 4.12.4.1.5. Installing LVM Storage by using RHACM To install LVM Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage. Note The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR. Prerequisites You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions. You have dedicated disks that LVM Storage can use on each cluster. The cluster must be managed by RHACM. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Create a namespace. USD oc create ns <namespace> Create a Policy CR YAML file: Example Policy CR to install and configure LVM Storage apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low 1 Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage. 2 Namespace configuration. 3 The OperatorGroup CR configuration. 4 The Subscription CR configuration. 
Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR: Namespace OperatorGroup Subscription Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource 4.12.4.2. About the LVMCluster custom resource You can configure the LVMCluster CR to perform the following actions: Create LVM volume groups that you can use to provision persistent volume claims (PVCs). Configure a list of devices that you want to add to the LVM volume groups. Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group. Force wipe the selected devices. After you have installed LVM Storage, you must create an LVMCluster custom resource (CR). Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: "true" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 forceWipeDevicesAndDestroyAllData: true thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10 chunkSize: 128Ki 5 chunkSizeCalculationPolicy: Static 6 1 2 3 4 5 6 Optional field Explanation of fields in the LVMCluster CR The LVMCluster CR fields are described in the following table: Table 4.5. LVMCluster CR fields Field Type Description spec.storage.deviceClasses array Contains the configuration to assign the local storage devices to the LVM volume groups. LVM Storage creates a storage class and volume snapshot class for each device class that you create. deviceClasses.name string Specify a name for the LVM volume group (VG). You can also configure this field to reuse a volume group that you created in the installation. For more information, see "Reusing a volume group from the LVM Storage installation". deviceClasses.fstype string Set this field to ext4 or xfs . By default, this field is set to xfs . deviceClasses.default boolean Set this field to true to indicate that a device class is the default. Otherwise, you can set it to false . You can only configure a single default device class. deviceClasses.nodeSelector object Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster. nodeSelector.nodeSelectorTerms array Configure the requirements that are used to select the node. deviceClasses.deviceSelector object Contains the configuration to perform the following actions: Specify the paths to the devices that you want to add to the LVM volume group. Force wipe the devices that are added to the LVM volume group. For more information, see "About adding devices to a volume group". deviceSelector.paths array Specify the device paths. 
If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. deviceSelector.optionalPaths array Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. deviceSelector. forceWipeDevicesAndDestroyAllData boolean LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. To force wipe the selected devices, set this field to true . By default, this field is set to false . Warning If this field is set to true , LVM Storage wipes all data on the devices. Use this feature with caution. Wiping the device can lead to inconsistencies in data integrity if any of the following conditions are met: The device is being used as swap space. The device is part of a RAID array. The device is mounted. If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk. deviceClasses.thinPoolConfig object Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. Using thick-provisioned storage includes the following limitations: No copy-on-write support for volume cloning. No support for snapshot class. No support for over-provisioning. As a result, the provisioned capacity of PersistentVolumeClaims (PVCs) is immediately reduced from the volume group. No support for thin metrics. Thick-provisioned devices only support volume group metrics. thinPoolConfig.name string Specify a name for the thin pool. thinPoolConfig.sizePercent integer Specify the percentage of space in the LVM volume group for creating the thin pool. By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90. thinPoolConfig.overprovisionRatio integer Specify a factor by which you can provision additional storage based on the available storage in the thin pool. For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool. To disable over-provisioning, set this field to 1. thinPoolConfig.chunkSize integer Specifies the statically calculated chunk size for the thin pool. This field is only used when the ChunkSizeCalculationPolicy field is set to Static . The value for this field must be configured in the range of 64 KiB to 1 GiB because of the underlying limitations of lvm2 . If you do not configure this field and the ChunkSizeCalculationPolicy field is set to Static , the default chunk size is set to 128 KiB. For more information, see "Overview of chunk size". thinPoolConfig.chunkSizeCalculationPolicy string Specifies the policy to calculate the chunk size for the underlying volume group. You can set this field to either Static or Host . By default, this field is set to Static . If this field is set to Static , the chunk size is set to the value of the chunkSize field. If the chunkSize field is not configured, chunk size is set to 128 KiB. If this field is set to Host , the chunk size is calculated based on the configuration in the lvm.conf file. For more information, see "Limitations to configure the size of the devices used in LVM Storage". 
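As a rough worked example of how the thin pool fields interact (the numbers are illustrative and reuse the values from the example LVMCluster CR shown earlier): on a 100 GiB volume group, sizePercent: 90 creates a thin pool of about 90 GiB, and overprovisionRatio: 10 then allows up to approximately 900 GiB of thin-provisioned persistent volume claim capacity to be requested against that pool.
thinPoolConfig:
  name: thin-pool-1
  sizePercent: 90 # about 90 GiB thin pool on a 100 GiB volume group
  overprovisionRatio: 10 # up to about 10 x 90 GiB = 900 GiB of requested capacity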
Additional resources Overview of chunk size Limitations to configure the size of the devices used in LVM Storage Reusing a volume group from the LVM Storage installation About adding devices to a volume group Adding worker nodes to single-node OpenShift clusters 4.12.4.2.1. Limitations to configure the size of the devices used in LVM Storage The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows: The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor. The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE). You can define the size of PE and LE during the physical and logical device creation. The default PE and LE size is 4 MB. If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space. The following tables describe the chunk size and volume size limits for static and host configurations: Table 4.6. Tested configuration Parameter Value Chunk size 128 KiB Maximum volume size 32 TiB Table 4.7. Theoretical size limits for static configuration Parameter Minimum value Maximum value Chunk size 64 KiB 1 GiB Volume size Minimum size of the underlying Red Hat Enterprise Linux CoreOS (RHCOS) system. Maximum size of the underlying RHCOS system. Table 4.8. Theoretical size limits for a host configuration Parameter Value Chunk size This value is based on the configuration in the lvm.conf file. By default, this value is set to 128 KiB. Maximum volume size Equal to the maximum volume size of the underlying RHCOS system. Minimum volume size Equal to the minimum volume size of the underlying RHCOS system. 4.12.4.2.2. About adding devices to a volume group The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in both the deviceSelector.paths field and the deviceSelector.optionalPaths field, LVM Storage adds the supported unused devices to the volume group (VG). Warning It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX , as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/ , to ensure consistent disk identification. With this change, you might need to adjust existing automation workflows in the cases where monitoring collects information about the install device for each node. For more information, see the RHEL documentation . You can add the path to the Redundant Array of Independent Disks (RAID) arrays in the deviceSelector field to integrate the RAID arrays with LVM Storage. You can create the RAID array by using the mdadm utility. LVM Storage does not support creating a software RAID. Note You can create a RAID array only during an OpenShift Container Platform installation. For information on creating a RAID array, see the following sections: "Configuring a RAID-enabled data volume" in "Additional resources". Creating a software RAID on an installed system Replacing a failed disk in RAID Repairing RAID disks You can also add encrypted devices to the volume group. 
You can enable disk encryption on the cluster nodes during an OpenShift Container Platform installation. After encrypting a device, you can specify the path to the LUKS encrypted device in the deviceSelector field. For information on disk encryption, see "About disk encryption" and "Configuring disk encryption and mirroring". The devices that you want to add to the VG must be supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". LVM Storage adds the devices to the VG only if the following conditions are met: The device path exists. The device is supported by LVM Storage. Important After a device is added to the VG, you cannot remove the device. LVM Storage supports dynamic device discovery. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices to the VG when the devices are available. Warning It is not recommended to add the devices to the VG through dynamic device discovery due to the following reasons: When you add a new device that you do not intend to add to the VG, LVM Storage automatically adds this device to the VG through dynamic device discovery. If LVM Storage adds a device to the VG through dynamic device discovery, LVM Storage does not restrict you from removing the device from the node. Removing or updating the devices that are already added to the VG can disrupt the VG. This can also lead to data loss and necessitate manual node remediation. Additional resources Configuring a RAID-enabled data volume About disk encryption Configuring disk encryption and mirroring Devices not supported by LVM Storage 4.12.4.2.3. Devices not supported by LVM Storage When you are adding the device paths in the deviceSelector field of the LVMCluster custom resource (CR), ensure that the devices are supported by LVM Storage. If you add paths to the unsupported devices, LVM Storage excludes the devices to avoid complexity in managing logical volumes. If you do not specify any device path in the deviceSelector field, LVM Storage adds only the unused devices that it supports. Note To get information about the devices, run the following command: USD lsblk --paths --json -o \ NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE LVM Storage does not support the following devices: Read-only devices Devices with the ro parameter set to true . Suspended devices Devices with the state parameter set to suspended . ROM devices Devices with the type parameter set to rom . LVM partition devices Devices with the type parameter set to lvm . Devices with invalid partition labels Devices with the partlabel parameter set to bios , boot , or reserved . Devices with an invalid filesystem Devices with the fstype parameter set to any value other than null or LVM2_member . Important LVM Storage supports devices with fstype parameter set to LVM2_member only if the devices do not contain children devices. Devices that are part of another volume group To get the information about the volume groups of the device, run the following command: USD pvs <device-name> 1 1 Replace <device-name> with the device name. Devices with bind mounts To get the mount points of a device, run the following command: USD cat /proc/1/mountinfo | grep <device-name> 1 1 Replace <device-name> with the device name. Devices that contain children devices Note It is recommended to wipe the device before using it in LVM Storage to prevent unexpected behavior. 4.12.4.3. 
Ways to create an LVMCluster custom resource You can create an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM. Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs: A storageClass and volumeSnapshotClass for each device class. Note LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name> , where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1 . LVMVolumeGroup : This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes. LVMVolumeGroupNodeStatus : This CR tracks the status of the volume groups on a node. 4.12.4.3.1. Reusing a volume group from the LVM Storage installation You can reuse an existing volume group (VG) from the LVM Storage installation instead of creating a new VG. You can reuse only the VG, but not the logical volumes associated with the VG. Important You can perform this procedure only while creating an LVMCluster custom resource (CR). Prerequisites The VG that you want to reuse must not be corrupted. The VG that you want to reuse must have the lvms tag. For more information on adding tags to LVM objects, see Grouping LVM objects with tags . Procedure Open the LVMCluster CR YAML file. Configure the LVMCluster CR parameters as described in the following example: Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: # ... storage: deviceClasses: - name: vg1 1 fstype: ext4 2 default: true deviceSelector: 3 # ... forceWipeDevicesAndDestroyAllData: false 4 thinPoolConfig: 5 # ... nodeSelector: 6 # ... 1 Set this field to the name of a VG from the LVM Storage installation. 2 Set this field to ext4 or xfs . By default, this field is set to xfs . 3 You can add new devices to the VG that you want to reuse by specifying the new device paths in the deviceSelector field. If you do not want to add new devices to the VG, ensure that the deviceSelector configuration in the current LVM Storage installation is the same as that of the previous LVM Storage installation. 4 If this field is set to true , LVM Storage wipes all the data on the devices that are added to the VG. 5 To retain the thinPoolConfig configuration of the VG that you want to reuse, ensure that the thinPoolConfig configuration in the current LVM Storage installation is the same as that of the previous LVM Storage installation. Otherwise, you can configure the thinPoolConfig field as required. 6 Configure the requirements to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. Save the LVMCluster CR YAML file. Note To view the devices that are part of a volume group, run the following command: USD pvs -S vgname=<vg_name> 1 1 Replace <vg_name> with the name of the volume group. 4.12.4.3.2. Creating an LVMCluster CR by using the CLI You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI ( oc ).
Important You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to OpenShift Container Platform as a user with cluster-admin privileges. You have installed LVM Storage. You have installed a worker node in the cluster. You read the "About the LVMCluster custom resource" section. Procedure Create an LVMCluster custom resource (CR) YAML file: Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: # ... storage: deviceClasses: 1 # ... nodeSelector: 2 # ... deviceSelector: 3 # ... thinPoolConfig: 4 # ... 1 Contains the configuration to assign the local storage devices to the LVM volume groups. 2 Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. 3 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group. 4 Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. Create the LVMCluster CR by running the following command: USD oc create -f <file_name> Example output lvmcluster/lvmcluster created Verification Check that the LVMCluster CR is in the Ready state: USD oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace> Example output {"deviceClassStatuses": 1 [ { "name": "vg1", "nodeStatus": [ 2 { "devices": [ 3 "/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1" ], "node": "kube-node", 4 "status": "Ready" 5 } ] } ] "state":"Ready"} 6 1 The status of the device class. 2 The status of the LVM volume group on each node. 3 The list of devices used to create the LVM volume group. 4 The node on which the device class is created. 5 The status of the LVM volume group on the node. 6 The status of the LVMCluster CR. Note If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field. Example of status field with the reason for failure: status: deviceClassStatuses: - name: vg1 nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume group status: Failed state: Failed Optional: To view the storage classes created by LVM Storage for each device class, run the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command: USD oc get volumesnapshotclass Example output NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h Additional resources About the LVMCluster custom resource 4.12.4.3.3. Creating an LVMCluster CR by using the web console You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console. Important You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster. Prerequisites You have access to the OpenShift Container Platform cluster with cluster-admin privileges. You have installed LVM Storage. You have installed a worker node in the cluster. You read the "About the LVMCluster custom resource" section.
Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . In the openshift-storage namespace, click LVM Storage . Click Create LVMCluster and select either Form view or YAML view . Configure the required LVMCluster CR parameters. Click Create . Optional: If you want to edit the LVMCluster CR, perform the following actions: Click the LVMCluster tab. From the Actions menu, select Edit LVMCluster . Click YAML and edit the required LVMCluster CR parameters. Click Save . Verification On the LVMCluster page, check that the LVMCluster CR is in the Ready state. Optional: To view the available storage classes created by LVM Storage for each device class, click Storage StorageClasses . Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage VolumeSnapshotClasses . Additional resources About the LVMCluster custom resource 4.12.4.3.4. Creating an LVMCluster CR by using RHACM After you have installed LVM Storage by using RHACM, you must create an LVMCluster custom resource (CR). Prerequisites You have installed LVM Storage by using RHACM. You have access to the RHACM cluster using an account with cluster-admin permissions. You read the "About the LVMCluster custom resource" section. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR: Example ConfigurationPolicy CR YAML file to create an LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 # ... deviceSelector: 2 # ... thinPoolConfig: 3 # ... nodeSelector: 4 # ... remediationAction: enforce severity: low 1 Contains the configuration to assign the local storage devices to the LVM volume groups. 2 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group. 3 Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. 4 Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered. Create the ConfigurationPolicy CR by running the following command: USD oc create -f <file_name> -n <cluster_namespace> 1 1 Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource 4.12.4.4. Ways to delete an LVMCluster custom resource You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM. Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs: storageClass volumeSnapshotClass LVMVolumeGroup LVMVolumeGroupNodeStatus 4.12.4.4.1. Deleting an LVMCluster CR by using the CLI You can delete the LVMCluster custom resource (CR) using the OpenShift CLI ( oc ).
Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the OpenShift CLI ( oc ). Delete the LVMCluster CR by running the following command: USD oc delete lvmcluster <lvmclustername> -n openshift-storage Verification To verify that the LVMCluster CR has been deleted, run the following command: USD oc get lvmcluster -n <namespace> Example output No resources found in openshift-storage namespace. 4.12.4.4.2. Deleting an LVMCluster CR by using the web console You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators to view all the installed Operators. Click LVM Storage in the openshift-storage namespace. Click the LVMCluster tab. From the Actions menu, select Delete LVMCluster . Click Delete . Verification On the LVMCluster page, check that the LVMCluster CR has been deleted. 4.12.4.4.3. Deleting an LVMCluster CR by using RHACM If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster CR by using RHACM. Prerequisites You have access to the RHACM cluster as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Delete the ConfigurationPolicy CR YAML file that was created for the LVMCluster CR: USD oc delete -f <file_name> -n <cluster_namespace> 1 1 Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
Create a Policy CR YAML file to delete the LVMCluster CR: Example Policy CR to delete the LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue 1 The spec.remediationAction in policy-template is overridden by the preceding parameter value for spec.remediationAction . 2 This namespace field must have the openshift-storage value. 3 Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria. Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Create a Policy CR YAML file to check if the LVMCluster CR has been deleted: Example Policy CR to check if the LVMCluster CR has been deleted apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue 1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction . 
2 The namespace field must have the openshift-storage value. Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Verification Check the status of the Policy CRs by running the following command: USD oc get policy -n <namespace> Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m Important The Policy CRs must be in Compliant state. 4.12.4.5. Provisioning storage After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs). The following are the minimum storage sizes that you can request for each file system type: block : 8 MiB xfs : 300 MiB ext4 : 32 MiB To create a PVC, you must create a PersistentVolumeClaim object. Prerequisites You have created an LVMCluster CR. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 limits: storage: 20Gi 4 storageClassName: lvms-vg1 5 1 Specify a name for the PVC. 2 To create a block PVC, set this field to Block . To create a file PVC, set this field to Filesystem . 3 Specify the storage size. If the value is less than the minimum storage size, the requested storage size is rounded to the minimum storage size. The total storage size you can provision is limited by the size of the Logical Volume Manager (LVM) thin pool and the over-provisioning factor. 4 Optional: Specify the storage limit. Set this field to a value that is greater than or equal to the minimum storage size. Otherwise, PVC creation fails with an error. 5 The value of the storageClassName field must be in the format lvms-<device_class_name> where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1 , you must set the storageClassName field to lvms-vg1 . Note The volumeBindingMode field of the storage class is set to WaitForFirstConsumer . Create the PVC by running the following command: # oc create -f <file_name> -n <application_namespace> Note The created PVCs remain in Pending state until you deploy the pods that use them. Verification To verify that the PVC is created, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.6. Ways to scale up the storage of clusters OpenShift Container Platform supports additional worker nodes for clusters on bare metal user-provisioned infrastructure. You can scale up the storage of clusters either by adding new worker nodes with available storage or by adding new devices to the existing worker nodes. Logical Volume Manager (LVM) Storage detects and uses additional worker nodes when the nodes become active. To add a new device to the existing worker nodes on a cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR). Important You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. 
If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available. Note LVM Storage adds only the supported devices. For information about unsupported devices, see "Devices not supported by LVM Storage". Additional resources Adding worker nodes to single-node OpenShift clusters Devices not supported by LVM Storage 4.12.4.6.1. Scaling up the storage of clusters by using the CLI You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift CLI ( oc ). Prerequisites You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage. You have installed the OpenShift CLI ( oc ). You have created an LVMCluster custom resource (CR). Procedure Edit the LVMCluster CR by running the following command: USD oc edit <lvmcluster_file_name> -n <namespace> Add the path to the new device in the deviceSelector field. Example LVMCluster CR apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met: The device path exists. The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". 2 Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Save the LVMCluster CR. Additional resources About the LVMCluster custom resource Devices not supported by LVM Storage About adding devices to a volume group 4.12.4.6.2. Scaling up the storage of clusters by using the web console You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift Container Platform web console. Prerequisites You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage. You have created an LVMCluster custom resource (CR). Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click LVM Storage in the openshift-storage namespace. Click the LVMCluster tab to view the LVMCluster CR created on the cluster. From the Actions menu, select Edit LVMCluster . Click the YAML tab. 
Edit the LVMCluster CR to add the new device path in the deviceSelector field: Example LVMCluster CR apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met: The device path exists. The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage". 2 Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Click Save . Additional resources About the LVMCluster custom resource Devices not supported by LVM Storage About adding devices to a volume group 4.12.4.6.3. Scaling up the storage of clusters by using RHACM You can scale up the storage capacity of worker nodes on the clusters by using RHACM. Prerequisites You have access to the RHACM cluster using an account with cluster-admin privileges. You have created an LVMCluster custom resource (CR) by using RHACM. You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Edit the LVMCluster CR that you created using RHACM by running the following command: USD oc edit -f <file_name> -n <namespace> 1 1 Replace <file_name> with the name of the LVMCluster CR. In the LVMCluster CR, add the path to the new device in the deviceSelector field. Example LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met: The device path exists. The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
2 Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Save the LVMCluster CR. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource Devices not supported by LVM Storage About adding devices to a volume group 4.12.4.7. Expanding a persistent volume claim After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs). To expand a PVC, you must update the storage field in the PVC. Prerequisites Dynamic provisioning is used. The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true . Procedure Log in to the OpenShift CLI ( oc ). Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command: USD oc patch pvc <pvc_name> -n <application_namespace> -p \ 1 '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}' --type=merge 2 1 Replace <pvc_name> with the name of the PVC that you want to expand. 2 Replace <desired_size> with the new size to expand the PVC. Verification To verify that resizing is completed, run the following command: USD oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage} LVM Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion. Additional resources Ways to scale up the storage of clusters Enabling volume expansion support 4.12.4.8. Deleting a persistent volume claim You can delete a persistent volume claim (PVC) by using the OpenShift CLI ( oc ). Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Delete the PVC by running the following command: USD oc delete pvc <pvc_name> -n <namespace> Verification To verify that the PVC is deleted, run the following command: USD oc get pvc -n <namespace> The deleted PVC must not be present in the output of this command. 4.12.4.9. About volume snapshots You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage. You can perform the following actions using the volume snapshots: Back up your application data. Important Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information about OADP, see "OADP features". Revert to a state at which the volume snapshot was taken. Note You can also create volume snapshots of the volume clones. 4.12.4.9.1. Limitations for creating volume snapshots in multi-node topology LVM Storage has the following limitations for creating volume snapshots in multi-node topology: Creating volume snapshots is based on the LVM thin pool capabilities. After creating a volume snapshot, the node must have additional storage space for further updating the original data source.
You can create volume snapshots only on the node where you have deployed the original data source. Pods relying on the PVC that uses the snapshot data can be scheduled only on the node where you have deployed the original data source. Additional resources OADP features 4.12.4.9.2. Creating volume snapshots You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshot object. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot. You stopped all the I/O to the PVC. Procedure Log in to the OpenShift CLI ( oc ). Create a VolumeSnapshot object: Example VolumeSnapshot object apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3 1 Specify a name for the volume snapshot. 2 Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC. 3 Set this field to the name of a volume snapshot class. Note To get the list of available volume snapshot classes, run the following command: USD oc get volumesnapshotclass Create the volume snapshot in the namespace where you created the source PVC by running the following command: USD oc create -f <file_name> -n <namespace> LVM Storage creates a read-only copy of the PVC as a volume snapshot. Verification To verify that the volume snapshot is created, run the following command: USD oc get volumesnapshot -n <namespace> Example output NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s The value of the READYTOUSE field for the volume snapshot that you created must be true . 4.12.4.9.3. Restoring volume snapshots To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot. The restored PVC is independent of the volume snapshot and the source PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have created a volume snapshot. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot: Example PersistentVolumeClaim object to restore a volume snapshot kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io 1 Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot. 2 Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore. 3 Set this field to the name of the volume snapshot that you want to restore.
Create the PVC in the namespace where you created the volume snapshot by running the following command: USD oc create -f <file_name> -n <namespace> Verification To verify that the volume snapshot is restored, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.9.4. Deleting volume snapshots You can delete the volume snapshots of the persistent volume claims (PVCs). Important When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have ensured that the volume snapshot that you want to delete is not in use. Procedure Log in to the OpenShift CLI ( oc ). Delete the volume snapshot by running the following command: USD oc delete volumesnapshot <volume_snapshot_name> -n <namespace> Verification To verify that the volume snapshot is deleted, run the following command: USD oc get volumesnapshot -n <namespace> The deleted volume snapshot must not be present in the output of this command. 4.12.4.10. About volume clones A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data. 4.12.4.10.1. Limitations for creating volume clones in multi-node topology LVM Storage has the following limitations for creating volume clones in multi-node topology: Creating volume clones is based on the LVM thin pool capabilities. The node must have additional storage after creating a volume clone for further updating the original data source. You can create volume clones only on the node where you have deployed the original data source. Pods relying on the PVC that uses the clone data can be scheduled only on the node where you have deployed the original data source. 4.12.4.10.2. Creating volume clones To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC. Important The cloned PVC has write access. Prerequisites You ensured that the source PVC is in Bound state. This is required for a consistent clone. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object: Example PersistentVolumeClaim object to create a volume clone kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4 1 Set this field to the value of the storageClassName field in the source PVC. 2 Set this field to the value of the volumeMode field in the source PVC. 3 Specify the name of the source PVC. 4 Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC. Create the PVC in the namespace where you created the source PVC by running the following command: USD oc create -f <file_name> -n <namespace> Verification To verify that the volume clone is created, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.10.3. Deleting volume clones You can delete volume clones.
Important When you delete a persistent volume claim (PVC), LVM Storage deletes only the source persistent volume claim (PVC) but not the clones of the PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Delete the cloned PVC by running the following command: # oc delete pvc <clone_pvc_name> -n <namespace> Verification To verify that the volume clone is deleted, run the following command: USD oc get pvc -n <namespace> The deleted volume clone must not be present in the output of this command. 4.12.4.11. Updating LVM Storage You can update LVM Storage to ensure compatibility with the OpenShift Container Platform version. Prerequisites You have updated your OpenShift Container Platform cluster. You have installed a version of LVM Storage. You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command: USD oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}' 1 1 Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.17 . View the update events to check that the installation is complete by running the following command: USD oc get events -n openshift-storage Example output ... 8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.17 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.17 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.17 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.17 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.17 installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.17 install strategy completed with no errors ... Verification Verify the LVM Storage version by running the following command: USD oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}' Example output lvms-operator.v4.17 4.12.4.12. Monitoring LVM Storage To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage: openshift.io/cluster-monitoring=true Important For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics . 4.12.4.12.1. Metrics You can monitor LVM Storage by viewing the metrics. The following table describes the topolvm metrics: Table 4.9. topolvm metrics Alert Description topolvm_thinpool_data_percent Indicates the percentage of data space used in the LVM thinpool. topolvm_thinpool_metadata_percent Indicates the percentage of metadata space used in the LVM thinpool. topolvm_thinpool_size_bytes Indicates the size of the LVM thin pool in bytes. topolvm_volumegroup_available_bytes Indicates the available space in the LVM volume group in bytes. topolvm_volumegroup_size_bytes Indicates the size of the LVM volume group in bytes. 
topolvm_thinpool_overprovisioned_available Indicates the available over-provisioned size of the LVM thin pool in bytes. Note Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool. 4.12.4.12.2. Alerts When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss. LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value: Table 4.10. LVM Storage alerts Alert Description VolumeGroupUsageAtThresholdNearFull This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required. VolumeGroupUsageAtThresholdCritical This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required. ThinPoolDataUsageAtThresholdNearFull This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. ThinPoolDataUsageAtThresholdCritical This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. ThinPoolMetaDataUsageAtThresholdNearFull This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. ThinPoolMetaDataUsageAtThresholdCritical This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. 4.12.4.13. Uninstalling LVM Storage by using the CLI You can uninstall LVM Storage by using the OpenShift CLI ( oc ). Prerequisites You have logged in to oc as a user with cluster-admin permissions. You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. You deleted the LVMCluster custom resource (CR). Procedure Get the currentCSV value for the LVM Storage Operator by running the following command: USD oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV Example output currentCSV: lvms-operator.v4.15.3 Delete the subscription by running the following command: USD oc delete subscription.operators.coreos.com lvms-operator -n <namespace> Example output subscription.operators.coreos.com "lvms-operator" deleted Delete the CSV for the LVM Storage Operator in the target namespace by running the following command: USD oc delete clusterserviceversion <currentCSV> -n <namespace> 1 1 Replace <currentCSV> with the currentCSV value for the LVM Storage Operator. Example output clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted Verification To verify that the LVM Storage Operator is uninstalled, run the following command: USD oc get csv -n <namespace> If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command. 4.12.4.14. Uninstalling LVM Storage by using the web console You can uninstall LVM Storage using the OpenShift Container Platform web console. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage.
You have also deleted the applications that are using these resources. You have deleted the LVMCluster custom resource (CR). Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click LVM Storage in the openshift-storage namespace. Click the Details tab. From the Actions menu, select Uninstall Operator . Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage. Click Uninstall . 4.12.4.15. Uninstalling LVM Storage installed using RHACM To uninstall LVM Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage. Prerequisites You have access to the RHACM cluster as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. You have deleted the LVMCluster CR that you created using RHACM. Procedure Log in to the OpenShift CLI ( oc ). Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by using the following command: USD oc delete -f <policy> -n <namespace> 1 1 Replace <policy> with the name of the Policy CR YAML file. Create a Policy CR YAML file with the configuration to uninstall LVM Storage: Example Policy CR to uninstall LVM Storage apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition 
metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high Create the Policy CR by running the following command: USD oc create -f <policy> -n <namespace> 4.12.4.16. Downloading log files and diagnostic information using must-gather When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or the Red Hat Support can review the problem and determine a solution. Procedure Run the must-gather command from the client connected to the LVM Storage cluster: USD oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.17 --dest-dir=<directory_name> Additional resources About the must-gather tool 4.12.4.17. Troubleshooting persistent storage While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting. 4.12.4.17.1. Investigating a PVC stuck in the Pending state A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons: Insufficient computing resources. Network problems. Mismatched storage class or node selector. No available persistent volumes (PVs). The node with the PV is in the Not Ready state. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Retrieve the list of PVCs by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s Inspect the events associated with a PVC stuck in the Pending state by running the following command: USD oc describe pvc <pvc_name> 1 1 Replace <pvc_name> with the name of the PVC. For example, lvms-test . Example output Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io "lvms-vg1" not found 4.12.4.17.2. Recovering from a missing storage class If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Verify that the LVMCluster CR is present by running the following command: USD oc get lvmcluster -n openshift-storage Example output NAME AGE my-lvmcluster 65m If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource".
In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command: USD oc get pods -n openshift-storage Example output NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m The output of this command must contain a running instance of the following pods: lvms-operator vg-manager If the vg-manager pod is stuck while loading a configuration file, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command: USD oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage Additional resources About the LVMCluster custom resource Ways to create an LVMCluster custom resource 4.12.4.17.3. Recovering from node failure A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster. To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Examine the restart count of the topolvm-node pod instances by running the following command: USD oc get pods -n openshift-storage Example output NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m steps If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up". Additional resources Performing a forced clean-up 4.12.4.17.4. Recovering from disk failure If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk. Disk and volume provisioning issues result with a generic error message such as Failed to provision volume with storage class <storage_class_name> . The generic error message is followed by a specific volume failure error message. The following table describes the volume failure error messages: Table 4.11. Volume failure error messages Error message Description Failed to check volume existence Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures. Failed to bind volume Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC. FailedMount or FailedAttachVolume This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC. FailedUnMount This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC. 
Volume is already exclusively attached to one node and cannot be attached to another This error can appear with storage solutions that do not support ReadWriteMany access modes. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Inspect the events associated with a PVC by running the following command: USD oc describe pvc <pvc_name> 1 1 Replace <pvc_name> with the name of the PVC. Establish a direct connection to the host where the problem is occurring. Resolve the disk issue. steps If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up". Additional resources Performing a forced clean-up 4.12.4.17.5. Performing a forced clean-up If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage. You have stopped the pods that are using the PVCs that were created by using LVM Storage. Procedure Switch to the openshift-storage namespace by running the following command: USD oc project openshift-storage Check if the LogicalVolume custom resources (CRs) are present by running the following command: USD oc get logicalvolume If the LogicalVolume CRs are present, delete them by running the following command: USD oc delete logicalvolume <name> 1 1 Replace <name> with the name of the LogicalVolume CR. After deleting the LogicalVolume CRs, remove their finalizers by running the following command: USD oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LogicalVolume CR. Check if the LVMVolumeGroup CRs are present by running the following command: USD oc get lvmvolumegroup If the LVMVolumeGroup CRs are present, delete them by running the following command: USD oc delete lvmvolumegroup <name> 1 1 Replace <name> with the name of the LVMVolumeGroup CR. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command: USD oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LVMVolumeGroup CR. Delete any LVMVolumeGroupNodeStatus CRs by running the following command: USD oc delete lvmvolumegroupnodestatus --all Delete the LVMCluster CR by running the following command: USD oc delete lvmcluster --all After deleting the LVMCluster CR, remove its finalizer by running the following command: USD oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LVMCluster CR.
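Optional: After the forced clean-up, and before you recreate the LVMCluster CR, you can confirm that no LVM Storage custom resources remain. This is a minimal additional check, not part of the documented procedure, and it assumes that the openshift-storage project is still selected:
USD oc get lvmcluster,lvmvolumegroup,lvmvolumegroupnodestatus
USD oc get logicalvolume
If the clean-up succeeded, both commands report that no resources were found.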
[ "cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF", "cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF", "cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2", "oc create -f <machine-set-name>.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3", "apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>", "oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false", "apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3", "apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5", "oc create -f cinder-persistentvolume.yaml", "oc create serviceaccount 
<service_account>", "oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>", "apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4", "{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }", "{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar", "\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false", "apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7", "oc get pv", "NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m", "ls -lZ /opt/nfs -d", "drwxrws---. 
nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs", "id nfsnobody", "uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)", "spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2", "spec: containers: 1 - name: securityContext: runAsUser: 65534 2", "setsebool -P virt_use_nfs 1", "/<example_fs> *(rw,root_squash)", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT", "iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"", "echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3", "oc create -f pvc.yaml", "vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk", "shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk", "apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5", "oc create -f pv1.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4", "oc create -f pvc1.yaml", "oc adm new-project openshift-local-storage", "oc annotate namespace openshift-local-storage openshift.io/node-selector=''", "oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc apply -f openshift-local-storage.yaml", "oc -n openshift-local-storage get pods", "NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m", "oc get csvs -n openshift-local-storage", "NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Filesystem 5 fsType: xfs 6 devicePaths: 7 - /path/to/device 8", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 
storageClassDevices: - storageClassName: \"local-sc\" 3 forceWipeDevicesAndDestroyAllData: false 4 volumeMode: Block 5 devicePaths: 6 - /path/to/device 7", "oc create -f <local-volume>.yaml", "oc get all -n openshift-local-storage", "NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node", "oc create -f <example-pv>.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4", "oc create -f <local-pvc>.yaml", "apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3", "oc create -f <local-pod>.yaml", "apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM", "oc apply -f local-volume-set.yaml", "oc get pv", "NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi 
RWO Delete Available local-sc 48m", "apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"local-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg", "spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists", "oc edit localvolume <name> -n openshift-local-storage", "oc delete pv <pv-name>", "oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1", "oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces", "oc delete pv <pv-name>", "oc delete project openshift-local-storage", "apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''", "apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4", "oc create -f pv.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual", "oc create -f pvc.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage", "oc create -f <file_name>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file_name>", "oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase 4.13.0-202301261535 Succeeded", "kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.17 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.17 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}", "oc create ns <namespace>", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: 
policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low", "oc create -f <file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: \"true\" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 forceWipeDevicesAndDestroyAllData: true thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10 chunkSize: 128Ki 5 chunkSizeCalculationPolicy: Static 6", "lsblk --paths --json -o NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE", "pvs <device-name> 1", "cat /proc/1/mountinfo | grep <device-name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: - name: vg1 1 fstype: ext4 2 default: true deviceSelector: 3 forceWipeDevicesAndDestroyAllData: false 4 thinPoolConfig: 5 nodeSelector: 6", "pvs -S vgname=<vg_name> 1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: 1 nodeSelector: 2 deviceSelector: 3 thinPoolConfig: 4", "oc create -f <file_name>", "lvmcluster/lvmcluster created", "oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>", "{\"deviceClassStatuses\": 1 [ { \"name\": \"vg1\", \"nodeStatus\": [ 2 { \"devices\": [ 3 \"/dev/nvme0n1\", \"/dev/nvme1n1\", \"/dev/nvme2n1\" ], \"node\": \"kube-node\", 4 \"status\": \"Ready\" 5 } ] } ] \"state\":\"Ready\"} 6", "status: deviceClassStatuses: - name: vg1 nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume 
group status: Failed state: Failed", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m", "oc get volumesnapshotclass", "NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 deviceSelector: 2 thinPoolConfig: 3 nodeSelector: 4 remediationAction: enforce severity: low", "oc create -f <file_name> -n <cluster_namespace> 1", "oc delete lvmcluster <lvmclustername> -n openshift-storage", "oc get lvmcluster -n <namespace>", "No resources found in openshift-storage namespace.", "oc delete -f <file_name> -n <cluster_namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule 
metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue", "oc create -f <file_name> -n <namespace>", "oc get policy -n <namespace>", "NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 limits: storage: 20Gi 4 storageClassName: lvms-vg1 5", "oc create -f <file_name> -n <application_namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc edit <lvmcluster_file_name> -n <namespace>", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1", "oc edit -f <file_name> -ns <namespace> 1", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1", "oc patch <pvc_name> -n <application_namespace> -p \\ 1 '{ \"spec\": { \"resources\": { \"requests\": { \"storage\": \"<desired_size>\" }}}} --type=merge' 2", "oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}", "oc delete pvc <pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3", "oc get volumesnapshotclass", "oc create -f <file_name> -n <namespace>", "oc get volumesnapshot -n <namespace>", "NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block Resources: Requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc delete volumesnapshot <volume_snapshot_name> -n <namespace>", "oc get volumesnapshot -n 
<namespace>", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4", "oc create -f <file_name> -n <namespace>", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s", "oc delete pvc <clone_pvc_name> -n <namespace>", "oc get pvc -n <namespace>", "oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{\"spec\":{\"channel\":\"<update_channel>\"}}' 1", "oc get events -n openshift-storage", "8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.17 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.17 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.17 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.17 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.17 installing: waiting for deployment lvms-operator to become ready: deployment \"lvms-operator\" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.17 install strategy completed with no errors", "oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'", "lvms-operator.v4.17", "openshift.io/cluster-monitoring=true", "oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV", "currentCSV: lvms-operator.v4.15.3", "oc delete subscription.operators.coreos.com lvms-operator -n <namespace>", "subscription.operators.coreos.com \"lvms-operator\" deleted", "oc delete clusterserviceversion <currentCSV> -n <namespace> 1", "clusterserviceversion.operators.coreos.com \"lvms-operator.v4.15.3\" deleted", "oc get csv -n <namespace>", "oc delete -f <policy> -n <namespace> 1", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: 
openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high", "oc create -f <policy> -ns <namespace>", "oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.17 --dest-dir=<directory_name>", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s", "oc describe pvc <pvc_name> 1", "Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io \"lvms-vg1\" not found", "oc get lvmcluster -n openshift-storage", "NAME AGE my-lvmcluster 65m", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m", "oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage", "oc get pods -n openshift-storage", "NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m", "oc describe pvc <pvc_name> 1", "oc project openshift-storage", "oc get logicalvolume", "oc delete logicalvolume <name> 1", "oc patch logicalvolume <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc get lvmvolumegroup", "oc delete lvmvolumegroup <name> 1", "oc patch lvmvolumegroup <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1", "oc delete lvmvolumegroupnodestatus --all", "oc delete lvmcluster --all", "oc patch lvmcluster <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage/configuring-persistent-storage
5.5. Disk Space and Memory Requirements
5.5. Disk Space and Memory Requirements Red Hat Enterprise Linux, like most modern operating systems, uses disk partitions. When you install Red Hat Enterprise Linux, you might have to work with disk partitions. For more information, see Appendix A, An Introduction to Disk Partitions. If you have other operating systems installed on your system, the disk space they use must be separate from the disk space used by Red Hat Enterprise Linux. Note For AMD64/Intel 64 and ARM systems, at least two partitions (/ and swap) must be dedicated to Red Hat Enterprise Linux. To install Red Hat Enterprise Linux, you must have a minimum of 10 GiB of space in either unpartitioned disk space or in partitions which can be deleted. For more information on partition and disk space recommendations, see the recommended partitioning sizes discussed in Section 8.14.4.4, "Recommended Partitioning Scheme". Red Hat Enterprise Linux requires at least the following amounts of RAM, depending on the installation type: 768 MiB for a local media installation (USB, DVD), 768 MiB for an NFS network installation, and 1.5 GiB for an HTTP, HTTPS, or FTP network installation. Note It may be possible to perform the installation with less memory than listed in this section. However, the exact requirements depend heavily on your environment and exact installation path, and they also change with each new release. Determining the absolute minimum required RAM for your specific use case therefore requires you to test various configurations, and periodically re-test with each new release. Installing Red Hat Enterprise Linux using a Kickstart file has the same minimum RAM requirements as a manual installation. However, if you use a Kickstart file that runs commands which require additional memory or write data to the RAM disk, additional RAM might be necessary. For more information about the minimum requirements and technology limits of Red Hat Enterprise Linux 7, see the Red Hat Enterprise Linux technology capabilities and limits article on the Red Hat Customer Portal.
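If you want to confirm that a machine meets these minimums before you begin, standard utilities run from an existing Linux environment (for example, a live or rescue system) are sufficient. The following is only a sketch:
# Show total installed memory; compare against 768 MiB or 1.5 GiB,
# depending on the installation type.
free -h
# Show disks and partitions; look for at least 10 GiB of unpartitioned
# or reclaimable space.
lsblk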
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-planning-disk-space-memory-x86
Chapter 4. Configuring power monitoring
Chapter 4. Configuring power monitoring Important Power monitoring is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The Kepler resource is a Kubernetes custom resource definition (CRD) that enables you to configure the deployment and monitor the status of the Kepler resource. 4.1. The Kepler configuration You can configure Kepler with the spec field of the Kepler resource. Important Ensure that the name of your Kepler instance is kepler . All other instances are rejected by the Power monitoring Operator Webhook. The following is the list of configuration options: Table 4.1. Kepler configuration options Name Spec Description Default port exporter.deployment The port on the node where the Prometheus metrics are exposed. 9103 nodeSelector exporter.deployment The nodes on which Kepler exporter pods are scheduled. kubernetes.io/os: linux tolerations exporter.deployment The tolerations for Kepler exporter that allow the pods to be scheduled on nodes with specific characteristics. - operator: "Exists" Example Kepler resource with default configuration apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: port: 9103 1 nodeSelector: kubernetes.io/os: linux 2 Tolerations: 3 - key: "" operator: "Exists" value: "" effect: "" 1 The Prometheus metrics are exposed on port 9103. 2 Kepler pods are scheduled on Linux nodes. 3 The default tolerations allow Kepler to be scheduled on any node. 4.2. Monitoring the Kepler status You can monitor the state of the Kepler exporter with the status field of the Kepler resource. The status.exporter field includes information, such as the following: The number of nodes currently running the Kepler pods The number of nodes that should be running the Kepler pods Conditions representing the health of the Kepler resource This provides you with valuable insights into the changes made through the spec field. Example state of the Kepler resource apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler status: exporter: conditions: 1 - lastTransitionTime: '2024-01-11T11:07:39Z' message: Reconcile succeeded observedGeneration: 1 reason: ReconcileSuccess status: 'True' type: Reconciled - lastTransitionTime: '2024-01-11T11:07:39Z' message: >- Kepler daemonset "kepler-operator/kepler" is deployed to all nodes and available; ready 2/2 observedGeneration: 1 reason: DaemonSetReady status: 'True' type: Available currentNumberScheduled: 2 2 desiredNumberScheduled: 2 3 1 The health of the Kepler resource. In this example, Kepler is successfully reconciled and ready. 2 The number of nodes currently running the Kepler pods is 2. 3 The wanted number of nodes to run the Kepler pods is 2. 4.3. Configuring Kepler to use Redfish You can configure Kepler to use Redfish as the source for running or hosting containers. Kepler can then monitor the power usage of these containers. Prerequisites You have access to the OpenShift Container Platform web console. You are logged in as a user with the cluster-admin role. 
You have installed the Power monitoring Operator. Procedure In the Administrator perspective of the web console, click Operators Installed Operators . Click Power monitoring for Red Hat OpenShift from the Installed Operators list and click the Kepler tab. Click Create Kepler . If you already have a Kepler instance created, click Edit Kepler . Configure .spec.exporter.redfish of the Kepler instance by specifying the mandatory secretRef field. You can also configure the optional probeInterval and skipSSLVerify fields to meet your needs. Example Kepler instance apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: # ... redfish: secretRef: <secret_name> required 1 probeInterval: 60s 2 skipSSLVerify: false 3 # ... 1 Required: Specifies the name of the secret that contains the credentials for accessing the Redfish server. 2 Optional: Controls the frequency at which the power information is queried from Redfish. The default value is 60s . 3 Optional: Controls if Kepler skips verifying the Redfish server certificate. The default value is false . Note After Kepler is deployed, the openshift-power-monitoring namespace is created. Create the redfish.csv file with the following data format: <your_kubelet_node_name>,<redfish_username>,<redfish_password>,https://<redfish_ip_or_hostname>/ Example redfish.csv file control-plane,exampleuser,examplepass,https://redfish.nodes.example.com worker-1,exampleuser,examplepass,https://redfish.nodes.example.com worker-2,exampleuser,examplepass,https://another.redfish.nodes.example.com Create the secret under the openshift-power-monitoring namespace. You must create the secret with the following conditions: The secret type is Opaque . The credentials are stored under the redfish.csv key in the data field of the secret. USD oc -n openshift-power-monitoring \ create secret generic redfish-secret \ --from-file=redfish.csv Example output apiVersion: v1 kind: Secret metadata: name: redfish-secret data: redfish.csv: YmFyCg== # ... Important The Kepler deployment will not continue until the Redfish secret is created. You can find this information in the status of a Kepler instance.
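To confirm from the CLI that the secret exists and that the Kepler instance has picked it up, you can inspect both objects. This is a sketch; the resource and secret names follow the examples above, and the status fields are the ones described earlier in this chapter:
# Verify that the Redfish secret is present in the power monitoring namespace.
oc -n openshift-power-monitoring get secret redfish-secret
# Review the Kepler instance status, including the exporter conditions.
oc describe kepler kepler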
[ "apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: port: 9103 1 nodeSelector: kubernetes.io/os: linux 2 Tolerations: 3 - key: \"\" operator: \"Exists\" value: \"\" effect: \"\"", "apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler status: exporter: conditions: 1 - lastTransitionTime: '2024-01-11T11:07:39Z' message: Reconcile succeeded observedGeneration: 1 reason: ReconcileSuccess status: 'True' type: Reconciled - lastTransitionTime: '2024-01-11T11:07:39Z' message: >- Kepler daemonset \"kepler-operator/kepler\" is deployed to all nodes and available; ready 2/2 observedGeneration: 1 reason: DaemonSetReady status: 'True' type: Available currentNumberScheduled: 2 2 desiredNumberScheduled: 2 3", "apiVersion: kepler.system.sustainable.computing.io/v1alpha1 kind: Kepler metadata: name: kepler spec: exporter: deployment: redfish: secretRef: <secret_name> required 1 probeInterval: 60s 2 skipSSLVerify: false 3", "<your_kubelet_node_name>,<redfish_username>,<redfish_password>,https://<redfish_ip_or_hostname>/", "control-plane,exampleuser,examplepass,https://redfish.nodes.example.com worker-1,exampleuser,examplepass,https://redfish.nodes.example.com worker-2,exampleuser,examplepass,https://another.redfish.nodes.example.com", "oc -n openshift-power-monitoring create secret generic redfish-secret --from-file=redfish.csv", "apiVersion: v1 kind: Secret metadata: name: redfish-secret data: redfish.csv: YmFyCg== #" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/power_monitoring/configuring-power-monitoring
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/red_hat_ha_solutions_for_sap_hana_s4hana_and_netweaver_based_sap_applications/conscious-language-message_ha-sol-hana-netweaver
Chapter 3. Tuned
Chapter 3. Tuned 3.1. Tuned Overview Tuned is a daemon that uses udev to monitor connected devices and statically and dynamically tunes system settings according to a selected profile. Tuned is distributed with a number of predefined profiles for common use cases like high throughput, low latency, or powersave. It is possible to modify the rules defined for each profile and customize how to tune a particular device. To revert all changes made to the system settings by a certain profile, you can either switch to another profile or deactivate the tuned service. Note Starting with Red Hat Enterprise Linux 7.2, you can run Tuned in no-daemon mode , which does not require any resident memory. In this mode, tuned applies the settings and exits. The no-daemon mode is disabled by default because a lot of tuned functionality is missing in this mode, including D-Bus support, hot-plug support, or rollback support for settings. To enable no-daemon mode , set the following in the /etc/tuned/tuned-main.conf file: daemon = 0 . Static tuning mainly consists of the application of predefined sysctl and sysfs settings and one-shot activation of several configuration tools like ethtool . Tuned also monitors the use of system components and tunes system settings dynamically based on that monitoring information. Dynamic tuning accounts for the way that various system components are used differently throughout the uptime for any given system. For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. Tuned monitors the activity of these components and reacts to the changes in their use. As a practical example, consider a typical office workstation. Most of the time, the Ethernet network interface is very inactive. Only a few emails go in and out every once in a while or some web pages might be loaded. For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. Tuned has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage. If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, tuned detects this and sets the interface speed to maximum to offer the best performance while the activity level is so high. This principle is used for other plug-ins for CPU and hard disks as well. Dynamic tuning is globally disabled in Red Hat Enterprise Linux and can be enabled by editing the /etc/tuned/tuned-main.conf file and changing the dynamic_tuning flag to 1 . 3.1.1. Plug-ins Tuned uses two types of plugins: monitoring plugins and tuning plugins . Monitoring plugins are used to get information from a running system. Currently, the following monitoring plugins are implemented: disk Gets disk load (number of IO operations) per device and measurement interval. net Gets network load (number of transferred packets) per network card and measurement interval. load Gets CPU load per CPU and measurement interval. The output of the monitoring plugins can be used by tuning plugins for dynamic tuning. 
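Both of the switches mentioned above live in the /etc/tuned/tuned-main.conf file; set whichever applies to your use case. A minimal excerpt using the values described above:
# /etc/tuned/tuned-main.conf (excerpt)
# To enable no-daemon mode (apply the settings once and exit), set:
daemon = 0
# To globally enable dynamic tuning (disabled by default), set:
dynamic_tuning = 1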
Currently implemented dynamic tuning algorithms try to balance the performance and powersave and are therefore disabled in the performance profiles (dynamic tuning for individual plugins can be enabled or disabled in the tuned profiles). Monitoring plugins are automatically instantiated whenever their metrics are needed by any of the enabled tuning plugins. If two tuning plugins require the same data, only one instance of the monitoring plugin is created and the data is shared. Each tuning plugin tunes an individual subsystem and takes several parameters that are populated from the tuned profiles. Each subsystem can have multiple devices (for example, multiple CPUs or network cards) that are handled by individual instances of the tuning plugins. Specific settings for individual devices are also supported. The supplied profiles use wildcards to match all devices of individual subsystems (for details on how to change this, refer to Section 3.1.3, "Custom Profiles" ), which allows the plugins to tune these subsystems according to the required goal (selected profile) and the only thing that the user needs to do is to select the correct tuned profile. Currently, the following tuning plugins are implemented (only some of these plugins implement dynamic tuning, parameters supported by plugins are also listed): cpu Sets the CPU governor to the value specified by the governor parameter and dynamically changes the PM QoS CPU DMA latency according to the CPU load. If the CPU load is lower than the value specified by the load_threshold parameter, the latency is set to the value specified by the latency_high parameter, otherwise it is set to value specified by latency_low . Also the latency can be forced to a specific value without being dynamically changed further. This can be accomplished by setting the force_latency parameter to the required latency value. eeepc_she Dynamically sets the FSB speed according to the CPU load; this feature can be found on some netbooks and is also known as the Asus Super Hybrid Engine. If the CPU load is lower or equal to the value specified by the load_threshold_powersave parameter, the plugin sets the FSB speed to the value specified by the she_powersave parameter (for details about the FSB frequencies and corresponding values, see the kernel documentation, the provided defaults should work for most users). If the CPU load is higher or equal to the value specified by the load_threshold_normal parameter, it sets the FSB speed to the value specified by the she_normal parameter. Static tuning is not supported and the plugin is transparently disabled if the hardware support for this feature is not detected. net Configures wake-on-lan to the values specified by the wake_on_lan parameter (it uses same syntax as the ethtool utility). It also dynamically changes the interface speed according to the interface utilization. sysctl Sets various sysctl settings specified by the plugin parameters. The syntax is name = value , where name is the same as the name provided by the sysctl tool. Use this plugin if you need to change settings that are not covered by other plugins (but prefer specific plugins if the settings are covered by them). usb Sets autosuspend timeout of USB devices to the value specified by the autosuspend parameter. The value 0 means that autosuspend is disabled. vm Enables or disables transparent huge pages depending on the Boolean value of the transparent_hugepages parameter. 
audio Sets the autosuspend timeout for audio codecs to the value specified by the timeout parameter. Currently snd_hda_intel and snd_ac97_codec are supported. The value 0 means that the autosuspend is disabled. You can also enforce the controller reset by setting the Boolean parameter reset_controller to true . disk Sets the elevator to the value specified by the elevator parameter. It also sets ALPM to the value specified by the alpm parameter, ASPM to the value specified by the aspm parameter, scheduler quantum to the value specified by the scheduler_quantum parameter, disk spindown timeout to the value specified by the spindown parameter, disk readahead to the value specified by the readahead parameter, and can multiply the current disk readahead value by the constant specified by the readahead_multiply parameter. In addition, this plugin dynamically changes the advanced power management and spindown timeout setting for the drive according to the current drive utilization. The dynamic tuning can be controlled by the Boolean parameter dynamic and is enabled by default. Note Applying a tuned profile which stipulates a different disk readahead value overrides the disk readahead value settings if they have been configured using a udev rule. Red Hat recommends using the tuned tool to adjust the disk readahead values. mounts Enables or disables barriers for mounts according to the Boolean value of the disable_barriers parameter. script This plugin can be used for the execution of an external script that is run when the profile is loaded or unloaded. The script is called by one argument which can be start or stop (it depends on whether the script is called during the profile load or unload). The script file name can be specified by the script parameter. Note that you need to correctly implement the stop action in your script and revert all setting you changed during the start action, otherwise the roll-back will not work. For your convenience, the functions Bash helper script is installed by default and allows you to import and use various functions defined in it. Note that this functionality is provided mainly for backwards compatibility and it is recommended that you use it as the last resort and prefer other plugins if they cover the required settings. sysfs Sets various sysfs settings specified by the plugin parameters. The syntax is name = value , where name is the sysfs path to use. Use this plugin in case you need to change some settings that are not covered by other plugins (please prefer specific plugins if they cover the required settings). video Sets various powersave levels on video cards (currently only the Radeon cards are supported). The powersave level can be specified by using the radeon_powersave parameter. Supported values are: default , auto , low , mid , high , and dynpm . For details, refer to http://www.x.org/wiki/RadeonFeature#KMS_Power_Management_Options . Note that this plugin is experimental and the parameter may change in the future releases. bootloader Adds parameters to the kernel boot command line. This plugin supports the legacy GRUB 1, GRUB 2, and also GRUB with Extensible Firmware Interface (EFI). Customized non-standard location of the grub2 configuration file can be specified by the grub2_cfg_file option. The parameters are added to the current grub configuration and its templates. The machine needs to be rebooted for the kernel parameters to take effect. The parameters can be specified by the following syntax: 3.1.2. 
Installation and Usage To install the tuned package, run, as root, the following command: Installation of the tuned package also presets the profile which should be the best for your system. Currently the default profile is selected according to the following customizable rules: throughput-performance This is pre-selected on Red Hat Enterprise Linux 7 operating systems which act as compute nodes. The goal on such systems is the best throughput performance. virtual-guest This is pre-selected on virtual machines. The goal is best performance. If you are not interested in best performance, you would probably like to change it to the balanced or powersave profile (see below). balanced This is pre-selected in all other cases. The goal is balanced performance and power consumption. To start tuned, run, as root, the following command: To enable tuned to start every time the machine boots, type the following command: For other tuned control, such as the selection of profiles, use: This command requires the tuned service to be running. To view the available installed profiles, run: To view the currently activated profile, run: To select or activate a profile, run: For example: As an experimental feature, it is possible to select more profiles at once. The tuned application will try to merge them during the load. If there are conflicts, the settings from the last specified profile take precedence. This is done automatically and there is no checking whether the resulting combination of parameters makes sense. If used without thinking, the feature may tune some parameters the opposite way, which may be counterproductive. An example of such a situation would be setting the disk for high throughput by using the throughput-performance profile and concurrently setting the disk spindown to a low value by using the spindown-disk profile. The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, with low power consumption being the priority: To let tuned recommend the most suitable profile for your system without changing any existing profiles, using the same logic as used during the installation, run the following command: Tuned itself has additional options that you can use when you run it manually. However, this is not recommended and is mostly intended for debugging purposes. The available options can be viewed using the following command: 3.1.3. Custom Profiles Distribution-specific profiles are stored in the /usr/lib/tuned/ directory. Each profile has its own directory. The profile consists of the main configuration file called tuned.conf, and optionally other files, for example helper scripts. If you need to customize a profile, copy the profile directory into the /etc/tuned/ directory, which is used for custom profiles. If there are two profiles of the same name, the profile included in /etc/tuned/ is used. You can also create your own profile in the /etc/tuned/ directory to use a profile included in /usr/lib/tuned/ with only certain parameters adjusted or overridden. The tuned.conf file contains several sections. There is one [main] section. The other sections are configurations for plugin instances. All sections are optional, including the [main] section. Lines starting with the hash sign (#) are comments. The [main] section has the following option: include= profile The specified profile will be included, e.g. include=powersave will include the powersave profile.
Sections describing plugins instances are formatted in the following way: NAME is the name of the plugin instance as it is used in the logs. It can be an arbitrary string. TYPE is the type of the tuning plugin. For a list and descriptions of the tuning plugins refer to Section 3.1.1, "Plug-ins" . DEVICES is the list of devices this plugin instance will handle. The devices line can contain a list, a wildcard (*), and negation (!). You can also combine rules. If there is no devices line all devices present or later attached on the system of the TYPE will be handled by the plugin instance. This is same as using devices=* . If no instance of the plugin is specified, the plugin will not be enabled. If the plugin supports more options, they can be also specified in the plugin section. If the option is not specified, the default value will be used (if not previously specified in the included plugin). For the list of plugin options refer to Section 3.1.1, "Plug-ins" ). Example 3.1. Describing Plug-ins Instances The following example will match everything starting with sd , such as sda or sdb , and does not disable barriers on them: The following example will match everything except sda1 and sda2 : In cases where you do not need custom names for the plugin instance and there is only one definition of the instance in your configuration file, Tuned supports the following short syntax: In this case, it is possible to omit the type line. The instance will then be referred to with a name, same as the type. The example could be then rewritten into: If the same section is specified more than once using the include option, then the settings are merged. If they cannot be merged due to a conflict, the last conflicting definition overrides the settings in conflict. Sometimes, you do not know what was previously defined. In such cases, you can use the replace boolean option and set it to true . This will cause all the definitions with the same name to be overwritten and the merge will not happen. You can also disable the plugin by specifying the enabled=false option. This has the same effect as if the instance was never defined. Disabling the plugin can be useful if you are redefining the definition from the include option and do not want the plugin to be active in your custom profile. The following is an example of a custom profile that is based on the balanced profile and extends it the way that ALPM for all devices is set to the maximal powersaving. The following is an example of a custom profile that adds isolcpus=2 to the kernel boot command line: The machine needs to be rebooted after the profile is applied for the changes to take effect. 3.1.4. Tuned-adm A detailed analysis of a system can be very time-consuming. Red Hat Enterprise Linux 7 includes a number of predefined profiles for typical use cases that you can easily activate with the tuned-adm utility. You can also create, modify, and delete profiles. To list all available profiles and identify the current active profile, run: To only display the currently active profile, run: To switch to one of the available profiles, run: for example: To disable all tuning: The following is a list of pre-defined profiles for typical use cases: Note The following profiles may or may not be installed with the base package, depending on the repo files being used. 
To see the tuned profiles installed on your system, run the following command as root: To see the list of available tuned profiles to install, run the following command as root: To install a tuned profile on your system, run the following command as root: Replace profile-name with the profile you want to install. balanced The default power-saving profile. It is intended to be a compromise between performance and power consumption. It tries to use auto-scaling and auto-tuning whenever possible. It has good results for most loads. The only drawback is the increased latency. In the current tuned release it enables the CPU, disk, audio and video plugins and activates the conservative governor. The radeon_powersave is set to auto . powersave A profile for maximum power saving. It can throttle the performance in order to minimize the actual power consumption. In the current tuned release it enables USB autosuspend, Wi-Fi power saving and ALPM power savings for SATA host adapters. It also schedules multi-core power savings for systems with a low wakeup rate and activates the ondemand governor. It enables AC97 audio power saving or, depending on your system, HDA-Intel power savings with a 10-second timeout. If your system contains a supported Radeon graphics card with KMS enabled, the card is configured for automatic power saving. On Asus Eee PCs a dynamic Super Hybrid Engine is enabled. Note The powersave profile may not always be the most efficient. Consider a defined amount of work that needs to be done, for example, a video file that needs to be transcoded. Your machine can consume less energy if the transcoding is done at full power, because the task finishes quickly, the machine starts to idle, and it can automatically step down to very efficient power-save modes. On the other hand, if you transcode the file on a throttled machine, the machine consumes less power during the transcoding, but the process takes longer and the overall consumed energy can be higher. That is why the balanced profile is generally a better option. throughput-performance A server profile optimized for high throughput. It disables power-saving mechanisms, enables sysctl settings that improve the throughput performance of disk and network I/O, and switches to the deadline scheduler. The CPU governor is set to performance . latency-performance A server profile optimized for low latency. It disables power-saving mechanisms and enables sysctl settings that improve latency. The CPU governor is set to performance and the CPU is locked to low C-states (by PM QoS). network-latency A profile for low latency network tuning. It is based on the latency-performance profile. It additionally disables transparent huge pages and NUMA balancing, and tunes several other network-related sysctl parameters. network-throughput A profile for network throughput tuning. It is based on the throughput-performance profile. It additionally increases kernel network buffers. virtual-guest A profile designed for Red Hat Enterprise Linux 7 virtual machines as well as VMware guests based on the enterprise-storage profile that, among other tasks, decreases virtual memory swappiness and increases disk readahead values. It does not disable disk barriers. virtual-host A profile designed for virtual hosts based on the enterprise-storage profile that, among other tasks, decreases virtual memory swappiness, increases disk readahead values and sets a more aggressive value for dirty pages. 
oracle A profile optimized for Oracle databases loads based on throughput-performance profile. It additionally disables transparent huge pages and modifies some other performance related kernel parameters. This profile is provided by tuned-profiles-oracle package. It is available in Red Hat Enterprise Linux 6.8 and later. desktop A profile optimized for desktops, based on the balanced profile. It additionally enables scheduler autogroups for better response of interactive applications. cpu-partitioning The cpu-partitioning profile partitions the system CPUs into isolated and housekeeping CPUs. To reduce jitter and interruptions on an isolated CPU, the profile clears the isolated CPU from user-space processes, movable kernel threads, interrupt handlers, and kernel timers. A housekeeping CPU can run all services, shell processes, and kernel threads. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file. The configuration options are: isolated_cores= cpu-list Lists CPUs to isolate. The list of isolated CPUs is comma-separated or the user can specify the range. You can specify a range using a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. no_balance_cores= cpu-list Lists CPUs which are not considered by the kernel during system wide process load-balancing. This option is optional. This is usually the same list as isolated_cores . For more information on cpu-partitioning , see the tuned-profiles-cpu-partitioning (7) man page. Note There may be more product specific or third party Tuned profiles available. Such profiles are usually provided by separate RPM packages. Additional predefined profiles can be installed with the tuned-profiles-compat package available in the Optional channel. These profiles are intended for backward compatibility and are no longer developed. The generalized profiles from the base package will mostly perform the same or better. If you do not have specific reason for using them, please prefer the above mentioned profiles from the base package. The compat profiles are following: default This has the lowest impact on power saving of the available profiles and only enables CPU and disk plugins of tuned . desktop-powersave A power-saving profile directed at desktop systems. Enables ALPM power saving for SATA host adapters as well as the CPU, Ethernet, and disk plugins of tuned . laptop-ac-powersave A medium-impact power-saving profile directed at laptops running on AC. Enables ALPM powersaving for SATA host adapters, Wi-Fi power saving, as well as the CPU, Ethernet, and disk plugins of tuned . laptop-battery-powersave A high-impact power-saving profile directed at laptops running on battery. In the current tuned implementation it is an alias for the powersave profile. spindown-disk A power-saving profile for machines with classic HDDs to maximize spindown time. It disables the tuned power savings mechanism, disables USB autosuspend, disables Bluetooth, enables Wi-Fi power saving, disables logs syncing, increases disk write-back time, and lowers disk swappiness. All partitions are remounted with the noatime option. enterprise-storage A server profile directed at enterprise-class storage, maximizing I/O throughput. It activates the same settings as the throughput-performance profile, multiplies readahead settings, and disables barriers on non-root and non-boot partitions. 
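As a sketch of the cpu-partitioning configuration described above (the CPU numbers are placeholders for your own topology), /etc/tuned/cpu-partitioning-variables.conf could contain:

isolated_cores=2-7
no_balance_cores=2-7

You would then activate the profile with tuned-adm profile cpu-partitioning as root.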
Note Use the atomic-host profile on physical machines, and the atomic-guest profile on virtual machines. To enable the tuned profiles for Red Hat Enterprise Linux Atomic Host, install the tuned-profiles-atomic package. Run, as root, the following command: The two tuned profiles for Red Hat Enterprise Linux Atomic Host are: atomic-host A profile optimized for Red Hat Enterprise Linux Atomic Host, when used as a host system on a bare-metal server, using the throughput-performance profile. It additionally increases SELinux AVC cache, PID limit, and tunes netfilter connections tracking. atomic-guest A profile optimized for Red Hat Enterprise Linux Atomic Host, when used as a guest system based on the virtual-guest profile. It additionally increases SELinux AVC cache, PID limit, and tunes netfilter connections tracking. Note There may be more product-specific or third-party tuned profiles available. These profiles are usually provided by separate RPM packages. Three tuned profiles are available that enable to edit the kernel command line: realtime , realtime-virtual-host and realtime-virtual-guest . To enable the realtime profile, install the tuned-profiles-realtime package. Run, as root, the following command: To enable the realtime-virtual-host and realtime-virtual-guest profiles, install the tuned-profiles-nfv package. Run, as root, the following command: 3.1.5. powertop2tuned The powertop2tuned utility is a tool that allows you to create custom tuned profiles from the PowerTOP suggestions. To install the powertop2tuned application, run the following command as root: To create a custom profile, run the following command as root: By default it creates the profile in the /etc/tuned directory and it bases it on the currently selected tuned profile. For safety reasons all PowerTOP tunings are initially disabled in the new profile. To enable them uncomment the tunings of your interest in the /etc/tuned/ profile /tuned.conf . You can use the --enable or -e option that will generate the new profile with most of the tunings suggested by PowerTOP enabled. Some dangerous tunings like the USB autosuspend will still be disabled. If you really need them you will have to uncomment them manually. By default, the new profile is not activated. To activate it run the following command: For a complete list of the options powertop2tuned supports, type in the following command:
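A sketch of the powertop2tuned workflow described above, with my-powertop-profile as an example name:

powertop2tuned -e my-powertop-profile
vi /etc/tuned/my-powertop-profile/tuned.conf    # review and uncomment the tunings you want
tuned-adm profile my-powertop-profile

Even with the -e option, dangerous tunings such as USB autosuspend remain commented out until you enable them by hand.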
[ "cmdline = arg 1 arg 2 ... arg n .", "install tuned", "systemctl start tuned", "systemctl enable tuned", "tuned-adm", "tuned-adm list", "tuned-adm active", "tuned-adm profile profile", "tuned-adm profile powersave", "tuned-adm profile virtual-guest powersave", "tuned-adm recommend", "tuned --help", "[NAME] type=TYPE devices=DEVICES", "[data_disk] type=disk devices=sd* disable_barriers=false", "[data_disk] type=disk devices=!sda1, !sda2 disable_barriers=false", "[TYPE] devices=DEVICES", "[disk] devices=sdb* disable_barriers=false", "[main] include=balanced [disk] alpm=min_power", "[bootloader] cmdline=isolcpus=2", "tuned-adm list", "tuned-adm active", "tuned-adm profile profile_name", "tuned-adm profile latency-performance", "tuned-adm off", "tuned-adm list", "search tuned-profiles", "install tuned-profiles- profile-name", "install tuned-profiles-atomic", "install tuned-profiles-realtime", "install tuned-profiles-nfv", "install tuned-utils", "powertop2tuned new_profile_name", "tuned-adm profile new_profile_name", "powertop2tuned --help" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-red_hat_enterprise_linux-performance_tuning_guide-tuned
B.12. Guest is Unable to Start with Error: warning: could not open /dev/net/tun
B.12. Guest is Unable to Start with Error: warning: could not open /dev/net/tun Symptom The guest virtual machine does not start after configuring a type='ethernet' (also known as 'generic ethernet') interface in the host system. An error appears either in libvirtd.log , /var/log/libvirt/qemu/ name_of_guest .log , or in both, similar to the below message: Investigation Use of the generic ethernet interface type ( <interface type='ethernet'> ) is discouraged, because using it requires lowering the level of host protection against potential security flaws in QEMU and its guests. However, it is sometimes necessary to use this type of interface to take advantage of some other facility that is not yet supported directly in libvirt . For example, openvswitch was not supported in libvirt until libvirt-0.9.11 , so in older versions of libvirt , <interface type='ethernet'> was the only way to connect a guest to an openvswitch bridge. However, if you configure a <interface type='ethernet'> interface without making any other changes to the host system, the guest virtual machine will not start successfully. The reason for this failure is that for this type of interface, a script called by QEMU needs to manipulate the tap device. However, with type='ethernet' configured, in an attempt to lock down QEMU , libvirt and SELinux have put in place several checks to prevent this. (Normally, libvirt performs all of the tap device creation and manipulation, and passes an open file descriptor for the tap device to QEMU .) Solution Reconfigure the host system to be compatible with the generic ethernet interface. Procedure B.4. Reconfiguring the host system to use the generic ethernet interface Set SELinux to permissive by configuring SELINUX=permissive in /etc/selinux/config : From a root shell, run the command setenforce permissive . In /etc/libvirt/qemu.conf add or edit the following lines: Restart libvirtd . Important Since each of these steps significantly decreases the host's security protections against QEMU guest domains, this configuration should only be used if there is no alternative to using <interface type='ethernet'> . Note For more information on SELinux, refer to the Red Hat Enterprise Linux 6 Security-Enhanced Linux User Guide .
[ "warning: could not open /dev/net/tun: no virtual network emulation qemu-kvm: -netdev tap,script=/etc/my-qemu-ifup,id=hostnet0: Device 'tap' could not be initialized", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX=permissive SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "clear_emulator_capabilities = 0", "user = \"root\"", "group = \"root\"", "cgroup_device_acl = [ \"/dev/null\", \"/dev/full\", \"/dev/zero\", \"/dev/random\", \"/dev/urandom\", \"/dev/ptmx\", \"/dev/kvm\", \"/dev/kqemu\", \"/dev/rtc\", \"/dev/hpet\", \"/dev/net/tun\"," ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/app_generic_ethernet
Chapter 14. Storage
Chapter 14. Storage rescan-scsi-bus.sh now correctly interprets multiple word device descriptions The rescan-scsi-bus.sh script, found in the sg3_utils package, previously misinterpreted SCSI device types that were described using more than one word, such as Medium Changer or Optical Device . Consequently, when the script was run on systems that had such device types attached, the script printed multiple misleading error messages. With this update, device types described with multiple words are handled correctly, and the proper device type description is returned to the user without any errors. (BZ#1210438) rescan-scsi-bus.sh no longer removes /dev/null When running the rescan-scsi-bus.sh script, due to incorrect syntax in redirecting output to the /dev/null device file while executing the /bin/rm utility, the redirection did not happen but /dev/null was instead interpreted as a file to be removed. As a consequence, running rescan-scsi-bus.sh with the --update option removed /dev/null during execution. This bug has been fixed, and /dev/null is no longer removed by rescan-scsi-bus.sh . (BZ# 1245302 ) Additional result codes are now recognized by sg_persist Previously, some SCSI hosts could return result codes which were not recognized by sg_persist , causing it to output an error message claiming the result code is invalid. This update adds additional return codes, such as DID_NEXUS_FAILURE , and the problem no longer occurs. (BZ#886611) iSCSI boot works correctly in Multi Function mode Due to incorrect handling of Multi Function mode when dealing with the bnx2x driver, booting iSCSI from Storage Area Network (SAN) did not work correctly for some Host Bus Adapters (HBAs). The underlying source code has been fixed, and iSCSI boot now works correctly in Multi Function mode. (BZ#1276545)
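To illustrate the class of mistake described for rescan-scsi-bus.sh (this is not the actual script code, only a sketch of the pattern): when the redirection operator is missing, /dev/null becomes a positional argument and is deleted along with the intended file, whereas the corrected form discards the output instead:

/bin/rm -f "$tmpfile" /dev/null          # broken: /dev/null is treated as a file to remove
/bin/rm -f "$tmpfile" > /dev/null 2>&1   # intended: output is redirected to /dev/null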
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/bug_fixes_storage
Chapter 95. ExternalConfigurationEnvVarSource schema reference
Chapter 95. ExternalConfigurationEnvVarSource schema reference Used in: ExternalConfigurationEnv Property Property type Description secretKeyRef SecretKeySelector Reference to a key in a Secret. configMapKeyRef ConfigMapKeySelector Reference to a key in a ConfigMap.
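For context, this type supplies the value of an ExternalConfigurationEnv entry, typically under a KafkaConnect resource's spec.externalConfiguration . A sketch (the Secret, ConfigMap, and key names are placeholders):

externalConfiguration:
  env:
    - name: AWS_ACCESS_KEY_ID
      valueFrom:
        secretKeyRef:
          name: aws-creds
          key: access-key
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: connect-settings
          key: log-level

Each environment variable draws its value from exactly one of the two references.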
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ExternalConfigurationEnvVarSource-reference
Chapter 9. Installation configuration parameters for vSphere
Chapter 9. Installation configuration parameters for vSphere Before you deploy an OpenShift Container Platform cluster on vSphere, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 9.1. Available installation configuration parameters for vSphere The following tables specify the required, optional, and vSphere-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 9.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 9.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 9.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Consider the following information before you configure network parameters for your cluster: If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you deployed nodes in an OpenShift Container Platform cluster with a network that supports both IPv4 and non-link-local IPv6 addresses, configure your cluster to use a dual-stack network. For clusters configured for dual-stack networking, both IPv4 and IPv6 traffic must use the same network interface as the default gateway. This ensures that in a multiple network interface controller (NIC) environment, a cluster can detect what NIC to use based on the available network interface. For more information, see "OVN-Kubernetes IPv6 and dual-stack limitations" in About the OVN-Kubernetes network plugin . 
To prevent network connectivity issues, do not install a single-stack IPv4 cluster on a host that supports dual-stack networking. Note On VMware vSphere, dual-stack networking can specify either IPv4 or IPv6 as the primary address family. If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Table 9.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 9.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 9.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . 
String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 9.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 9.4. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. An array of failure domain configuration objects. The name of the failure domain. String If you define multiple failure domains for your cluster, you must attach the tag to each vCenter data center. To define a region, use a tag from the openshift-region tag category. 
For a single vSphere data center environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere data center environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String The path to the vSphere compute cluster. String Lists and defines the data centers where OpenShift Container Platform virtual machines (VMs) operate. The list of data centers must match the list of data centers specified in the vcenters field. String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the data center virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<data_center_name>/host/<cluster_name>/Resources . String Specifies the absolute path to a pre-existing Red Hat Enterprise Linux CoreOS (RHCOS) image template or virtual machine. The installation program can use the image template or virtual machine to quickly install RHCOS on vSphere hosts. Consider using this parameter as an alternative to uploading an RHCOS image on vSphere hosts. This parameter is available for use only on installer-provisioned infrastructure. String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the data centers where OpenShift Container Platform virtual machines (VMs) operate. The list of data centers must match the list of data centers specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. 
String The username associated with the vSphere user. String 9.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter: Table 9.5. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the data center where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<data_center_name>/host/<cluster_name>/Resources . String, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 9.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 9.6. Optional VMware vSphere machine pool parameters Parameter Description Values The location from which the installation program downloads the Red Hat Enterprise Linux CoreOS (RHCOS) image. Before setting a path value for this parameter, ensure that the default RHCOS boot image in the OpenShift Container Platform release matches the RHCOS image template or virtual machine version; otherwise, cluster installation might fail. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . The size of the disk in gigabytes. Integer The total number of virtual processor cores to assign a virtual machine. 
The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer The size of a virtual machine's memory in megabytes. Integer
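To tie the vSphere-specific parameters together, the following is a sketch of the platform stanza of an install-config.yaml ; every name, path, and address is a placeholder to be replaced with values from your own environment:

platform:
  vsphere:
    apiVIPs:
      - 192.168.100.10
    ingressVIPs:
      - 192.168.100.11
    failureDomains:
      - name: fd-1
        region: region-1
        zone: zone-1
        server: vcenter.example.com
        topology:
          computeCluster: /dc-1/host/cluster-1
          datacenter: dc-1
          datastore: /dc-1/datastore/datastore-1
          networks:
            - VM_Network
          folder: /dc-1/vm/ocp-cluster
    vcenters:
      - server: vcenter.example.com
        user: [email protected]
        password: <password>
        datacenters:
          - dc-1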
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "platform: vsphere:", "platform: vsphere: apiVIPs:", "platform: vsphere: diskType:", "platform: vsphere: failureDomains:", "platform: vsphere: failureDomains: name:", "platform: vsphere: failureDomains: region:", "platform: vsphere: failureDomains: server:", "platform: vsphere: failureDomains: zone:", "platform: vsphere: failureDomains: topology: computeCluster:", "platform: vsphere: failureDomains: topology: datacenter:", "platform: vsphere: failureDomains: topology: datastore:", "platform: vsphere: failureDomains: topology: folder:", "platform: vsphere: failureDomains: topology: networks:", "platform: vsphere: failureDomains: topology: resourcePool:", "platform: vsphere: failureDomains: topology template:", "platform: vsphere: ingressVIPs:", "platform: vsphere: vcenters:", "platform: vsphere: vcenters: datacenters:", "platform: vsphere: vcenters: password:", "platform: vsphere: vcenters: port:", "platform: vsphere: vcenters: server:", "platform: vsphere: vcenters: user:", "platform: vsphere: apiVIP:", "platform: vsphere: cluster:", "platform: vsphere: datacenter:", "platform: vsphere: defaultDatastore:", "platform: vsphere: folder:", "platform: vsphere: ingressVIP:", "platform: vsphere: network:", "platform: vsphere: password:", "platform: vsphere: resourcePool:", "platform: vsphere: username:", "platform: vsphere: vCenter:", "platform: vsphere: clusterOSImage:", "platform: vsphere: osDisk: diskSizeGB:", "platform: vsphere: cpus:", "platform: vsphere: coresPerSocket:", "platform: vsphere: memoryMB:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_vsphere/installation-config-parameters-vsphere
Chapter 7. Bucket policies in the Multicloud Object Gateway
Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications . A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using an AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the MCG command-line interface for easier management. Note Specify the appropriate architecture for enabling the repositories using the subscription manager. For IBM Power, use the following command: For IBM Z, use the following command: Alternatively, you can install the MCG package from the OpenShift Data Foundation RPMs found on the Download Red Hat OpenShift Data Foundation page . Note Choose the correct Product Variant according to your architecture. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --default_resource Sets the default resource. The new buckets are created on this default resource (including future ones). Note To give MCG accounts access to certain buckets, use AWS S3 bucket policies. For more information, see Using bucket policies in the AWS documentation.
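As a further sketch (the bucket name and the generated account name are placeholders), a policy that grants the account NooBaa created for an object bucket claim read and list access could look like this; it is applied with the same put-bucket-policy command shown above:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOBCAccountRead",
      "Effect": "Allow",
      "Principal": [ "[email protected]" ],
      "Action": [ "s3:GetObject", "s3:ListBucket" ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}

Remember that bucket policy conditions are not supported, so keep statements to principals, actions, and resources.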
[ "{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }", "aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy", "aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-x86_64-rpms yum install mcg", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-ppc64le-rpms", "subscription-manager repos --enable=rh-odf-4-for-rhel-8-s390x-rpms", "noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--default_resource='']" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/bucket-policies-in-the-multicloud-object-gateway
function::fullpath_struct_nameidata
function::fullpath_struct_nameidata Name function::fullpath_struct_nameidata - get the full nameidata path Synopsis Arguments nd Pointer to " struct nameidata " . Description Returns the full dirent name (full path to the root), like the kernel (and systemtap-tapset) d_path function, with a " / " .
[ "fullpath_struct_nameidata(nd:)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-fullpath-struct-nameidata
Chapter 2. Installing insights-client
Chapter 2. Installing insights-client You can install Red Hat Insights for Red Hat Enterprise Linux on an existing system that is managed by Red Hat infrastructure, or you can install it on a minimal installation of Red Hat Enterprise Linux. After you install the Insights client, you need to register your system. For more information about registering systems, refer to: Configuring authentication 2.1. Installing the Insights client on an existing system managed by Red Hat Cloud Access Use these instructions to deploy Red Hat Insights for Red Hat Enterprise Linux on an existing Red Hat Enterprise Linux (RHEL) system connected to Red Hat Cloud Access. Prerequisites Root-level access for the system. Procedure Enter the following command to install the current version of the Insights client package: RHEL versions 6 and 7 RHEL version 8 and later Note Insights client installation on older versions RHEL versions 6 and 7 do not come with the Insights client pre-installed. If you have one of these versions, run the following commands in your terminal: 2.2. Installing insights-client on an existing system managed by Red Hat Update Infrastructure Use these instructions to deploy Insights for Red Hat Enterprise Linux on an existing, cloud marketplace-purchased Red Hat Enterprise Linux system managed by Red Hat Update Infrastructure (RHUI). Prerequisites Root-level access for the system. Procedure Enter the following command to install the current version of the Insights client package: RHEL versions 6 and 7 RHEL version 8 and later 2.3. How the Insights client CLI and configuration file interact The Insights client runs automatically, according to its scheduler settings. By default, it runs every 24 hours. To run the client interactively, enter the insights-client command. When you run insights-client , the following values and settings determine the results: Values that you enter when you run insights-client from the CLI temporarily override the preset configuration file settings and system environment settings. Any values that you enter for options in the insights-client command are used only for that instance of Insights client. Settings in the configuration file ( /etc/insights-client/insights-client.conf ) override system environment settings. Values of any system environment variables ( printenv ) are not affected by the CLI or the client configuration files. Note If you are running RHEL 6.9 or earlier, use redhat-access-insights to run the Insights client. 2.4. Installing Insights client on a minimal installation of RHEL The Insights client is not automatically installed on systems running the minimal installation of Red Hat Enterprise Linux 8. For more information about minimal installations, see Configuring software selection in Performing a standard RHEL installation . Prerequisites Root-level access to the system. Procedure To create a minimal installation with the Insights client, select Minimal Installation from the RHEL Software Selection options in the Anaconda installer. Make sure to select the Standard checkbox in the Additional Software for Selected Environment section. The Standard option includes the insights-client package in the RHEL installation. If you do not select the Standard checkbox, RHEL installs without the insights-client package. If that happens, use dnf install to install the Insights client at a later time. Additional resources Configuring software selection Performing a standard RHEL installation 2.5. 
How to resolve the Insights client real-time scheduling issue The Insights client executes a number of commands that collect data on your system. Therefore, it has a configuration restriction that limits its CPU usage to no more than 30%. This restriction is defined in the configuration file: insights-client-boot.service: CPUQuota=30% This configuration prevents the Insights client from creating a CPU spike on your system. This spike could interfere with other applications running on your system. Specifically, it could prevent applications that depend on real-time scheduling from initiating. If you need to enable real-time scheduling, you can disable the CPU quota restriction. The risk of removing this configuration is minimal. However, it is possible that when the Insights client runs, the CPU usage may become unusually high. If this situation occurs and negatively impacts other services on your system, please contact Red Hat support for assistance. Additional resources How to Remove the CPU quota . How do I open and manage a support case on the Customer Portal?
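One way to lift the restriction, assuming the unit name quoted above, is a systemd drop-in that clears the quota; an empty CPUQuota= resets any previously configured value. This is a sketch rather than a prescribed procedure, so verify it against the Red Hat knowledge base article referenced below:

# as root
systemctl edit insights-client-boot.service

# add the following in the editor that opens, then save:
[Service]
CPUQuota=

# reload systemd so the drop-in takes effect
systemctl daemon-reload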
[ "yum install insights-client", "dnf install insights-client", "yum install insights-client", "yum install insights-client", "dnf install insights-client" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/client_configuration_guide_for_red_hat_insights/assembly-client-cg-installation
Chapter 10. OpenShift deployment options with the RHPAM Kogito Operator
Chapter 10. OpenShift deployment options with the RHPAM Kogito Operator After you create your Red Hat build of Kogito microservices as part of a business application, you can use the Red Hat OpenShift Container Platform web console to deploy your microservices. The RHPAM Kogito Operator page in the OpenShift web console guides you through the deployment process. The RHPAM Kogito Operator supports the following options for building and deploying Red Hat build of Kogito microservices on Red Hat OpenShift Container Platform: Git source build and deployment Binary build and deployment Custom image build and deployment File build and deployment 10.1. Deploying Red Hat build of Kogito microservices on OpenShift using Git source build and OpenShift web console The RHPAM Kogito Operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoBuild builds an application using the Git URL or other sources and produces a runtime image. KogitoRuntime starts the runtime image and configures it as per your requirements. In most use cases, you can use the standard runtime build and deployment method to deploy Red Hat build of Kogito microservices on OpenShift from a Git repository source, as shown in the following procedure. Note If you are developing or testing your Red Hat build of Kogito microservice locally, you can use the binary build, custom image build, or file build option to build and deploy from a local source instead of from a Git repository. Prerequisites The RHPAM Kogito Operator is installed. The application with your Red Hat build of Kogito microservices is in a Git repository that is reachable from your OpenShift environment. You have access to the OpenShift web console with the necessary permissions to create and edit KogitoBuild and KogitoRuntime . (Red Hat build of Quarkus only) The pom.xml file of your project contains the following dependency for the quarkus-smallrye-health extension. This extension enables the liveness and readiness probes that are required for Red Hat build of Quarkus projects on OpenShift. SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Procedure Go to Operators Installed Operators and select RHPAM Kogito Operator . To create the Red Hat build of Kogito build definition, on the operator page, select the Kogito Build tab and click Create KogitoBuild . In the application window, use Form View or YAML View to configure the build definition. 
At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-quarkus-example # Git folder location of application Example YAML definition for a Spring Boot application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-springboot-example # Git folder location of application Note If you configured an internal Maven repository, you can use it as a Maven mirror service and specify the Maven mirror URL in your Red Hat build of Kogito build definition to shorten build time substantially: spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/ For more information about internal Maven repositories, see the Apache Maven documentation. After you define your application data, click Create to generate the Red Hat build of Kogito build. Your application is listed in the Red Hat build of KogitoBuilds page. You can select the application name to view or modify application settings and YAML details. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime . In the application window, use Form View or YAML View to configure the microservice definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot Note In this case, the application is built from Git and deployed using KogitoRuntime. You must ensure that the application name is same in KogitoBuild and KogitoRuntime . After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. In the left menu of the web console, go to Builds Builds to view the status of your application build. You can select a specific build to view build details. Note For every Red Hat build of Kogito microservice that you create for OpenShift deployment, two builds are generated and listed in the Builds page in the web console: a traditional runtime build and a Source-to-Image (S2I) build with the suffix -builder . 
The S2I mechanism builds the application in the -builder build and then passes the built application to the runtime build to be packaged into the runtime container image. The Red Hat build of Kogito S2I build configuration also enables you to build the project directly from a Git repository on the OpenShift platform. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. 10.2. Deploying Red Hat build of Kogito microservices on OpenShift using binary build and OpenShift web console OpenShift builds can require a significant amount of time. As a faster alternative for building and deploying your Red Hat build of Kogito microservices on OpenShift, you can use a binary build. The operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoBuild processes an uploaded application and produces a runtime image. KogitoRuntime starts the runtime image and configures it as per your requirements. Prerequisites The RHPAM Kogito Operator is installed. The oc OpenShift CLI is installed and you are logged in to the relevant OpenShift cluster. For oc installation and login instructions, see the OpenShift documentation. You have access to the OpenShift web console with the necessary permissions to create and edit KogitoBuild and KogitoRuntime. (Red Hat build of Quarkus only) The pom.xml file of your project contains the following dependency for the quarkus-smallrye-health extension. This extension enables the liveness and readiness probes that are required for Red Hat build of Quarkus projects on OpenShift. SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Procedure Build an application locally. Go to Operators Installed Operators and select RHPAM Kogito Operator. To create the Red Hat build of Kogito build definition, on the operator page, select the Kogito Build tab and click Create KogitoBuild. In the application window, use Form View or YAML View to configure the build definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: Binary Example YAML definition for a Spring Boot application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: Binary After you define your application data, click Create to generate the Red Hat build of Kogito build. Your application is listed in the Red Hat build of KogitoBuilds page. You can select the application name to view or modify application settings and YAML details.
Upload the built binary using the following command: oc start-build example-quarkus --from-dir=target/ -n namespace In this command, from-dir is the path to the target folder of the built application and namespace is the namespace where the KogitoBuild is created. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime. In the application window, use Form View or YAML View to configure the microservice definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot Note In this case, the application is built locally and deployed using KogitoRuntime. You must ensure that the application name is the same in KogitoBuild and KogitoRuntime. After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. In the left menu of the web console, go to Builds Builds to view the status of your application build. You can select a specific build to view build details. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. 10.3. Deploying Red Hat build of Kogito microservices on OpenShift using custom image build and OpenShift web console You can use a custom image build as an alternative for building and deploying your Red Hat build of Kogito microservices on OpenShift. The operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoRuntime starts the runtime image and configures it as per your requirements. Note The Red Hat Process Automation Manager builder image does not support native builds. However, you can perform a custom build and use a Containerfile to build the container image as shown in the following example: FROM registry.redhat.io/rhpam-7-tech-preview/rhpam-kogito-runtime-native-rhel8:7.13.5 ENV RUNTIME_TYPE quarkus COPY --chown=1001:root target/*-runner $KOGITO_HOME/bin This feature is Technology Preview only. To build the native binary with Mandrel, see Compiling your Quarkus applications to native executables. Prerequisites The RHPAM Kogito Operator is installed. You have access to the OpenShift web console with the necessary permissions to create and edit KogitoRuntime. (Red Hat build of Quarkus only) The pom.xml file of your project contains the following dependency for the quarkus-smallrye-health extension.
This extension enables the liveness and readiness probes that are required for Red Hat build of Quarkus projects on OpenShift. SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Procedure Build an application locally. Create a Containerfile in the project root folder with the following content: Example Containerfile for a Red Hat build of Quarkus application Example Containerfile for a Spring Boot application In the Spring Boot Containerfile, application-jar-file is the name of the JAR file of the application. Build the Red Hat build of Kogito image using the following command: In the command, final-image-name is the name of the Red Hat build of Kogito image and Container-file is the name of the Containerfile that you created in the previous step. Optionally, test the built image using the following command: Push the built Red Hat build of Kogito image to an image registry using the following command: Go to Operators Installed Operators and select RHPAM Kogito Operator. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime. In the application window, use Form View or YAML View to configure the microservice definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate runtime: springboot After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed. 10.4. Deploying Red Hat build of Kogito microservices on OpenShift using file build and OpenShift web console You can build and deploy your Red Hat build of Kogito microservices from a single file, such as a Decision Model and Notation (DMN), Drools Rule Language (DRL), or properties file, or from a directory with multiple files.
You can specify a single file or a file directory from a local file system path only. When you upload the file or directory to an OpenShift cluster, a new Source-to-Image (S2I) build is automatically triggered. The operator uses the following custom resources to deploy domain-specific microservices (the microservices that you develop): KogitoBuild generates an application from a file and produces a runtime image. KogitoRuntime starts the runtime image and configures it as per your requirements. Prerequisites The RHPAM Kogito Operator is installed. The oc OpenShift CLI is installed and you are logged in to the relevant OpenShift cluster. For oc installation and login instructions, see the OpenShift documentation. You have access to the OpenShift web console with the necessary permissions to create and edit KogitoBuild and KogitoRuntime. Procedure Go to Operators Installed Operators and select RHPAM Kogito Operator. To create the Red Hat build of Kogito build definition, on the operator page, select the Kogito Build tab and click Create KogitoBuild. In the application window, use Form View or YAML View to configure the build definition. At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: LocalSource Example YAML definition for a Spring Boot application with Red Hat build of Kogito build apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: LocalSource Note If you configured an internal Maven repository, you can use it as a Maven mirror service and specify the Maven mirror URL in your Red Hat build of Kogito build definition to shorten build time substantially: spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/ For more information about internal Maven repositories, see the Apache Maven documentation. After you define your application data, click Create to generate the Red Hat build of Kogito build. Your application is listed in the Red Hat build of KogitoBuilds page. You can select the application name to view or modify application settings and YAML details. Upload the file asset using the following command: oc start-build example-quarkus-builder --from-file=<file-asset-path> -n namespace In this command, file-asset-path is the path of the file asset that you want to upload and namespace is the namespace where the KogitoBuild is created. To create the Red Hat build of Kogito microservice definition, on the operator page, select the Kogito Runtime tab and click Create KogitoRuntime. In the application window, use Form View or YAML View to configure the microservice definition.
At a minimum, define the application configurations shown in the following example YAML file: Example YAML definition for a Red Hat build of Quarkus application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name Example YAML definition for a Spring Boot application with Red Hat build of Kogito microservices apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot Note In this case, the application is built from a file and deployed using KogitoRuntime. You must ensure that the application name is the same in KogitoBuild and KogitoRuntime. After you define your application data, click Create to generate the Red Hat build of Kogito microservice. Your application is listed in the Red Hat build of Kogito microservice page. You can select the application name to view or modify application settings and the contents of the YAML file. In the left menu of the web console, go to Builds Builds to view the status of your application build. You can select a specific build to view build details. Note For every Red Hat build of Kogito microservice that you create for OpenShift deployment, two builds are generated and listed in the Builds page in the web console: a traditional runtime build and a Source-to-Image (S2I) build with the suffix -builder. The S2I mechanism builds the application in the -builder build and then passes the built application to the runtime build to be packaged into the runtime container image. After the application build is complete, go to Workloads Deployments to view the application deployments, pod status, and other details. After your Red Hat build of Kogito microservice is deployed, in the left menu of the web console, go to Networking Routes to view the access link to the deployed application. You can select the application name to view or modify route settings. With the application route, you can integrate your Red Hat build of Kogito microservices with your business automation solutions as needed.
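If you prefer to verify the deployed microservice from the command line instead of the web console, you can read the route host with the oc client and send a test request. The following commands are a minimal sketch rather than part of the documented procedure; they assume the example-quarkus application name used in this chapter, a <namespace> placeholder of your own, and an <endpoint-path> that matches an endpoint your application actually exposes, for example a health check or a REST endpoint generated from your DMN assets.

# Read the externally reachable host name from the route that OpenShift created for the microservice
ROUTE_HOST=$(oc get route example-quarkus -n <namespace> -o jsonpath='{.spec.host}')

# Send a test request to an endpoint exposed by your application
curl -s "http://${ROUTE_HOST}/<endpoint-path>"

If the request returns the expected response, the route is working and you can point your business automation clients at the same host.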
[ "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-quarkus-example # Git folder location of application", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: RemoteSource gitSource: uri: 'https://github.com/kiegroup/kogito-examples' # Git repository containing application (uses default branch) contextDir: dmn-springboot-example # Git folder location of application", "spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: Binary", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: Binary", "oc start-build example-quarkus --from-dir=target/ -n namespace", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>", "FROM registry.redhat.io/rhpam-7/rhpam-kogito-runtime-jvm-rhel8:7.13.5 ENV RUNTIME_TYPE quarkus COPY target/quarkus-app/lib/ USDKOGITO_HOME/bin/lib/ COPY target/quarkus-app/*.jar USDKOGITO_HOME/bin COPY target/quarkus-app/app/ USDKOGITO_HOME/bin/app/ COPY target/quarkus-app/quarkus/ USDKOGITO_HOME/bin/quarkus/", "FROM registry.redhat.io/rhpam-7/rhpam-kogito-runtime-jvm-rhel8:7.13.5 ENV RUNTIME_TYPE springboot COPY target/<application-jar-file> USDKOGITO_HOME/bin", "build --tag <final-image-name> -f <Container-file>", "run --rm -it -p 8080:8080 <final-image-name>", "push <final-image-name>", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: 
example-springboot # Application name spec: image: <final-image-name> # Kogito image name insecureImageRegistry: true # Can be omitted when image is pushed into secured registry with valid certificate runtime: springboot", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-quarkus # Application name spec: type: LocalSource", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this service kind: KogitoBuild # Application type metadata: name: example-springboot # Application name spec: runtime: springboot type: LocalSource", "spec: mavenMirrorURL: http://nexus3-nexus.apps-crc.testing/repository/maven-public/", "oc start-build example-quarkus-builder --from-file=<file-asset-path> -n namespace", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-quarkus # Application name", "apiVersion: rhpam.kiegroup.org/v1 # Red Hat build of Kogito API for this microservice kind: KogitoRuntime # Application type metadata: name: example-springboot # Application name spec: runtime: springboot" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/con-kogito-operator-deployment-options_deploying-kogito-microservices-on-openshift
Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1]
Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. status object status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 11.1.1. .spec Description spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. Type object Required source Property Type Description source object source specifies where a snapshot will be created from. This field is immutable after creation. Required. volumeSnapshotClassName string VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field. 11.1.2. .spec.source Description source specifies where a snapshot will be created from. This field is immutable after creation. Required. Type object Property Type Description persistentVolumeClaimName string persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exists, and needs to be created. This field is immutable. volumeSnapshotContentName string volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 11.1.3. 
.status Description status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. Type object Property Type Description boundVolumeSnapshotContentName string boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. creationTime string creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. error object error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. readyToUse boolean readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer-or-string restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 11.1.4. .status.error Description error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. 
Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 11.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshots GET : list objects of kind VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots DELETE : delete collection of VolumeSnapshot GET : list objects of kind VolumeSnapshot POST : create a VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} DELETE : delete a VolumeSnapshot GET : read the specified VolumeSnapshot PATCH : partially update the specified VolumeSnapshot PUT : replace the specified VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status GET : read status of the specified VolumeSnapshot PATCH : partially update status of the specified VolumeSnapshot PUT : replace status of the specified VolumeSnapshot 11.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshots Table 11.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind VolumeSnapshot Table 11.2. 
HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty 11.2.2. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots Table 11.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 11.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshot Table 11.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshot Table 11.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 11.8. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshot Table 11.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.10. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.11. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 202 - Accepted VolumeSnapshot schema 401 - Unauthorized Empty 11.2.3. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} Table 11.12. Global path parameters Parameter Type Description name string name of the VolumeSnapshot namespace string object name and auth scope, such as for teams and projects Table 11.13. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a VolumeSnapshot Table 11.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 11.15. Body parameters Parameter Type Description body DeleteOptions schema Table 11.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshot Table 11.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshot Table 11.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.20. Body parameters Parameter Type Description body Patch schema Table 11.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshot Table 11.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.23. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.24. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty 11.2.4. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status Table 11.25. Global path parameters Parameter Type Description name string name of the VolumeSnapshot namespace string object name and auth scope, such as for teams and projects Table 11.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified VolumeSnapshot Table 11.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 11.28. 
HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshot Table 11.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 11.30. Body parameters Parameter Type Description body Patch schema Table 11.31. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshot Table 11.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.33. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.34. HTTP responses HTTP code Response body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty
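As a quick, hedged illustration of the API paths documented above, the following oc commands read the status subresource and perform a server-side dry-run patch against a VolumeSnapshot; the <namespace>, <name>, and label values are placeholders rather than values taken from this reference.
oc get --raw "/apis/snapshot.storage.k8s.io/v1/namespaces/<namespace>/volumesnapshots/<name>/status"
oc patch volumesnapshot <name> -n <namespace> --type merge -p '{"metadata":{"labels":{"backup":"daily"}}}' --dry-run=server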
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/volumesnapshot-snapshot-storage-k8s-io-v1
Chapter 9. Detach volumes after non-graceful node shutdown
Chapter 9. Detach volumes after non-graceful node shutdown This feature allows drivers to automatically detach volumes when a node goes down non-gracefully. 9.1. Overview A graceful node shutdown occurs when the kubelet's node shutdown manager detects the upcoming node shutdown action. Non-graceful shutdowns occur when the kubelet does not detect a node shutdown action, which can occur because of system or hardware failures. Also, the kubelet may not detect a node shutdown action when the shutdown command does not trigger the Inhibitor Locks mechanism used by the kubelet on Linux, or because of a user error, for example, if the shutdownGracePeriod and shutdownGracePeriodCriticalPods details are not configured correctly for that node. With this feature, when a non-graceful node shutdown occurs, you can manually add an out-of-service taint on the node to allow volumes to automatically detach from the node. 9.2. Adding an out-of-service taint manually for automatic volume detachment Prerequisites Access to the cluster with cluster-admin privileges. Procedure To allow volumes to detach automatically from a node after a non-graceful node shutdown: After a node is detected as unhealthy, shut down the worker node. Ensure that the node is shut down by running the following command and checking the status: oc get node <node name> 1 1 <node name> = name of the non-gracefully shut-down node Important If the node is not completely shut down, do not proceed with tainting the node. If the node is still up and the taint is applied, filesystem corruption can occur. Taint the corresponding node object by running the following command: Important Tainting a node this way deletes all pods on that node. This also causes any pods that are backed by StatefulSets to be evicted, and replacement pods to be created on a different node. oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1 1 <node name> = name of the non-gracefully shut-down node After the taint is applied, the volumes detach from the shut-down node, allowing their disks to be attached to a different node. Example The resulting YAML file resembles the following: spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown Restart the node. Remove the taint from the corresponding node object by running the following command: oc adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- 1
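A hedged way to confirm that the detach actually completed, in addition to the steps above, is to check the taint on the node object and the cluster's VolumeAttachment objects; the <node name> placeholder is the same one used in this procedure.
oc get node <node name> -o jsonpath='{.spec.taints}'
oc get volumeattachment | grep <node name>
After the out-of-service taint takes effect, no VolumeAttachment should continue to reference the shut-down node.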
[ "get node <node name> 1", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute 1", "spec: taints: - effect: NoExecute key: node.kubernetes.io/out-of-service value: nodeshutdown", "adm taint node <node name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage/ephemeral-storage-csi-vol-detach-non-graceful-shutdown
Chapter 1. Introduction to Red Hat Quay OAuth 2.0 tokens
Chapter 1. Introduction to Red Hat Quay OAuth 2.0 tokens The Red Hat Quay OAuth 2 token system provides a secure, standards-based method for accessing Red Hat Quay's API and other relevant resources. The OAuth 2 token-based approach provides a secure method for handling authentication and authorization for complex environments. Compared to more traditional API tokens, Red Hat Quay's OAuth 2 token system offers the following enhancements: Standards-based security, which adheres to the OAuth 2.0 protocol . Revocable access by way of deleting the application in which the OAuth 2 token exists. Fine-grained access control, which allows Red Hat Quay administrators to assign specific permissions to tokens. Delegated access, which allows third-party applications and services to act on behalf of a user. Future-proofing, which helps ensure that Red Hat Quay remains compatible with other services, platforms, and integrations. Red Hat Quay primarily supports two types of tokens: OAuth 2 access tokens and robot account tokens. A third token type, an OCI referrers access token, which is required to list OCI referrers of a manifest under a repository, is also available when warranted. The following chapters provide more details about each token type and how to generate it.
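For orientation only, the following is a hedged sketch of how an OAuth 2 access token is typically presented to the Red Hat Quay API as a bearer token; the hostname and the /api/v1/user/ endpoint are illustrative assumptions rather than values defined in this chapter.
curl -s -H "Authorization: Bearer <oauth2_access_token>" https://<quay-server.example.com>/api/v1/user/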
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_api_guide/token-overview
Chapter 3. Deploying OpenShift Data Foundation on Azure Red Hat OpenShift
Chapter 3. Deploying OpenShift Data Foundation on Azure Red Hat OpenShift The Azure Red Hat OpenShift service enables you to deploy fully managed OpenShift clusters. Red Hat OpenShift Data Foundation can be deployed on the Azure Red Hat OpenShift service. Important OpenShift Data Foundation on Azure Red Hat OpenShift is not a managed service offering. Red Hat OpenShift Data Foundation subscriptions are required to have the installation supported by the Red Hat support team. Open support cases by choosing the product as Red Hat OpenShift Data Foundation with the Red Hat support team (and not Microsoft) if you need any assistance for Red Hat OpenShift Data Foundation on Azure Red Hat OpenShift. To install OpenShift Data Foundation on Azure Red Hat OpenShift, follow these sections: Getting a Red Hat pull secret for new deployment of Azure Red Hat OpenShift . Preparing a Red Hat pull secret for existing Azure Red Hat OpenShift clusters . Adding the pull secret to the cluster . Validating your Red Hat pull secret is working . Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster Service . 3.1. Getting a Red Hat pull secret for new deployment of Azure Red Hat OpenShift A Red Hat pull secret enables the cluster to access Red Hat container registries along with additional content. Prerequisites A Red Hat portal account. OpenShift Data Foundation subscription. Procedure To get a Red Hat pull secret for a new deployment of Azure Red Hat OpenShift, follow the steps in the section Get a Red Hat pull secret in the official Microsoft Azure documentation. Note that while creating the Azure Red Hat OpenShift cluster, you may need larger worker nodes, controlled by --worker-vm-size , or more worker nodes, controlled by --worker-count . The recommended worker-vm-size is Standard_D16s_v3 . You can also use dedicated worker nodes; for more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation in the Managing and allocating storage resources guide. 3.2. Preparing a Red Hat pull secret for existing Azure Red Hat OpenShift clusters When you create an Azure Red Hat OpenShift cluster without adding a Red Hat pull secret, a pull secret is still created on the cluster automatically. However, this pull secret is not fully populated. Use this section to update the automatically created pull secret with the additional values from the Red Hat pull secret. Prerequisites An existing Azure Red Hat OpenShift cluster without a Red Hat pull secret. Procedure To prepare a Red Hat pull secret for an existing Azure Red Hat OpenShift cluster, follow the steps in the section Prepare your pull secret in the official Microsoft Azure documentation. 3.3. Adding the pull secret to the cluster Prerequisites A Red Hat pull secret. Procedure Run the following command to update your pull secret. Note Running this command causes the cluster nodes to restart one by one as they are updated. After the secret is set, you can enable the Red Hat Certified Operators. 3.3.1. Modifying the configuration files to enable Red Hat operators To modify the configuration files to enable Red Hat operators, follow the steps in the section Modify the configuration files in the official Microsoft Azure documentation. 3.4. Validating your Red Hat pull secret is working After you add the pull secret and modify the configuration files, the cluster can take several minutes to get updated. 
To check if the cluster has been updated, run the following command to show the Certified Operators and Red Hat Operators sources available: If you do not see the Red Hat Operators, wait for a few minutes and try again. To ensure that your pull secret has been updated and is working correctly, open Operator Hub and check for any Red Hat verified Operator. For example, check if the OpenShift Data Foundation Operator is available, and see if you have permissions to install it. 3.5. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.6. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . 
If you want to use Azure Vault as the key management service provider, make sure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to managed-csi . Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. For cloud platforms with multiple availability zones, ensure that the Nodes are spread across different Locations/availability zones. If the nodes selected do not match the OpenShift Data Foundation cluster requirements of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. 
For minimum starting node requirements, see the Resource requirements section in the Planning guide. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault For information about setting up client authentication and fetching the client credentials in Azure platform, see the Prerequisites section of this procedure. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload Certificate file in .PEM format and the certificate file must include a client certificate and a private key. To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . 
Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitors. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
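In addition to the console checks above, a command-line spot check is often convenient; the following is a minimal, hedged sketch using standard oc queries against the openshift-storage namespace to confirm that the StorageCluster reports Ready and that its pods are running.
oc get storagecluster -n openshift-storage
oc get pods -n openshift-storage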
[ "set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=./pull-secret.json", "oc get catalogsource -A NAMESPACE NAME DISPLAY openshift-marketplace redhat-operators Red Hat Operators TYPE PUBLISHER AGE grpc Red Hat 11s", "oc annotate namespace openshift-storage openshift.io/node-selector=", "patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_microsoft_azure/deploying-openshift-container-storage-on-azure-red-hat-openshift_aro
Chapter 2. Installing the native Data Grid CLI as a client plugin
Chapter 2. Installing the native Data Grid CLI as a client plugin Data Grid provides a command line interface (CLI) compiled to a native executable that you can install as a plugin for oc clients. You can then use your oc client to: Create Data Grid Operator subscriptions and remove Data Grid Operator installations. Set up Data Grid clusters and configure services. Work with Data Grid resources via remote shells. 2.1. Installing the native Data Grid CLI plugin Install the native Data Grid Command Line Interface (CLI) as a plugin for oc clients. Prerequisites Have an oc client. Download the native Data Grid CLI distribution from the Data Grid software downloads . Procedure Extract the .zip archive for the native Data Grid CLI distribution. Copy the native executable, or create a hard link, to a file named "kubectl-infinispan", for example: Add kubectl-infinispan to your PATH . Verify that the CLI is installed. Use the infinispan --help command to view available commands. Additional resources Extending the OpenShift CLI with plug-ins 2.2. kubectl-infinispan command reference This topic provides some details about the kubectl-infinispan plugin for clients. Tip Use the --help argument to view the complete list of available options and descriptions for each command. For example, oc infinispan create cluster --help prints all command options for creating Data Grid clusters. Command Description oc infinispan install Creates Data Grid Operator subscriptions and installs into the global namespace by default. oc infinispan create cluster Creates Data Grid clusters. oc infinispan get clusters Displays running Data Grid clusters. oc infinispan shell Starts an interactive remote shell session on a Data Grid cluster. oc infinispan delete cluster Removes Data Grid clusters. oc infinispan uninstall Removes Data Grid Operator installations and all managed resources.
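As a minimal usage sketch of the commands listed in the table above, the cluster name example-cluster is an assumption for illustration; run any of these commands with --help to confirm the exact arguments supported by your CLI version.
oc infinispan install
oc infinispan create cluster example-cluster
oc infinispan get clusters
oc infinispan shell example-cluster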
[ "cp redhat-datagrid-cli kubectl-infinispan", "plugin list The following compatible plugins are available: /path/to/kubectl-infinispan", "infinispan --help" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_guide/installing-native-cli-plugin
Chapter 2. OpenShift Data Foundation deployed using local storage devices
Chapter 2. OpenShift Data Foundation deployed using local storage devices 2.1. Replacing storage nodes on bare metal infrastructure To replace an operational node, see Section 2.1.1, "Replacing an operational node on bare metal user-provisioned infrastructure" . To replace a failed node, see Section 2.1.2, "Replacing a failed node on bare metal user-provisioned infrastructure" . 2.1.1. Replacing an operational node on bare metal user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Drain the node: Delete the node: Get a new bare-metal machine with the required infrastructure. See Installing on bare metal . Important For information about how to replace a master node when you have installed OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, see the Backup and Restore guide in the OpenShift Container Platform documentation. Create a new OpenShift Container Platform node using the new bare-metal machine. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . 
The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.1.2. Replacing a failed node on bare metal user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Delete the node: Get a new bare-metal machine with the required infrastructure. See Installing on bare metal . Important For information about how to replace a master node when you have installed OpenShift Data Foundation on a three-node OpenShift compact bare-metal cluster, see the Backup and Restore guide in the OpenShift Container Platform documentation. Create a new OpenShift Container Platform node using the new bare-metal machine. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . 
Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.2. Replacing storage nodes on IBM Z or IBM(R) LinuxONE infrastructure You can choose one of the following procedures to replace storage nodes: Section 2.2.1, "Replacing operational nodes on IBM Z or IBM(R) LinuxONE infrastructure" . 
Section 2.2.2, "Replacing failed nodes on IBM Z or IBM(R) LinuxONE infrastructure" . 2.2.1. Replacing operational nodes on IBM Z or IBM(R) LinuxONE infrastructure Use this procedure to replace an operational node on IBM Z or IBM(R) LinuxONE infrastructure. Procedure Identify the node and get labels on the node to be replaced. Make a note of the rack label. Identify the mon (if any) and object storage device (OSD) pods that are running in the node to be replaced. Scale down the deployments of the pods identified in the step. For example: Mark the nodes as unschedulable. Remove the pods which are in Terminating state. Drain the node. Delete the node. Get a new IBM Z storage node as a replacement. Check for certificate signing requests (CSRs) related to OpenShift Data Foundation that are in Pending state: Approve all required OpenShift Data Foundation CSRs for the new node: Click Compute Nodes in OpenShift Web Console, confirm if the new node is in Ready state. Apply the openshift-storage label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels Add cluster.ocs.openshift.io/openshift-storage and click Save . From Command line interface Execute the following command to apply the OpenShift Data Foundation label to the new node: Add a new worker node to localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node. Remember to save before exiting the editor. In the above example, server3.example.com was removed and newnode.example.com is the new node. Determine which localVolumeSet to edit. Replace local-storage-project in the following commands with the name of your local storage project. The default project name is openshift-local-storage in OpenShift Data Foundation 4.6 and later. versions use local-storage by default. Update the localVolumeSet definition to include the new node and remove the failed node. Remember to save before exiting the editor. In the above example, server3.example.com was removed and newnode.example.com is the new node. Verify that the new localblock PV is available. Change to the openshift-storage project. Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required. Identify the PVC as afterwards we need to delete PV associated with that specific PVC. where, osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix . In this example, the deployment name is rook-ceph-osd-1 . Example output: In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc . Remove the failed OSD from the cluster. You can remove more than one OSD by adding comma separated OSD IDs in the command. (For example: FAILED_OSD_IDS=0,1,2) Warning This step results in OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal pod. A status of Completed confirms that the OSD removal job succeeded. Note If ocs-osd-removal fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: It may be necessary to manually cleanup the removed OSD as follows: Delete the PV associated with the failed node. Identify the PV associated with the PVC. The PVC name must be identical to the name that is obtained while removing the failed OSD from the cluster. If there is a PV in Released state, delete it. 
For example: Identify the crashcollector pod deployment. If there is an existing crashcollector pod deployment, delete it. Delete the ocs-osd-removal job. Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that new Object Storage Device (OSD) pods are running on the replacement node: Optional: If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.2.2. Replacing failed nodes on IBM Z or IBM(R) LinuxONE infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that new Object Storage Device (OSD) pods are running on the replacement node: Optional: If data encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.3. Replacing storage nodes on IBM Power infrastructure For OpenShift Data Foundation, you can perform node replacement proactively for an operational node, and reactively for a failed node, for the deployments related to IBM Power. 2.3.1. Replacing an operational or failed storage node on IBM Power Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure and resources to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. 
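The command for this step is not preserved in this extract; a typical, hedged equivalent that lists a node together with its labels is shown below, using the same <node_name> placeholder.
oc get node <node_name> --show-labels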
Identify the mon (if any), and Object Storage Device (OSD) pods that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Delete the node: Get a new IBM Power machine with the required infrastructure. See Installing a cluster on IBM Power . Create a new OpenShift Container Platform node using the new IBM Power machine. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a newly added worker node to the localVolume . Determine the localVolume you need to edit: Example output: Update the localVolume definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, worker-0 is removed and worker-3 is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required. Identify the Persistent Volume Claim (PVC): where, <osd_id_to_remove> is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1 . Example output: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Warning This step results in the OSD being completely removed from the cluster. Ensure that the correct value of osd_id_to_remove is provided. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job has succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: Delete the PV associated with the failed node. Identify the PV associated with the PVC: Example output: The PVC name must be identical to the name that is obtained while removing the failed OSD from the cluster. 
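The lookup command itself is not included in this extract; a hedged example of finding the PV bound to a given claim is shown below, where <pvc_name> is the claim name noted earlier.
oc get pv | grep <pvc_name>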
If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created and is in the Running state: Example output: The OSD and monitor pod might take several minutes to get to the Running state. Verify that the new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4. Replacing storage nodes on VMware infrastructure To replace an operational node, see: Section 2.4.1, "Replacing an operational node on VMware user-provisioned infrastructure" . Section 2.4.2, "Replacing an operational node on VMware installer-provisioned infrastructure" . To replace a failed node,see: Section 2.4.3, "Replacing a failed node on VMware user-provisioned infrastructure" . Section 2.4.4, "Replacing a failed node on VMware installer-provisioned infrastructure" . 2.4.1. Replacing an operational node on VMware user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Drain the node: Delete the node: Log in to VMware vSphere and terminate the Virtual Machine (VM) that you have identified. Create a new VM on VMware vSphere with the required infrastructure. See Infrastructure requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. 
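A hedged form of the labeling command referenced in this step, using the same <new_node_name> placeholder, is:
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""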
Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4.2. Replacing an operational node on VMware installer-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . 
Get labels on the node: <node_name> Specify the name of node that you need to replace. Identify the mon (if any), and Object Storage Devices (OSDs) that are running in the node: Scale down the deployments of the pods that you identified in the step: For example: Mark the node as unschedulable: Drain the node: Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine deletion. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Physically add a new device to the node. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where the OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node. Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet you need to edit: Example output: Update the localVolumeSet definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging. For example: Identify the PV associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . 
Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created and is in the Running state. Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4.3. Replacing a failed node on VMware user-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Identify the node, and get the labels on the node that you need to replace: <node_name> Specify the name of node that you need to replace. Identify the monitor pod (if any), and OSDs that are running in the node that you need to replace: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Delete the node: Log in to VMware vSphere and terminate the Virtual Machine (VM) that you have identified. Create a new VM on VMware vSphere with the required infrastructure. See Infrastructure requirements . Create a new OpenShift Container Platform worker node using the new VM. Check for the Certificate Signing Requests (CSRs) related to OpenShift Container Platform that are in Pending state: Approve all the required OpenShift Container Platform CSRs for the new node: <certificate_name> Specify the name of the CSR. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed, and newnode.example.com is the new node. Determine the localVolumeSet to edit: Example output: Update the localVolumeSet definition to include the new node, and remove the failed node: Example output: Remember to save before exiting the editor. In the this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. 
You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails, and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the Persistent Volume (PV) associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.4.4. Replacing a failed node on VMware installer-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Get the labels on the node: <node_name> Specify the name of node that you need to replace. Identify the mon (if any) and Object Storage Devices (OSDs) that are running in the node: Scale down the deployments of the pods identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in Terminating state: Drain the node: Click Compute Machines . Search for the required machine. Besides the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes in the OpenShift Web Console. Confirm that the new node is in Ready state. Physically add a new device to the node. 
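Before applying the storage label, you can confirm from the command line that the replacement node has registered and is Ready. A short sketch, where newnode.example.com is an illustrative node name:

# Watch for the replacement node to register with the cluster (Ctrl+C to stop)
oc get nodes -w
# Once it appears, confirm that its STATUS column shows Ready
oc get node newnode.example.com -o wide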
Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where the OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Determine the localVolumeSet you need to edit. Example output: Update the localVolumeSet definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock PV is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the PV associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod deployment, delete it: Delete the ocs-osd-removal-job : Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created, and is in the Running state: Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.5. 
Replacing storage nodes on Red Hat Virtualization infrastructure To replace an operational node, see Section 2.5.1, "Replacing an operational node on Red Hat Virtualization installer-provisioned infrastructure" . To replace a failed node, see Section 2.5.2, "Replacing a failed node on Red Hat Virtualization installer-provisioned infrastructure" . 2.5.1. Replacing an operational node on Red Hat Virtualization installer-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with similar infrastructure, resources, and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Get the labels on the node: <node_name> Specify the name of the node that you need to replace. Identify the mon (if any), and Object Storage Devices (OSDs) that are running in the node: Scale down the deployments of the pods that you identified in the step: For example: Mark the node as unschedulable: Drain the node: Click Compute Machines . Search for the required machine. Beside the required machine, click Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes in the OpenShift web console. Confirm that the new node is in Ready state. Physically add one or more new devices to the node. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Identify the namespace where the OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . Update the localVolumeDiscovery definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Determine the localVolumeSet that you need to edit: Example output: Update the localVolumeSet definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod.
A status of Completed confirms that the OSD removal job has succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the PV associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod, delete it: Delete the ocs-osd-removal job: Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created and is in the Running state. Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 2.5.2. Replacing a failed node on Red Hat Virtualization installer-provisioned infrastructure Prerequisites Ensure that the replacement nodes are configured with the similar infrastructure, resources and disks to the node that you replace. You must be logged into the OpenShift Container Platform cluster. Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Get the labels on the node: <node_name> Specify the name of node that you need to replace. Identify the mon (if any) and Object Storage Devices (OSDs) that are running in the node: Scale down the deployments of the pods that you identified in the step: For example: Mark the node as unschedulable: Remove the pods which are in the Terminating state: Drain the node: Click Compute Machines . Search for the required machine. Besides the required machine, click the Action menu (...) Delete Machine . Click Delete to confirm the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes in the OpenShift web console. Confirm that the new node is in Ready state. Physically add the one or more new devices to the node. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: Identify the namespace where the OpenShift local storage operator is installed, and assign it to the local_storage_project variable: For example: Example output: Add a new worker node to the localVolumeDiscovery and localVolumeSet . 
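The namespace detection mentioned above is worth capturing in a shell variable before you edit the local volume resources, because those edits are run against that namespace. A minimal sketch based on the commands in this procedure (the echoed value is typically openshift-local-storage, but depends on your installation):

# Capture the namespace where the Local Storage Operator is installed
local_storage_project=$(oc get csv --all-namespaces | awk '{print $1}' | grep local)
echo "$local_storage_project"
# Subsequent edits are run against this namespace, for example:
# oc edit -n "$local_storage_project" localvolumediscovery auto-discover-devices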
Update the localVolumeDiscovery definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Determine the localVolumeSet you need to edit: Example output: Update the localVolumeSet definition to include the new node and remove the failed node: Example output: Remember to save before exiting the editor. In this example, server3.example.com is removed and newnode.example.com is the new node. Verify that the new localblock Persistent Volume (PV) is available: Example output: Navigate to the openshift-storage project: Remove the failed OSD from the cluster. You can specify multiple failed OSDs if required: <failed_osd_id> Is the integer in the pod name immediately after the rook-ceph-osd prefix. You can add comma separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2 . The FORCE_OSD_REMOVAL value must be changed to true in clusters that only have three OSDs, or clusters with insufficient space to restore all three replicas of the data after the OSD is removed. Verify that the OSD was removed successfully by checking the status of the ocs-osd-removal-job pod. A status of Completed confirms that the OSD removal job has succeeded. Ensure that the OSD removal is completed. Example output: Important If the ocs-osd-removal-job fails and the pod is not in the expected Completed state, check the pod logs for further debugging: For example: Identify the PV associated with the Persistent Volume Claim (PVC): Example output: If there is a PV in Released state, delete it: For example: Example output: Identify the crashcollector pod deployment: If there is an existing crashcollector pod, delete it: Delete the ocs-osd-removal job: Example output: Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all other required OpenShift Data Foundation pods are in Running state. Ensure that the new incremental mon is created and is in the Running state. Example output: OSD and monitor pod might take several minutes to get to the Running state. Verify that new OSD pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support .
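Taken together, the verification steps above reduce to a short command sequence. A sketch drawn from the commands in this chapter, where newnode.example.com stands in for the name of the replacement node:

# Check that three mon pods are Running, including the newly created one
oc get pod -n openshift-storage | grep mon

# Check that OSD pods are running on the replacement node
oc get pods -o wide -n openshift-storage | egrep -i newnode.example.com | egrep osd

# Optional: if cluster-wide encryption is enabled, inspect the node's block devices
oc debug node/newnode.example.com
# Inside the debug shell:
chroot /host
lsblk   # look for "crypt" beside the ocs-deviceset devices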
[ "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete node <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "USDoc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d9c5cbd6\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment 
--selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete node <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "USDoc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d9c5cbd6\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc get pods -A -o wide | grep -i 
<node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete node <node_name>", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc edit -n local-storage-project localvolumediscovery auto-discover-devices [...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n local-storage-project localvolumeset NAME AGE localblock 25h", "oc edit -n local-storage-project localvolumeset localblock [...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get pv | grep localblock CAPA- ACCESS RECLAIM STORAGE NAME CITY MODES POLICY STATUS CLAIM CLASS AGE local-pv- 931Gi RWO Delete Bound openshift-storage/ localblock 25h 3e8964d3 ocs-deviceset-2-0 -79j94 local-pv- 931Gi RWO Delete Bound openshift-storage/ localblock 25h 414755e0 ocs-deviceset-1-0 -959rp local-pv- 931Gi RWO Delete Available localblock 3m24s b481410 local-pv- 931Gi RWO Delete Bound openshift-storage/ localblock 25h d9c5cbd6 ocs-deviceset-0-0 -nvs68", "oc project openshift-storage", "osd_id_to_remove=1 oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{osd_id_to_remove} | grep ceph.rook.io/pvc", "ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD{osd_id_to_remove} |oc create -f -", "oc get pod -l job-name=ocs-osd-removal- osd_id_to_remove -n openshift-storage", "oc logs -l job-name=ocs-osd-removal- osd_id_to_remove -n openshift-storage --tail=-1", "ceph osd crush remove osd.osd_id_to_remove ceph osd rm osd_id_to_remove ceph auth del osd.osd_id_to_remove ceph osd crush rm osd_id_to_remove", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released local-pv-5c9b8982 500Gi RWO Delete Released openshift-storage/ocs-deviceset-localblock-0-data-0-g2mmc localblock 24h worker-0", "oc delete pv <persistent-volume>", "oc delete pv local-pv-5c9b8982 persistentvolume \"local-pv-5c9b8982\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name> -n openshift-storage", "oc delete job ocs-osd-removal-USD{osd_id_to_remove}", "job.batch \"ocs-osd-removal-0\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-a --replicas=0 -n 
openshift-storage", "oc scale deployment rook-ceph-osd-1 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete node <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=''", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc get -n USDlocal_storage_project localvolume", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolume localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: #- worker-0 - worker-1 - worker-2 - worker-3 [...]", "oc get pv | grep localblock", "NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS AGE local-pv-3e8964d3 500Gi RWO Delete Bound ocs-deviceset-localblock-2-data-0-mdbg9 localblock 25h local-pv-414755e0 500Gi RWO Delete Bound ocs-deviceset-localblock-1-data-0-4cslf localblock 25h local-pv-b481410 500Gi RWO Delete Available localblock 3m24s local-pv-5c9b8982 500Gi RWO Delete Bound ocs-deviceset-localblock-0-data-0-g2mmc localblock 25h", "oc project openshift-storage", "osd_id_to_remove=1", "oc get -n openshift-storage -o yaml deployment rook-ceph-osd-USD{ <osd_id_to_remove> } | grep ceph.rook.io/pvc", "ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-5c9b8982 500Gi RWO Delete Released openshift-storage/ocs-deviceset-localblock-0-data-0-g2mmc localblock 24h worker-0", "oc delete pv <persistent_volume>", "oc delete pv local-pv-5c9b8982", "persistentvolume \"local-pv-5c9b8982\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-b-74f6dc9dd6-4llzq 1/1 Running 0 6h14m rook-ceph-mon-c-74948755c-h7wtx 1/1 Running 0 4h24m rook-ceph-mon-d-598f69869b-4bv49 1/1 Running 0 162m", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods 
-n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete node <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "USDoc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d9c5cbd6\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm 
cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d9c5cbd6\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc delete node <node_name>", "oc get csr", "oc adm certificate approve <certificate_name>", "oc label node <new_node_name> 
cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "USDoc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d9c5cbd6\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep _<node_name>_", "oc get pods -n openshift-storage -o wide | grep -i _<node_name>_", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage", "oc adm cordon _<node_name>_", "oc get pods -A -o wide | grep -i _<node_name>_ | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'", "oc adm drain _<node_name>_ --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node _<new_node_name>_ cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo 
USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - **newnode.example.com** [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - **newnode.example.com** [...]", "oc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 1490Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h compute-1", "oc delete pv _<persistent_volume>_", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d9c5cbd6\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name=_<failed_node_name>_ -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=_<failed_node_name>_ -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name=<node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] 
nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "USDoc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage -tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 512Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h server3.example.com", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d6bf175b\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name>_ -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk", "oc get nodes --show-labels | grep <node_name>", "oc get pods -n openshift-storage -o wide | grep -i <node_name>", "oc scale deployment rook-ceph-mon-c --replicas=0 -n openshift-storage", "oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage", "oc scale deployment --selector=app=rook-ceph-crashcollector,node_name= <node_name> --replicas=0 -n openshift-storage", "oc adm cordon <node_name>", "oc get pods -A -o wide | grep -i <node_name> | awk '{if (USD4 == \"Terminating\") system (\"oc -n \" USD1 \" delete pods \" USD2 \" --grace-period=0 \" \" --force \")}'", "oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "local_storage_project=USD(oc get csv --all-namespaces | awk '{print USD1}' | grep local)", "echo USDlocal_storage_project", "openshift-local-storage", "oc edit -n USDlocal_storage_project localvolumediscovery auto-discover-devices", "[...] 
nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "oc get -n USDlocal_storage_project localvolumeset", "NAME AGE localblock 25h", "oc edit -n USDlocal_storage_project localvolumeset localblock", "[...] nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - server1.example.com - server2.example.com #- server3.example.com - newnode.example.com [...]", "USDoc get pv | grep localblock | grep Available", "local-pv-551d950 512Gi RWO Delete Available localblock 26s", "oc project openshift-storage", "oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS= <failed_osd_id> -p FORCE_OSD_REMOVAL=true | oc create -f -", "oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1 | egrep -i 'completed removal'", "2022-05-10 06:50:04.501511 I | cephosd: completed removal of OSD 0", "oc logs -l job-name=ocs-osd-removal-job -n openshift-storage --tail=-1", "oc get pv -L kubernetes.io/hostname | grep localblock | grep Released", "local-pv-d6bf175b 512Gi RWO Delete Released openshift-storage/ocs-deviceset-0-data-0-6c5pw localblock 2d22h server3.example.com", "oc delete pv <persistent_volume>", "oc delete pv local-pv-d6bf175b", "persistentvolume \"local-pv-d6bf175b\" deleted", "oc get deployment --selector=app=rook-ceph-crashcollector,node_name= <failed_node_name> -n openshift-storage", "oc delete deployment --selector=app=rook-ceph-crashcollector,node_name=<failed_node_name>_ -n openshift-storage", "oc delete -n openshift-storage job ocs-osd-removal-job", "job.batch \"ocs-osd-removal-job\" deleted", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1", "oc get pod -n openshift-storage | grep mon", "rook-ceph-mon-a-cd575c89b-b6k66 2/2 Running 0 38m rook-ceph-mon-b-6776bc469b-tzzt8 2/2 Running 0 38m rook-ceph-mon-d-5ff5d488b5-7v8xh 2/2 Running 0 4m8s", "oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd", "oc debug node/ <node_name>", "chroot /host", "lsblk" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/replacing_nodes/openshift_data_foundation_deployed_using_local_storage_devices
4.6. Troubleshooting Expiration
4.6. Troubleshooting Expiration If expiration does not appear to be working, it may be due to an entry being marked for expiration but not being removed. Multiple cache operations such as put() are passed a life span value as a parameter. This value defines the interval after which the entry must expire. In cases where eviction is not configured and the life span interval expires, it can appear as if Red Hat JBoss Data Grid has not removed the entry. For example, when viewing JMX statistics, such as the number of entries, you may see an out-of-date count, or the persistent store associated with JBoss Data Grid may still contain this entry. Behind the scenes, JBoss Data Grid has marked it as an expired entry, but has not removed it. Removal of such entries happens as follows: An entry is passivated/overflowed to disk and is discovered to have expired. The expiration maintenance thread discovers that an entry it has found is expired. Any attempt to use get() or containsKey() for the expired entry causes JBoss Data Grid to return a null value. The expired entry is later removed by the expiration thread.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/expiration_troubleshooting1
Chapter 122. KafkaMirrorMakerTemplate schema reference
Chapter 122. KafkaMirrorMakerTemplate schema reference Used in: KafkaMirrorMakerSpec The schema defines the following properties (property, property type, description):
deployment (DeploymentTemplate): Template for Kafka MirrorMaker Deployment.
pod (PodTemplate): Template for Kafka MirrorMaker Pods.
podDisruptionBudget (PodDisruptionBudgetTemplate): Template for Kafka MirrorMaker PodDisruptionBudget.
mirrorMakerContainer (ContainerTemplate): Template for Kafka MirrorMaker container.
serviceAccount (ResourceTemplate): Template for the Kafka MirrorMaker service account.
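To show where these template properties sit in practice, here is a hedged sketch of a KafkaMirrorMaker resource that sets labels through the deployment and pod templates. The surrounding fields (cluster addresses, group ID, include pattern) are illustrative assumptions, not taken from this reference; check the KafkaMirrorMakerSpec schema for the properties your deployment requires:

# Apply an illustrative KafkaMirrorMaker whose template section customizes generated resources
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 1
  consumer:
    bootstrapServers: source-cluster-kafka-bootstrap:9092   # assumed source cluster address
    groupId: my-mirror-maker-group
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092   # assumed target cluster address
  include: ".*"
  template:
    deployment:
      metadata:
        labels:
          app.kubernetes.io/part-of: mirroring
    pod:
      metadata:
        labels:
          app.kubernetes.io/part-of: mirroring
EOF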
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaMirrorMakerTemplate-reference
Chapter 2. Creating the environment file
Chapter 2. Creating the environment file The environment file that you create to configure custom back ends contains the settings for each back end that you want to define. It also contains other settings that are relevant to the deployment of a custom back end. For more information about environment files, see Environment Files in the Advanced Overcloud Customization guide. The following sample environment file defines two NetApp back ends, netapp1 and netapp2 : /home/stack/templates/custom-env.yaml 1 The following parameters are set to false , which disables other back end types: CinderEnableIscsiBackend : other iSCSI back ends. CinderEnableRbdBackend : Red Hat Ceph. CinderEnableNfsBackend : NFS. NovaEnableRbdBackend : ephemeral Red Hat Ceph storage. 2 The GlanceBackend parameter sets what the Image service uses to store images. The following values are supported: file : store images on /var/lib/glance/images on each Controller node. swift : use the Object Storage service for image storage. cinder : use the Block Storage service for image storage. 3 ControllerExtraConfig defines custom settings that are applied to all Controller nodes. The cinder::config::cinder_config class means the settings must be applied to the Block Storage (cinder) service. 4 The netapp1/volume_driver and netapp2/volume_driver settings follow the section / setting syntax. With the Block Storage service, each back end is defined in its own section in /etc/cinder/cinder.conf . Each setting that uses the netapp1 prefix is defined in a new [netapp1] back end section. 5 netapp2 settings are defined in a separate [netapp2] section. 6 The value prefix configures the preceding setting. 7 The cinder_user_enabled_backends class sets and enables custom back ends. Use this class only for user-enabled back ends, specifically, those defined in the cinder::config::cinder_config class. Do not use cinder_user_enabled_backends to list back ends that you can enable natively with director. These include Red Hat Ceph, NFS, and single back ends for supported NetApp or Dell appliances. For example, if you enable a Red Hat Ceph back end, do not list it in cinder_user_enabled_backends , enable it by setting CinderEnableRbdBackend to true . Note For more information about defining a Red Hat Ceph back end for OpenStack Block Storage, see the Deploying an Overcloud with Containerized Red Hat Ceph guide. To see the resulting /etc/cinder/cinder.conf settings from /home/stack/templates/custom-env.yaml , see Appendix A, Appendix
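After the environment file exists, it is typically passed to the overcloud deployment command so that director renders the back end settings into /etc/cinder/cinder.conf. A minimal sketch follows; real deployments usually include additional environment files, roles, and options beyond what is shown here:

# Include the custom back end environment file in the overcloud deployment
openstack overcloud deploy --templates \
    -e /home/stack/templates/custom-env.yaml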
[ "parameter_defaults: # 1 CinderEnableIscsiBackend : false CinderEnableRbdBackend : false CinderEnableNfsBackend : false NovaEnableRbdBackend : false GlanceBackend : file # 2 ControllerExtraConfig: # 3 cinder::config::cinder_config: netapp1 /volume_driver: # 4 value: cinder.volume.drivers.netapp.common.NetAppDriver netapp1/netapp_storage_family: value: ontap_7mode netapp1/netapp_storage_protocol: value: iscsi netapp1/netapp_server_hostname: value: 10.35.64.11 netapp1/netapp_server_port: value: 80 netapp1/netapp_login: value: root netapp1/netapp_password: value: p@USDUSDw0rd netapp1/volume_backend_name: value: netapp1 netapp2 /volume_driver: # 5 value : cinder.volume.drivers.netapp.common.NetAppDriver # 6 netapp2/netapp_storage_family: value: ontap_7mode netapp2/netapp_storage_protocol: value: iscsi netapp2/netapp_server_hostname: value: 10.35.64.11 netapp2/netapp_server_port: value: 80 netapp2/netapp_login: value: root netapp2/netapp_password: value: p@USDUSDw0rd netapp2/volume_backend_name: value: netapp2 cinder_user_enabled_backends: ['netapp1','netapp2'] # 7" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/custom_block_storage_back_end_deployment_guide/envfile
Chapter 10. Using JBoss EAP high availability in Microsoft Azure
Chapter 10. Using JBoss EAP high availability in Microsoft Azure Microsoft Azure does not support JGroups discovery protocols that are based on UDP multicast. Although you may use other JGroups discovery protocols (such as a static configuration ( TCPPING ), a shared database ( JDBC_PING ), shared file system-based ping ( FILE_PING ), or TCPGOSSIP ), we strongly recommend that you use the shared file discovery protocol specifically developed for Azure: AZURE_PING . 10.1. AZURE_PING configuration for JBoss EAP high availability This section describes configuring your JBoss EAP cluster to use the AZURE_PING JGroups discovery protocol. Ensure that you meet the prerequisites when creating your virtual machines . AZURE_PING uses a common blob container in a Microsoft Azure storage account. If you do not already have a blob container that AZURE_PING can use, create one that your virtual machines can access. After creating your blob container, you will need the following information to configure AZURE_PING: storage_account_name : the name of the Microsoft Azure storage account that contains your blob container. storage_access_key : the secret access key of the storage account. container : the name of the blob container to use for PING data. Important The following instructions configure AZURE_PING using a UDP JGroups stack. If you will be configuring JBoss EAP messaging high availability in Azure , you must configure AZURE_PING in a TCP JGroups stack instead. To configure JBoss EAP to use AZURE_PING as the JGroups discovery protocol, you can either use a preconfigured example JBoss EAP configuration file , or modify an existing configuration . 10.2. Use of the example configuration file to configure high availability JBoss EAP includes example configuration files for configuring clustering of standalone servers in Microsoft Azure. These files are located in EAP_HOME/docs/examples/configs/ and are standalone-azure-ha.xml and standalone-azure-full-ha.xml . Note See the JBoss EAP Configuration Guide for an explanation of the differences between the server profiles. These sample configuration files are preconfigured for using clustering in Microsoft Azure, and all that is needed is to specify the values for your Azure storage account and blob container. Copy your desired example configuration file to EAP_HOME/standalone/configuration/ . 10.3. Modifying an existing server high availability configuration If you are modifying an existing JBoss EAP high availability configuration, the following changes to the jgroups subsystem are required. Procedure Launch the management CLI and embed a server to make offline changes to your chosen configuration file. For example: By default, JGroups uses the UDP stack. If you were using another stack, change back to using the UDP stack: Execute the following batch of commands to remove the existing UDP stack and insert a new UDP stack configured for Microsoft Azure: Important The encoding for storage_access_key used in the following command must be Base64. Note If you want to store the values of your Microsoft Azure storage account and blob container in your configuration file, replace the system property references in the above configuration with the values from your Azure environment. In the following command, examples for starting JBoss EAP, the system properties are used. 
The stack XML in your configuration file should look like the following: <stack name="udp"> <transport type="UDP" socket-binding="jgroups-udp"/> <protocol type="RED"/> <protocol type="azure.AZURE_PING"> <property name="storage_account_name">USD{jboss.jgroups.azure_ping.storage_account_name}</property> <property name="storage_access_key">USD{jboss.jgroups.azure_ping.storage_access_key}</property> <property name="container">USD{jboss.jgroups.azure_ping.container}</property> </protocol> <protocol type="MERGE3"/> <socket-protocol type="FD_SOCK2" socket-binding="jgroups-udp-fd"/> <protocol type="FD_ALL3"/> <protocol type="VERIFY_SUSPECT2"/> <protocol type="pbcast.NAKACK2"/> <protocol type="UNICAST3"/> <protocol type="pbcast.STABLE"/> <protocol type="pbcast.GMS"/> <protocol type="UFC"/> <protocol type="MFC"/> <protocol type="FRAG4"/> </stack> Stop the embedded server and exit the management CLI: 10.4. Starting JBoss EAP high availability in Microsoft Azure To start JBoss EAP using high availability in Microsoft Azure, you must: Use a configuration file that has been configured with the AZURE_PING discovery protocol and specify the required values of your Microsoft Azure storage account and blob container. Bind the private interface to the Microsoft Azure internal IP address that is used for clustering traffic. You can do this at startup, as shown below, or as a set configuration shown in the JBoss EAP Configuration Guide . Warning For security reasons, you must ensure that you do not expose clustering traffic to unintended networks. You can do this by restricting the endpoints to your Microsoft Azure virtual network or by creating a dedicated virtual network and dedicated virtual machine NICs for clustering traffic. Procedure Start your JBoss EAP high availability instance using the following command. If you stored your Microsoft Azure storage account and blob container values in your configuration file, you can omit the -Djboss.jgroups.azure_ping system property definitions. For example: Note As JBoss EAP subsystems only start when needed, you must deploy a distributable application to your JBoss EAP servers to start the high availability JBoss EAP subsystems. After you start a second JBoss EAP instance in a cluster, you should see logs similar to the following in the console log of the first server in the cluster: 10.5. Clean stale discovery files in your blob container If a JBoss EAP cluster that uses AZURE_PING is shut down abnormally, for example, using kill -9 to end the JBoss EAP process, some stale discovery files may be left in your blob container. These files are usually cleaned up in a graceful cluster shutdown, but if left there from an abnormal shutdown, it may impact startup performance of cluster members attempting to contact nodes that are no longer online. If this is a problem for you, you can set the following configuration to make the cluster coordinator remove and refresh all discovery files whenever the cluster view changes. Note Alternatively, if cleaning your container on each view change is not ideal, you can reduce the number of join attempts for a node attempting to join a cluster. The default number of join attempts is 10 . For example, to set the number of join attempts to 3 : The stale discovery files will still be present, but a node attempting to join a cluster will not spend as much time attempting to contact nodes that are no longer online. Revised on 2025-03-11 14:21:53 UTC
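A blob container of the kind AZURE_PING requires can be created with the Azure CLI before you configure the subsystem. The following is a minimal sketch rather than part of the product procedure: the resource group name eap-cluster-rg, the storage account name eapjgroupsstorage, and the container name eap-ping are placeholder assumptions, and the key printed by the second command is the value to supply as storage_access_key.
# Create a storage account in an existing resource group (all names are placeholders).
az storage account create --name eapjgroupsstorage --resource-group eap-cluster-rg --sku Standard_LRS
# Retrieve the secret access key for the account; use this value for storage_access_key.
az storage account keys list --account-name eapjgroupsstorage --resource-group eap-cluster-rg --query "[0].value" --output tsv
# Create the blob container that AZURE_PING uses for discovery data.
az storage container create --name eap-ping --account-name eapjgroupsstorage --account-key "<STORAGE_ACCESS_KEY>"
Once the cluster is running, az storage blob list --container-name eap-ping --account-name eapjgroupsstorage --account-key "<STORAGE_ACCESS_KEY>" --output table is a quick way to confirm that the members are writing their discovery entries.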
[ "EAP_HOME/bin/jboss-cli.sh [disconnected /] embed-server --server-config=standalone-ha.xml", "[standalone@embedded /] /subsystem=jgroups/channel=ee:write-attribute(name=stack,value=udp)", "batch /subsystem=jgroups/stack=udp:remove /subsystem=jgroups/stack=udp:add() /subsystem=jgroups/stack=udp/transport=UDP:add(socket-binding=jgroups-udp,properties={ip_mcast=false}) /subsystem=jgroups/stack=udp/protocol=azure.AZURE_PING:add(properties={storage_account_name=\"USD{jboss.jgroups.azure_ping.storage_account_name}\", storage_access_key=\"USD{jboss.jgroups.azure_ping.storage_access_key}\", container=\"USD{jboss.jgroups.azure_ping.container}\"}) /subsystem=jgroups/stack=udp/protocol=MERGE3:add /subsystem=jgroups/stack=udp/protocol=FD_SOCK2:add(socket-binding=jgroups-udp-fd) /subsystem=jgroups/stack=udp/protocol=FD_ALL3:add /subsystem=jgroups/stack=udp/protocol=VERIFY_SUSPECT2:add /subsystem=jgroups/stack=udp/protocol=pbcast.NAKACK2:add(properties={use_mcast_xmit=false,use_mcast_xmit_req=false}) /subsystem=jgroups/stack=udp/protocol=UNICAST3:add /subsystem=jgroups/stack=udp/protocol=pbcast.STABLE:add /subsystem=jgroups/stack=udp/protocol=pbcast.GMS:add /subsystem=jgroups/stack=udp/protocol=UFC:add /subsystem=jgroups/stack=udp/protocol=MFC:add /subsystem=jgroups/stack=udp/protocol=FRAG4:add run-batch", "<stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <protocol type=\"RED\"/> <protocol type=\"azure.AZURE_PING\"> <property name=\"storage_account_name\">USD{jboss.jgroups.azure_ping.storage_account_name}</property> <property name=\"storage_access_key\">USD{jboss.jgroups.azure_ping.storage_access_key}</property> <property name=\"container\">USD{jboss.jgroups.azure_ping.container}</property> </protocol> <protocol type=\"MERGE3\"/> <socket-protocol type=\"FD_SOCK2\" socket-binding=\"jgroups-udp-fd\"/> <protocol type=\"FD_ALL3\"/> <protocol type=\"VERIFY_SUSPECT2\"/> <protocol type=\"pbcast.NAKACK2\"/> <protocol type=\"UNICAST3\"/> <protocol type=\"pbcast.STABLE\"/> <protocol type=\"pbcast.GMS\"/> <protocol type=\"UFC\"/> <protocol type=\"MFC\"/> <protocol type=\"FRAG4\"/> </stack>", "[standalone@embedded /] stop-embedded-server [disconnected /] exit", "EAP_HOME/bin/standalone.sh -b <IP_ADDRESS> -bprivate <IP_ADDRESS> --server-config= <EAP_CONFIG_FILE> .xml -Djboss.jgroups.azure_ping.storage_account_name= <STORAGE_ACCOUNT_NAME> -Djboss.jgroups.azure_ping.storage_access_key= <STORAGE_ACCESS_KEY> -Djboss.jgroups.azure_ping.container= <CONTAINER_NAME>", "EAP_HOME/bin/standalone.sh -b 172.28.0.2 -bprivate 172.28.0.2 --server-config=standalone-azure-ha.xml -Djboss.jgroups.azure_ping.storage_account_name=my_storage_account -Djboss.jgroups.azure_ping.storage_access_key=y7+2x7P68pQse9MNh58Bkk5po9OGzeJc+0IRqYcQ9Cr/Sp4xiUFJVlbY+MGXJRNx3syksikwm4tOYlFgjvoCmw== -Djboss.jgroups.azure_ping.container=my_blob_container", "INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (thread-2,ee,eap-server-1) ISPN000094: Received new cluster view for channel server: [eap-server-1|1] (2) [eap-server-1, eap-server-2]", "/subsystem=jgroups/stack=udp/protocol=azure.AZURE_PING/property=remove_all_data_on_view_change:add(value=true)", "/subsystem=jgroups/stack=udp/protocol=pbcast.GMS/property=max_join_attempts:add(value=3)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_red_hat_jboss_enterprise_application_platform_in_microsoft_azure/using-server-high-availability-in-microsoft-azure_default
Chapter 6. File Integrity Operator
Chapter 6. File Integrity Operator 6.1. File Integrity Operator release notes The File Integrity Operator for OpenShift Container Platform continually runs file integrity checks on RHCOS nodes. These release notes track the development of the File Integrity Operator in the OpenShift Container Platform. For an overview of the File Integrity Operator, see Understanding the File Integrity Operator . To access the latest release, see Updating the File Integrity Operator . 6.1.1. OpenShift File Integrity Operator 1.2.1 The following advisory is available for the OpenShift File Integrity Operator 1.2.1: RHBA-2023:1684 OpenShift File Integrity Operator Bug Fix Update This release includes updated container dependencies. 6.1.2. OpenShift File Integrity Operator 1.2.0 The following advisory is available for the OpenShift File Integrity Operator 1.2.0: RHBA-2023:1273 OpenShift File Integrity Operator Enhancement Update 6.1.2.1. New features and enhancements The File Integrity Operator Custom Resource (CR) now contains an initialDelay feature that specifies the number of seconds to wait before starting the first AIDE integrity check. For more information, see Creating the FileIntegrity custom resource . The File Integrity Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the File Integrity Operator . 6.1.3. OpenShift File Integrity Operator 1.0.0 The following advisory is available for the OpenShift File Integrity Operator 1.0.0: RHBA-2023:0037 OpenShift File Integrity Operator Bug Fix Update 6.1.4. OpenShift File Integrity Operator 0.1.32 The following advisory is available for the OpenShift File Integrity Operator 0.1.32: RHBA-2022:7095 OpenShift File Integrity Operator Bug Fix Update 6.1.4.1. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand from which namespace the alert originated. Now, the Operator sets the appropriate namespace, providing more information about the alert. ( BZ#2112394 ) Previously, The File Integrity Operator did not update the metrics service on Operator startup, causing the metrics targets to be unreachable. With this release, the File Integrity Operator now ensures the metrics service is updated on Operator startup. ( BZ#2115821 ) 6.1.5. OpenShift File Integrity Operator 0.1.30 The following advisory is available for the OpenShift File Integrity Operator 0.1.30: RHBA-2022:5538 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.1.5.1. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand where the alert originated. Now, the Operator sets the appropriate namespace, increasing understanding of the alert. ( BZ#2101393 ) 6.1.6. OpenShift File Integrity Operator 0.1.24 The following advisory is available for the OpenShift File Integrity Operator 0.1.24: RHBA-2022:1331 OpenShift File Integrity Operator Bug Fix 6.1.6.1. New features and enhancements You can now configure the maximum number of backups stored in the FileIntegrity Custom Resource (CR) with the config.maxBackups attribute. This attribute specifies the number of AIDE database and log backups left over from the re-init process to keep on the node. Older backups beyond the configured number are automatically pruned. The default is set to five backups. 6.1.6.2. 
Bug fixes Previously, upgrading the Operator from versions older than 0.1.21 to 0.1.22 could cause the re-init feature to fail. This was a result of the Operator failing to update configMap resource labels. Now, upgrading to the latest version fixes the resource labels. ( BZ#2049206 ) Previously, when enforcing the default configMap script contents, the wrong data keys were compared. This resulted in the aide-reinit script not being updated properly after an Operator upgrade, and caused the re-init process to fail. Now, daemonSets run to completion and the AIDE database re-init process executes successfully. ( BZ#2072058 ) 6.1.7. OpenShift File Integrity Operator 0.1.22 The following advisory is available for the OpenShift File Integrity Operator 0.1.22: RHBA-2022:0142 OpenShift File Integrity Operator Bug Fix 6.1.7.1. Bug fixes Previously, a system with a File Integrity Operator installed might interrupt the OpenShift Container Platform update, due to the /etc/kubernetes/aide.reinit file. This occurred if the /etc/kubernetes/aide.reinit file was present, but later removed prior to the ostree validation. With this update, /etc/kubernetes/aide.reinit is moved to the /run directory so that it does not conflict with the OpenShift Container Platform update. ( BZ#2033311 ) 6.1.8. OpenShift File Integrity Operator 0.1.21 The following advisory is available for the OpenShift File Integrity Operator 0.1.21: RHBA-2021:4631 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.1.8.1. New features and enhancements The metrics related to FileIntegrity scan results and processing metrics are displayed on the monitoring dashboard on the web console. The results are labeled with the prefix of file_integrity_operator_ . If a node has an integrity failure for more than 1 second, the default PrometheusRule provided in the operator namespace alerts with a warning. The following dynamic Machine Config Operator and Cluster Version Operator related filepaths are excluded from the default AIDE policy to help prevent false positives during node updates: /etc/machine-config-daemon/currentconfig /etc/pki/ca-trust/extracted/java/cacerts /etc/cvo/updatepayloads /root/.kube The AIDE daemon process has stability improvements over v0.1.16, and is more resilient to errors that might occur when the AIDE database is initialized. 6.1.8.2. Bug fixes Previously, when the Operator automatically upgraded, outdated daemon sets were not removed. With this release, outdated daemon sets are removed during the automatic upgrade. 6.1.9. Additional resources Understanding the File Integrity Operator 6.2. Installing the File Integrity Operator 6.2.1. Installing the File Integrity Operator using the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the File Integrity Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-file-integrity namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-file-integrity namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. 
Navigate to the Workloads Pods page and check the logs in any pods in the openshift-file-integrity project that are reporting issues. 6.2.2. Installing the File Integrity Operator using the CLI Prerequisites You must have admin privileges. Procedure Create a Namespace object YAML file by running: USD oc create -f <file-name>.yaml Example output apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: openshift-file-integrity Create the OperatorGroup object YAML file: USD oc create -f <file-name>.yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity Create the Subscription object YAML file: USD oc create -f <file-name>.yaml Example output apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: "stable" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-file-integrity Verify that the File Integrity Operator is up and running: USD oc get deploy -n openshift-file-integrity 6.2.3. Additional resources The File Integrity Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.3. Updating the File Integrity Operator As a cluster administrator, you can update the File Integrity Operator on your OpenShift Container Platform cluster. 6.3.1. Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 6.3.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. 
Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 6.3.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any updates requiring approval are displayed next to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.4. Understanding the File Integrity Operator The File Integrity Operator is an OpenShift Container Platform Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged advanced intrusion detection environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods. Important Currently, only Red Hat Enterprise Linux CoreOS (RHCOS) nodes are supported. 6.4.1. Creating the FileIntegrity custom resource An instance of a FileIntegrity custom resource (CR) represents a set of continuous file integrity scans for one or more nodes. Each FileIntegrity CR is backed by a daemon set running AIDE on the nodes matching the FileIntegrity CR specification. Procedure Create the following example FileIntegrity CR named worker-fileintegrity.yaml to enable scans on worker nodes: Example FileIntegrity CR apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: "" tolerations: 2 - key: "myNode" operator: "Exists" effect: "NoSchedule" config: 3 name: "myconfig" namespace: "openshift-file-integrity" key: "config" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7 1 Defines the selector for scheduling node scans. 2 Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration allowing running on main and infra nodes is applied. 3 Define a ConfigMap containing an AIDE configuration to use. 4 The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node might be resource intensive, so it can be useful to specify a longer interval. Default is 900 seconds (15 minutes). 
5 The maximum number of AIDE database and log backups (leftover from the re-init process) to keep on a node. Older backups beyond this number are automatically pruned by the daemon. Default is set to 5. 6 The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. 7 The running status of the FileIntegrity instance. Statuses are Initializing , Pending , or Active . Initializing The FileIntegrity object is currently initializing or re-initializing the AIDE database. Pending The FileIntegrity deployment is still being created. Active The scans are active and ongoing. Apply the YAML file to the openshift-file-integrity namespace: USD oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity Verification Confirm the FileIntegrity object was created successfully by running the following command: USD oc get fileintegrities -n openshift-file-integrity Example output NAME AGE worker-fileintegrity 14s 6.4.2. Checking the FileIntegrity custom resource status The FileIntegrity custom resource (CR) reports its status through the . status.phase subresource. Procedure To query the FileIntegrity CR status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status.phase }" Example output Active 6.4.3. FileIntegrity custom resource phases Pending - The phase after the custom resource (CR) is created. Active - The phase when the backing daemon set is up and running. Initializing - The phase when the AIDE database is being reinitialized. 6.4.4. Understanding the FileIntegrityNodeStatuses object The scan results of the FileIntegrity CR are reported in another object called FileIntegrityNodeStatuses . USD oc get fileintegritynodestatuses Example output NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s Note It might take some time for the FileIntegrityNodeStatus object results to be available. There is one result object per node. The nodeName attribute of each FileIntegrityNodeStatus object corresponds to the node being scanned. The status of the file integrity scan is represented in the results array, which holds scan conditions. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq The fileintegritynodestatus object reports the latest status of an AIDE run and exposes the status as Failed , Succeeded , or Errored in a status field. 
USD oc get fileintegritynodestatuses -w Example output NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded 6.4.5. FileIntegrityNodeStatus CR status types These conditions are reported in the results array of the corresponding FileIntegrityNodeStatus CR status: Succeeded - The integrity check passed; the files and directories covered by the AIDE check have not been modified since the database was last initialized. Failed - The integrity check failed; some files or directories covered by the AIDE check have been modified since the database was last initialized. Errored - The AIDE scanner encountered an internal error. 6.4.5.1. FileIntegrityNodeStatus CR success example Example output of a condition with a success status [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:57Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:46:03Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:48Z" } ] In this case, all three scans succeeded and so far there are no other conditions. 6.4.5.2. FileIntegrityNodeStatus CR failure status example To simulate a failure condition, modify one of the files AIDE tracks. For example, modify /etc/resolv.conf on one of the worker nodes: USD oc debug node/ip-10-0-130-192.ec2.internal Example output Creating debug namespace/openshift-debug-node-ldfbj ... Starting pod/ip-10-0-130-192ec2internal-debug ... To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo "# integrity test" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod ... Removing debug namespace/openshift-debug-node-ldfbj ... After some time, the Failed condition is reported in the results array of the corresponding FileIntegrityNodeStatus object. The Succeeded condition is retained, which allows you to pinpoint the time the check failed. 
USD oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r Alternatively, if you are not mentioning the object name, run: USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq Example output [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:54:14Z" }, { "condition": "Failed", "filesChanged": 1, "lastProbeTime": "2020-09-15T12:57:20Z", "resultConfigMapName": "aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "resultConfigMapNamespace": "openshift-file-integrity" } ] The Failed condition points to a config map that gives more details about what exactly failed and why: USD oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Example output Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none> Due to the config map data size limit, AIDE logs over 1 MB are added to the failure config map as a base64-encoded gzip archive. In this case, you want to pipe the output of the above command to base64 --decode | gunzip . Compressed logs are indicated by the presence of a file-integrity.openshift.io/compressed annotation key in the config map. 6.4.6. Understanding events Transitions in the status of the FileIntegrity and FileIntegrityNodeStatus objects are logged by events . The creation time of the event reflects the latest transition, such as Initializing to Active , and not necessarily the latest scan result. However, the newest event always reflects the most recent status. USD oc get events --field-selector reason=FileIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active When a node scan fails, an event is created with the add/changed/removed and config map information. 
USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed Changes to the number of added, changed, or removed files result in a new event, even if the status of the node has not transitioned. USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 6.5. Configuring the Custom File Integrity Operator 6.5.1. Viewing FileIntegrity object attributes As with other Kubernetes custom resources (CRs), you can run oc explain fileintegrity , and then look at the individual attributes using: USD oc explain fileintegrity.spec USD oc explain fileintegrity.spec.config 6.5.2. Important attributes Table 6.1. Important spec and spec.config attributes Attribute Description spec.nodeSelector A map of key-value pairs that must match the node's labels in order for the AIDE pods to be schedulable on that node. The typical use is to set only a single key-value pair where node-role.kubernetes.io/worker: "" schedules AIDE on all worker nodes, node.openshift.io/os_id: "rhcos" schedules on all Red Hat Enterprise Linux CoreOS (RHCOS) nodes. spec.debug A boolean attribute. If set to true , the daemons running in the AIDE daemon set's pods output extra information. spec.tolerations Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration is applied, which allows the AIDE pods to run on control plane nodes. 
spec.config.gracePeriod The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node can be resource intensive, so it can be useful to specify a longer interval. Defaults to 900 , or 15 minutes. maxBackups The maximum number of AIDE database and log backups leftover from the re-init process to keep on a node. Older backups beyond this number are automatically pruned by the daemon. spec.config.name Name of a configMap that contains custom AIDE configuration. If omitted, a default configuration is created. spec.config.namespace Namespace of a configMap that contains custom AIDE configuration. If unset, the FIO generates a default configuration suitable for RHCOS systems. spec.config.key Key that contains the actual AIDE configuration in a config map specified by name and namespace . The default value is aide.conf . spec.config.initialDelay The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. This attribute is optional. 6.5.3. Examine the default configuration The default File Integrity Operator configuration is stored in a config map with the same name as the FileIntegrity CR. Procedure To examine the default config, run: USD oc describe cm/worker-fileintegrity 6.5.4. Understanding the default File Integrity Operator configuration Below is an excerpt from the aide.conf key of the config map: @@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\..* PERMS /hostroot/root/ CONTENT_EX The default configuration for a FileIntegrity instance provides coverage for files under the following directories: /root /boot /usr /etc The following directories are not covered: /var /opt Some OpenShift Container Platform-specific excludes under /etc/ 6.5.5. Supplying a custom AIDE configuration Any entries that configure AIDE internal behavior such as DBDIR , LOGDIR , database , and database_out are overwritten by the Operator. The Operator adds the /hostroot/ prefix to all paths to be watched for integrity changes. This makes it easier to reuse existing AIDE configurations, which are often not tailored for a containerized environment and assume paths that start from the root directory. Note /hostroot is the directory where the pods running AIDE mount the host's file system. Changing the configuration triggers a reinitialization of the database. 6.5.6. Defining a custom File Integrity Operator configuration This example focuses on defining a custom configuration for a scanner that runs on the control plane nodes based on the default configuration provided for the worker-fileintegrity CR. This workflow might be useful if you are planning to deploy custom software running as a daemon set and storing its data under /opt/mydaemon on the control plane nodes. Procedure Make a copy of the default configuration. Edit the default configuration with the files that must be watched or excluded. Store the edited contents in a new config map. Point the FileIntegrity object to the new config map through the attributes in spec.config . Extract the default configuration: USD oc extract cm/worker-fileintegrity --keys=aide.conf This creates a file named aide.conf that you can edit. 
To illustrate how the Operator post-processes the paths, this example adds an exclude directory without the prefix: USD vim aide.conf Example output /hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db Exclude a path specific to control plane nodes: !/opt/mydaemon/ Store the other content in /etc : /hostroot/etc/ CONTENT_EX Create a config map based on this file: USD oc create cm master-aide-conf --from-file=aide.conf Define a FileIntegrity CR manifest that references the config map: apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: "" config: name: master-aide-conf namespace: openshift-file-integrity The Operator processes the provided config map file and stores the result in a config map with the same name as the FileIntegrity object: USD oc describe cm/master-fileintegrity | grep /opt/mydaemon Example output !/hostroot/opt/mydaemon 6.5.7. Changing the custom File Integrity configuration To change the File Integrity configuration, never change the generated config map. Instead, change the config map that is linked to the FileIntegrity object through the spec.name , namespace , and key attributes. 6.6. Performing advanced Custom File Integrity Operator tasks 6.6.1. Reinitializing the database If the File Integrity Operator detects a change that was planned, it might be required to reinitialize the database. Procedure Annotate the FileIntegrity custom resource (CR) with file-integrity.openshift.io/re-init : USD oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init= The old database and log files are backed up and a new database is initialized. The old database and logs are retained on the nodes under /etc/kubernetes , as seen in the following output from a pod spawned using oc debug : Example output ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55 To provide some permanence of record, the resulting config maps are not owned by the FileIntegrity object, so manual cleanup is necessary. As a result, any integrity failures would still be visible in the FileIntegrityNodeStatus object. 6.6.2. Machine config integration In OpenShift Container Platform 4, the cluster node configuration is delivered through MachineConfig objects. You can assume that the changes to files that are caused by a MachineConfig object are expected and should not cause the file integrity scan to fail. To suppress changes to files caused by MachineConfig object updates, the File Integrity Operator watches the node objects; when a node is being updated, the AIDE scans are suspended for the duration of the update. When the update finishes, the database is reinitialized and the scans resume. 
This pause and resume logic only applies to updates through the MachineConfig API, as they are reflected in the node object annotations. 6.6.3. Exploring the daemon sets Each FileIntegrity object represents a scan on a number of nodes. The scan itself is performed by pods managed by a daemon set. To find the daemon set that represents a FileIntegrity object, run: USD oc -n openshift-file-integrity get ds/aide-worker-fileintegrity To list the pods in that daemon set, run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity To view logs of a single AIDE pod, call oc logs on one of the pods. USD oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6 Example output Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check ... The config maps created by the AIDE daemon are not retained and are deleted after the File Integrity Operator processes them. However, on failure and error, the contents of these config maps are copied to the config map that the FileIntegrityNodeStatus object points to. 6.7. Troubleshooting the File Integrity Operator 6.7.1. General troubleshooting Issue You want to generally troubleshoot issues with the File Integrity Operator. Resolution Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks. 6.7.2. Checking the AIDE configuration Issue You want to check the AIDE configuration. Resolution The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf . 6.7.3. Determining the FileIntegrity object's phase Issue You want to determine if the FileIntegrity object exists and see its current status. Resolution To see the FileIntegrity object's current status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status }" Once the FileIntegrity object and the backing daemon set are created, the status should switch to Active . If it does not, check the Operator pod logs. 6.7.4. Determining that the daemon set's pods are running on the expected nodes Issue You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on. Resolution Run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity Note Adding -owide includes the IP address of the node that the pod is running on. To check the logs of the daemon pods, run oc logs . Check the return value of the AIDE command to see if the check passed or failed.
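The general troubleshooting steps above can be combined into a short session. The following sketch is illustrative rather than part of the product documentation; it assumes the worker-fileintegrity object and the aide-worker-fileintegrity label used in the earlier examples, and toggles the documented spec.debug attribute with a standard merge patch.
# Turn on verbose logging in the AIDE daemon set pods.
oc -n openshift-file-integrity patch fileintegrity/worker-fileintegrity --type merge -p '{"spec":{"debug":true}}'
# Tail the logs of the AIDE pods that back the FileIntegrity object.
oc -n openshift-file-integrity logs -l app=aide-worker-fileintegrity --tail=50
# Check the object's phase while investigating.
oc -n openshift-file-integrity get fileintegrities/worker-fileintegrity -o jsonpath='{ .status.phase }'
# Revert to normal verbosity once the issue is understood.
oc -n openshift-file-integrity patch fileintegrity/worker-fileintegrity --type merge -p '{"spec":{"debug":false}}'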
[ "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity", "oc create -f <file-name>.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace", "oc get csv -n openshift-file-integrity", "oc get deploy -n openshift-file-integrity", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7", "oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity", "oc get fileintegrities -n openshift-file-integrity", "NAME AGE worker-fileintegrity 14s", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"", "Active", "oc get fileintegritynodestatuses", "NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "oc get fileintegritynodestatuses -w", "NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]", "oc debug node/ip-10-0-130-192.ec2.internal", "Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To 
use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj", "oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r", "oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq", "[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]", "oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>", "oc get events --field-selector reason=FileIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! 
a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc get events --field-selector reason=NodeIntegrityStatus", "LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed", "oc explain fileintegrity.spec", "oc explain fileintegrity.spec.config", "oc describe cm/worker-fileintegrity", "@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX", "oc extract cm/worker-fileintegrity --keys=aide.conf", "vim aide.conf", "/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db", "!/opt/mydaemon/", "/hostroot/etc/ CONTENT_EX", "oc create cm master-aide-conf --from-file=aide.conf", "apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity", "oc describe cm/master-fileintegrity | grep /opt/mydaemon", "!/hostroot/opt/mydaemon", "oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=", "ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 
1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55", "oc -n openshift-file-integrity get ds/aide-worker-fileintegrity", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity", "oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6", "Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check", "oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"", "oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/security_and_compliance/file-integrity-operator
10.2.3. Virtual Host Configuration
10.2.3. Virtual Host Configuration The contents of all <VirtualHost> containers should be migrated in the same way as the main server section as described in Section 10.2.2, "Main Server Configuration" . Important Note that SSL/TLS virtual host configuration has been moved out of the main server configuration file and into /etc/httpd/conf.d/ssl.conf . For more on this topic, refer to the chapter titled Apache HTTP Secure Server Configuration in the System Administrators Guide and the documentation online at the following URL: http://httpd.apache.org/docs-2.0/vhosts/
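Before restarting the service, it can be worth confirming that the migrated containers are actually parsed. The following commands are a generic sketch and assume the Apache HTTP Server 2.0 packages shipped with the operating system; they are not taken from this guide.
# Check the syntax of the migrated configuration, including /etc/httpd/conf.d/ssl.conf
apachectl configtest
# Print the virtual host settings as the server parses them
httpd -S
# Restart the service once the output matches the intended <VirtualHost> layout
service httpd restart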
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-httpd-v2-mig-virtual
3.3. Virtualized Hardware Devices
3.3. Virtualized Hardware Devices Virtualization on Red Hat Enterprise Linux 6 presents three distinct types of system devices to virtual machines. The three types include: Virtualized and Emulated Devices Paravirtualized devices Physically shared devices These hardware devices all appear as being physically attached to the virtual machine but the device drivers work in different ways. 3.3.1. Virtualized and Emulated devices KVM implements many core devices for virtual machines as software. These emulated hardware devices are crucial for virtualizing operating systems. Emulated devices are virtual devices which exist entirely in software. In addition, KVM provides emulated drivers. These form a translation layer between the virtual machine and the Linux kernel (which manages the source device). The device level instructions are completely translated by the KVM hypervisor. Any device of the same type (storage, network, keyboard, or mouse) that is recognized by the Linux kernel can be used as the backing source device for the emulated drivers. Virtual CPUs (vCPUs) A host system can have up to 160 virtual CPUs (vCPUs) that can be presented to guests for use, regardless of the number of host CPUs. Emulated system components The following core system components are emulated to provide basic system functions: Intel i440FX host PCI bridge PIIX3 PCI to ISA bridge PS/2 mouse and keyboard EvTouch USB Graphics Tablet PCI UHCI USB controller and a virtualized USB hub Emulated serial ports EHCI controller, virtualized USB storage and a USB mouse Emulated storage drivers Storage devices and storage pools can use these emulated devices to attach storage devices to virtual machines. The guest uses an emulated storage driver to access the storage pool. Note that like all virtual devices, the storage drivers are not storage devices. The drivers are used to attach a backing storage device, file or storage pool volume to a virtual machine. The backing storage device can be any supported type of storage device, file, or storage pool volume. The emulated IDE driver KVM provides two emulated PCI IDE interfaces. An emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtual machine. The emulated IDE driver is also used for virtualized CD-ROM and DVD-ROM drives. The emulated floppy disk drive driver The emulated floppy disk drive driver is used for creating virtualized floppy drives. Emulated sound devices Red Hat Enterprise Linux 6.1 and above provides an emulated (Intel) HDA sound device, intel-hda . 
This device is supported on the following guest operating systems: Red Hat Enterprise Linux 6, for the 32-bit AMD and Intel architecture, and AMD64 and Intel 64 architectures Red Hat Enterprise Linux 5, for i386, and the 32-bit AMD and Intel architecture and Intel 64 architectures Red Hat Enterprise Linux 4, for i386 and the 32-bit AMD and Intel architecture and Intel 64 architectures Windows 7, for i386 and AMD64 and Intel 64 architectures Windows 2008 R2, for the AMD64 and Intel 64 architecture Note The following emulated sound devices are also available, but are not recommended due to compatibility issues with certain guest operating systems: ac97 , an emulated Intel 82801AA AC97 Audio compatible sound card es1370 , an emulated ENSONIQ AudioPCI ES1370 sound card Emulated graphics cards The following emulated graphics devices are provided: A Cirrus CLGD 5446 PCI VGA card A standard VGA graphics card with Bochs VESA extensions (hardware level, including all non-standard modes) Guests can connect to these devices with the Simple Protocol for Independent Computing Environments (SPICE) protocol or with the Virtual Network Computing (VNC) system. Emulated network devices The following emulated network devices are provided: The e1000 device emulates an Intel E1000 network adapter (Intel 82540EM, 82573L, 82544GC). The rtl8139 device emulates a Realtek 8139 network adapter. Emulated watchdog devices Red Hat Enterprise Linux 6 provides two emulated watchdog devices. A watchdog can be used to automatically reboot a virtual machine when it becomes overloaded or unresponsive. The watchdog package must be installed on the guest. The two devices available are: i6300esb , an emulated Intel 6300 ESB PCI watchdog device. It is supported in guest operating systems Red Hat Enterprise Linux 6.0 and above, and is the recommended device to use. ib700 , an emulated iBase 700 ISA watchdog device. The ib700 watchdog device is only supported in guests using Red Hat Enterprise Linux 6.2 and above. Both watchdog devices are supported in the 32-bit AMD and Intel architecture and AMD64 and Intel 64 architectures for guest operating systems Red Hat Enterprise Linux 6.2 and above.
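To show how several of the emulated devices described above are selected when a guest is defined, the following virt-install sketch attaches an emulated IDE disk, an e1000 network adapter, and the recommended i6300esb watchdog. This is an illustration only: the guest name, the disk image path, and the default libvirt network are assumptions, and the guest is imported from an existing image rather than installed.
# Define a guest that uses the emulated IDE, e1000, and i6300esb devices (names and paths are placeholders).
virt-install \
  --name rhel6-emulated-demo \
  --ram 2048 --vcpus 2 \
  --import \
  --disk path=/var/lib/libvirt/images/rhel6-emulated-demo.img,bus=ide \
  --network network=default,model=e1000 \
  --watchdog i6300esb,action=reset \
  --graphics vnc
As noted above, the watchdog package must still be installed inside the guest for the i6300esb device's reset action to be useful.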
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_getting_started_guide/sec-virtualized-hardware-devices
Appendix B. Health messages of a Ceph cluster
Appendix B. Health messages of a Ceph cluster There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Table B.1. Monitor Health Code Description DAEMON_OLD_VERSION Warns if old versions of Ceph are running on any daemons. It generates a health error if multiple versions are detected. MON_DOWN One or more Ceph Monitor daemons are currently down. MON_CLOCK_SKEW The clocks on the nodes running the ceph-mon daemons are not sufficiently well synchronized. Resolve it by synchronizing the clocks using ntpd or chrony . MON_MSGR2_NOT_ENABLED The ms_bind_msgr2 option is enabled but one or more Ceph Monitors is not configured to bind to a v2 port in the cluster's monmap. Resolve this by running the ceph mon enable-msgr2 command. MON_DISK_LOW One or more Ceph Monitors are low on disk space. MON_DISK_CRIT One or more Ceph Monitors are critically low on disk space. MON_DISK_BIG The database size for one or more Ceph Monitors is very large. AUTH_INSECURE_GLOBAL_ID_RECLAIM One or more clients or daemons are connected to the storage cluster that are not securely reclaiming their global_id when reconnecting to a Ceph Monitor. AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED Ceph is currently configured to allow clients to reconnect to monitors using an insecure process to reclaim their global_id because the setting auth_allow_insecure_global_id_reclaim is set to true . Table B.2. Manager Health Code Description MGR_DOWN All Ceph Manager daemons are currently down. MGR_MODULE_DEPENDENCY An enabled Ceph Manager module is failing its dependency check. MGR_MODULE_ERROR A Ceph Manager module has experienced an unexpected error. Typically, this means an unhandled exception was raised from the module's serve function. Table B.3. OSDs Health Code Description OSD_DOWN One or more OSDs are marked down. OSD_CRUSH_TYPE_DOWN All the OSDs within a particular CRUSH subtree are marked down, for example, all OSDs on a host. The check is reported as, for example, OSD_HOST_DOWN or OSD_ROOT_DOWN . OSD_ORPHAN An OSD is referenced in the CRUSH map hierarchy but does not exist. Remove the OSD by running the ceph osd crush rm osd.OSD_ID command. OSD_OUT_OF_ORDER_FULL The utilization thresholds for nearfull , backfillfull , full , or failsafefull are not ascending. Adjust the thresholds by running the ceph osd set-nearfull-ratio RATIO , ceph osd set-backfillfull-ratio RATIO , and ceph osd set-full-ratio RATIO commands. OSD_FULL One or more OSDs has exceeded the full threshold and is preventing the storage cluster from servicing writes. Restore write availability by raising the full threshold by a small margin with the ceph osd set-full-ratio RATIO command. OSD_BACKFILLFULL One or more OSDs has exceeded the backfillfull threshold, which will prevent data from being allowed to rebalance to this device. OSD_NEARFULL One or more OSDs has exceeded the nearfull threshold. OSDMAP_FLAGS One or more storage cluster flags of interest have been set. These flags include full , pauserd , pausewr , noup , nodown , noin , noout , nobackfill , norecover , norebalance , noscrub , nodeep_scrub , and notieragent . Except for full , the flags can be cleared with the ceph osd set FLAG and ceph osd unset FLAG commands. OSD_FLAGS One or more OSDs or CRUSH nodes has a flag of interest set. These flags include noup , nodown , noin , and noout . 
OLD_CRUSH_TUNABLES The CRUSH map is using very old settings and should be updated. OLD_CRUSH_STRAW_CALC_VERSION The CRUSH map is using an older, non-optimal method for calculating intermediate weight values for straw buckets. CACHE_POOL_NO_HIT_SET One or more cache pools is not configured with a hit set to track utilization, which will prevent the tiering agent from identifying cold objects to flush and evict from the cache. Configure the hit sets on the cache pool with the ceph osd pool set POOL_NAME hit_set_type TYPE , ceph osd pool set POOL_NAME hit_set_period PERIOD_IN_SECONDS , ceph osd pool set POOL_NAME hit_set_count NUMBER_OF_HIT_SETS , and ceph osd pool set POOL_NAME hit_set_fpp TARGET_FALSE_POSITIVE_RATE commands. OSD_NO_SORTBITWISE The sortbitwise flag is not set. Set the flag with the ceph osd set sortbitwise command. POOL_FULL One or more pools has reached its quota and is no longer allowing writes. Increase the pool quota with the ceph osd pool set-quota POOL_NAME max_objects NUMBER_OF_OBJECTS and ceph osd pool set-quota POOL_NAME max_bytes BYTES commands, or delete some existing data to reduce utilization. BLUEFS_SPILLOVER One or more OSDs that use the BlueStore backend is allocated db partitions, but that space has filled, such that metadata has "spilled over" onto the normal slow device. Disable this warning with the ceph config set osd bluestore_warn_on_bluefs_spillover false command. BLUEFS_AVAILABLE_SPACE This output gives three values: BDEV_DB free , BDEV_SLOW free , and available_from_bluestore . BLUEFS_LOW_SPACE If the BlueStore File System (BlueFS) is running low on available free space and there is little available_from_bluestore , consider reducing the BlueFS allocation unit size. BLUESTORE_FRAGMENTATION As BlueStore operates, free space on the underlying storage becomes fragmented. This is normal and unavoidable, but excessive fragmentation causes slowdown. BLUESTORE_LEGACY_STATFS BlueStore tracks its internal usage statistics on a per-pool granular basis, and one or more OSDs have BlueStore volumes that were created before this per-pool tracking was available. Disable the warning with the ceph config set global bluestore_warn_on_legacy_statfs false command. BLUESTORE_NO_PER_POOL_OMAP BlueStore tracks omap space utilization by pool. Disable the warning with the ceph config set global bluestore_warn_on_no_per_pool_omap false command. BLUESTORE_NO_PER_PG_OMAP BlueStore tracks omap space utilization by PG. Disable the warning with the ceph config set global bluestore_warn_on_no_per_pg_omap false command. BLUESTORE_DISK_SIZE_MISMATCH One or more OSDs using BlueStore has an internal inconsistency between the size of the physical device and the metadata tracking its size. BLUESTORE_NO_COMPRESSION One or more OSDs is unable to load a BlueStore compression plugin. This can be caused by a broken installation, in which the ceph-osd binary does not match the compression plugins, or a recent upgrade that did not include a restart of the ceph-osd daemon. BLUESTORE_SPURIOUS_READ_ERRORS One or more OSDs using BlueStore detects spurious read errors on the main device. BlueStore has recovered from these errors by retrying disk reads. Table B.4. Device health Health Code Description DEVICE_HEALTH One or more devices is expected to fail soon, where the warning threshold is controlled by the mgr/devicehealth/warn_threshold config option. Mark the device out to migrate the data and replace the hardware. 
DEVICE_HEALTH_IN_USE One or more devices is expected to fail soon and has been marked "out" of the storage cluster based on mgr/devicehealth/mark_out_threshold , but it is still participating in one or more PGs. DEVICE_HEALTH_TOOMANY Too many devices are expected to fail soon and the mgr/devicehealth/self_heal behavior is enabled, such that marking out all of the ailing devices would exceed the cluster's mon_osd_min_in_ratio ratio that prevents too many OSDs from being automatically marked out . Table B.5. Pools and placement groups Health Code Description PG_AVAILABILITY Data availability is reduced, meaning that the storage cluster is unable to service potential read or write requests for some data in the cluster. PG_DEGRADED Data redundancy is reduced for some data, meaning the storage cluster does not have the desired number of replicas for replicated pools or erasure code fragments. PG_RECOVERY_FULL Data redundancy might be reduced or at risk for some data due to a lack of free space in the storage cluster, specifically, one or more PGs has the recovery_toofull flag set, which means that the cluster is unable to migrate or recover data because one or more OSDs is above the full threshold. PG_BACKFILL_FULL Data redundancy might be reduced or at risk for some data due to a lack of free space in the storage cluster, specifically, one or more PGs has the backfill_toofull flag set, which means that the cluster is unable to migrate or recover data because one or more OSDs is above the backfillfull threshold. PG_DAMAGED Data scrubbing has discovered some problems with data consistency in the storage cluster, specifically, one or more PGs has the inconsistent or snaptrim_error flag set, indicating an earlier scrub operation found a problem, or that the repair flag is set, meaning a repair for such an inconsistency is currently in progress. OSD_SCRUB_ERRORS Recent OSD scrubs have uncovered inconsistencies. OSD_TOO_MANY_REPAIRS When a read error occurs and another replica is available, it is used to repair the error immediately, so that the client can get the object data. LARGE_OMAP_OBJECTS One or more pools contain large omap objects as determined by osd_deep_scrub_large_omap_object_key_threshold or osd_deep_scrub_large_omap_object_value_sum_threshold or both. Adjust the thresholds with the ceph config set osd osd_deep_scrub_large_omap_object_key_threshold KEYS and ceph config set osd osd_deep_scrub_large_omap_object_value_sum_threshold BYTES commands. CACHE_POOL_NEAR_FULL A cache tier pool is nearly full. Adjust the cache pool target size with the ceph osd pool set CACHE_POOL_NAME target_max_bytes BYTES and ceph osd pool set CACHE_POOL_NAME target_max_objects OBJECTS commands. TOO_FEW_PGS The number of PGs in use in the storage cluster is below the configurable threshold of mon_pg_warn_min_per_osd PGs per OSD. POOL_PG_NUM_NOT_POWER_OF_TWO One or more pools has a pg_num value that is not a power of two. Disable the warning with the ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false command. POOL_TOO_FEW_PGS One or more pools should probably have more PGs, based on the amount of data that is currently stored in the pool. You can either disable auto-scaling of PGs with the ceph osd pool set POOL_NAME pg_autoscale_mode off command, automatically adjust the number of PGs with the ceph osd pool set POOL_NAME pg_autoscale_mode on command, or manually set the number of PGs with the ceph osd pool set POOL_NAME pg_num NEW_PG_NUMBER command. 
TOO_MANY_PGS The number of PGs in use in the storage cluster is above the configurable threshold of mon_max_pg_per_osd PGs per OSD. Increase the number of OSDs in the cluster by adding more hardware. POOL_TOO_MANY_PGS One or more pools should probably have fewer PGs, based on the amount of data that is currently stored in the pool. You can either disable auto-scaling of PGs with the ceph osd pool set POOL_NAME pg_autoscale_mode off command, automatically adjust the number of PGs with the ceph osd pool set POOL_NAME pg_autoscale_mode on command, or manually set the number of PGs with the ceph osd pool set POOL_NAME pg_num NEW_PG_NUMBER command. POOL_TARGET_SIZE_BYTES_OVERCOMMITTED One or more pools have a target_size_bytes property set to estimate the expected size of the pool, but the values exceed the total available storage. Set the value for the pool to zero with the ceph osd pool set POOL_NAME target_size_bytes 0 command. POOL_HAS_TARGET_SIZE_BYTES_AND_RATIO One or more pools have both target_size_bytes and target_size_ratio set to estimate the expected size of the pool. Set the value for the pool to zero with the ceph osd pool set POOL_NAME target_size_bytes 0 command. TOO_FEW_OSDS The number of OSDs in the storage cluster is below the configurable threshold of osd_pool_default_size . SMALLER_PGP_NUM One or more pools has a pgp_num value less than pg_num . This is normally an indication that the PG count was increased without also increasing the placement behavior. Resolve this by setting pgp_num to match pg_num with the ceph osd pool set POOL_NAME pgp_num PG_NUM_VALUE command. MANY_OBJECTS_PER_PG One or more pools has an average number of objects per PG that is significantly higher than the overall storage cluster average. The specific threshold is controlled by the mon_pg_warn_max_object_skew configuration value. POOL_APP_NOT_ENABLED A pool exists that contains one or more objects but has not been tagged for use by a particular application. Resolve this warning by labeling the pool for use by an application with the rbd pool init POOL_NAME command. POOL_FULL One or more pools has reached its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. POOL_NEAR_FULL One or more pools is approaching a configured fullness threshold. Adjust the pool quotas with the ceph osd pool set-quota POOL_NAME max_objects NUMBER_OF_OBJECTS and ceph osd pool set-quota POOL_NAME max_bytes BYTES commands. OBJECT_MISPLACED One or more objects in the storage cluster is not stored on the node the storage cluster would like it to be stored on. This is an indication that data migration due to some recent storage cluster change has not yet completed. OBJECT_UNFOUND One or more objects in the storage cluster cannot be found, specifically, the OSDs know that a new or updated copy of an object should exist, but a copy of that version of the object has not been found on OSDs that are currently online. SLOW_OPS One or more OSD or monitor requests is taking a long time to process. This can be an indication of extreme load, a slow storage device, or a software bug. PG_NOT_SCRUBBED One or more PGs has not been scrubbed recently. PGs are normally scrubbed within the interval specified globally by osd_scrub_max_interval . Initiate the scrub with the ceph pg scrub PG_ID command. PG_NOT_DEEP_SCRUBBED One or more PGs has not been deep scrubbed recently. Initiate the deep scrub with the ceph pg deep-scrub PG_ID command. 
PGs are normally scrubbed every osd_deep_scrub_interval seconds, and this warning triggers when mon_warn_pg_not_deep_scrubbed_ratio percentage of the interval has elapsed without a scrub since it was due. PG_SLOW_SNAP_TRIMMING The snapshot trim queue for one or more PGs has exceeded the configured warning threshold. This indicates that either an extremely large number of snapshots were recently deleted, or that the OSDs are unable to trim snapshots quickly enough to keep up with the rate of new snapshot deletions. Table B.6. Miscellaneous Health Code Description RECENT_CRASH One or more Ceph daemons has crashed recently, and the crash has not yet been acknowledged by the administrator. TELEMETRY_CHANGED Telemetry has been enabled, but the contents of the telemetry report have changed since that time, so telemetry reports will not be sent. AUTH_BAD_CAPS One or more auth users has capabilities that cannot be parsed by the monitor. Update the capabilities of the user with the ceph auth caps ENTITY_NAME DAEMON_TYPE CAPS command. OSD_NO_DOWN_OUT_INTERVAL The mon_osd_down_out_interval option is set to zero, which means that the system will not automatically perform any repair or healing operations after an OSD fails. Silence the warning with the ceph config set global mon_warn_on_osd_down_out_interval_zero false command. DASHBOARD_DEBUG The Dashboard debug mode is enabled. This means that if there is an error while processing a REST API request, the HTTP error response contains a Python traceback. Disable the debug mode with the ceph dashboard debug disable command.
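The identifiers in the tables above appear in the output of the ceph health commands. The following sketch, which is not part of the reference tables, shows one way to list the active checks and temporarily mute a specific check while you work on it; the check name OSD_NEARFULL and the 1h duration are example values only.
# List active health checks and overall cluster state.
ceph health
ceph health detail
ceph status

# Temporarily silence a specific check, for example OSD_NEARFULL, for one hour,
# then clear the mute once the underlying issue is resolved.
ceph health mute OSD_NEARFULL 1h
ceph health unmute OSD_NEARFULL
Muting only suppresses the report of a check; it does not resolve the underlying condition, so the corrective actions listed in the tables still apply.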
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/troubleshooting_guide/health-messages-of-a-ceph-cluster_diag
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/working_with_vaults_in_identity_management/proc_providing-feedback-on-red-hat-documentation_working-with-vaults-in-identity-management
Chapter 1. About Telemetry
Chapter 1. About Telemetry Red Hat Advanced Cluster Security for Kubernetes (RHACS) collects anonymized aggregated information about product usage and product configuration. It helps Red Hat understand how everyone uses the product and identify areas to prioritize for improvements. In addition, Red Hat uses this information to improve the user experience. 1.1. Information collected by Telemetry Telemetry does not collect identifying information such as user names, passwords, or the names or addresses of user resources. Note Telemetry data collection is enabled by default, except for installations with offline mode enabled. Telemetry collects the following information: API, roxctl CLI, and user interface (UI) features and settings, to understand how you use Red Hat Advanced Cluster Security for Kubernetes (RHACS), which helps prioritize efforts. The time you spend on UI screens, to help us improve the user experience. The integrations you use, to identify integrations that have never been used. The number of connected secured clusters and their configurations. Errors you encounter, to identify the most common problems.
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/telemetry/about-telemetry