17.5. Routed Mode
17.5. Routed Mode When using Routed mode, the virtual switch connects to the physical LAN connected to the host physical machine, passing traffic back and forth without the use of NAT. The virtual switch can examine all traffic and use the information contained within the network packets to make routing decisions. When using this mode, all of the virtual machines are in their own subnet, routed through a virtual switch. This situation is not always ideal: without manual physical router configuration, no other host physical machines on the physical network are aware of the virtual machines, and they cannot access the virtual machines. Routed mode operates at Layer 3 of the OSI networking model. Figure 17.5. Virtual network switch in routed mode
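As a minimal sketch of how a routed-mode virtual network might be defined with libvirt (the interface name eth0 and the 192.168.100.0/24 addressing below are illustrative assumptions, not values from this guide):

cat > /tmp/routed-net.xml <<'EOF'
<network>
  <name>routed-net</name>
  <!-- forward mode='route' selects Routed mode; dev names the host interface to route through (assumed here) -->
  <forward mode='route' dev='eth0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.100' end='192.168.100.200'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define /tmp/routed-net.xml   # register the network with libvirt
virsh net-start routed-net             # start the network now
virsh net-autostart routed-net         # start the network automatically with libvirtd

As noted above, the physical router must still be configured manually with a route to the 192.168.100.0/24 subnet before other host physical machines can reach the virtual machines.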
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-networking_protocols-routed_mode
Chapter 2. MTR 1.2.7
Chapter 2. MTR 1.2.7 2.1. Known issues The following known issues are in the MTR 1.2.7 release: For a complete list of all known issues, see the list of MTR 1.2.7 known issues in Jira. 2.2. Resolved issues MTR 1.2.7 has the following resolved issues: MTR 1.2.0 fails with the exception java.lang.ClassNotFoundException:org.eclipse.text.edits.MalformedTreeException In earlier versions of MTR 1.2.z, when migrating an application from JBoss Enterprise Application Platform (EAP) 7 to EAP 8, there could be a failure with the following java.lang.ClassNotFoundException: java.lang.ClassNotFoundException: org.eclipse.text.edits.MalformedTreeException from [Module "org.jboss.windup.ast.windup-java-ast:6.3.1.Final-redhat-00002_67e96e90-d3bc-44fe-8fc8-ac2abdeacc58" from AddonModuleLoader] This issue has been resolved in MTR 1.2.7. (WINDUP-4200) CVE-2022-36033: org.jsoup/jsoup: The jsoup cleaner may incorrectly sanitize crafted XSS attempts if SafeList.preserveRelativeLinks is enabled A flaw was discovered in jsoup, a Java HTML parser built for HTML editing, cleaning, scraping, and cross-site scripting (XSS) safety. An issue in jsoup could incorrectly sanitize HTML, including javascript: URL expressions, which could allow XSS attacks when a reader subsequently clicks that link. If the non-default SafeList.preserveRelativeLinks option is enabled, HTML, including javascript: URLs crafted with control characters, will not be sanitized. Users are recommended to upgrade to MTR 1.2.7, which resolves this issue. For more details, see (CVE-2022-36033). For a complete list of all issues resolved in this release, see the list of MTR 1.2.7 resolved issues in Jira.
[ "java.lang.ClassNotFoundException: org.eclipse.text.edits.MalformedTreeException from [Module \"org.jboss.windup.ast.windup-java-ast:6.3.1.Final-redhat-00002_67e96e90-d3bc-44fe-8fc8-ac2abdeacc58\" from AddonModuleLoader]" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/release_notes/mtr_1_2_7
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialog.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/customizing_anaconda/proc_providing-feedback-on-red-hat-documentation_customizing-anaconda
Chapter 8. Dynamic provisioning
Chapter 8. Dynamic provisioning 8.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 8.2. Available dynamic provisioning plugins OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. Amazon Elastic Block Store (Amazon EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. IBM Power Virtual Server Block powervs.csi.ibm.com After installation, the IBM Power Virtual Server Block CSI Driver Operator and IBM Power Virtual Server Block CSI Driver automatically create the required storage classes for dynamic provisioning. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 8.3. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the Operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. 
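Storage classes are ultimately consumed through persistent volume claims. As a minimal sketch (the claim name, storage class name, and size below are illustrative assumptions, not values taken from this chapter), a PVC that requests dynamically provisioned storage from a named storage class might look like this:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim                      # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-class  # must match the metadata.name of a StorageClass
  resources:
    requests:
      storage: 10Gi                   # requested capacity; the provisioner creates a matching volume
EOF

If storageClassName is omitted, the claim is provisioned through the default storage class, as described in the annotations section that follows.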
The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 8.3.1. Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner; these vary from plugin to plugin. 8.3.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a storage class to be provisioned automatically through the default storage class. Your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 8.3.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Volume type created in Cinder. Default is empty. 3 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 4 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.4. RHOSP Manila Container Storage Interface (CSI) object definition Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. 8.3.5. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class.
The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp3 , sc1 , st1 . The default is gp3 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encrypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.3.6. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 3 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 4 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 8.3.7. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure.
Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 8.3.7.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 8.3.8. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select either pd-standard or pd-ssd . The default is pd-standard . 8.3.9. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 
2 For more information about using VMware vSphere CSI with OpenShift Container Platform, see the Kubernetes documentation . 8.4. Changing the default storage class Use the following procedure to change the default storage class. For example, if you have two defined storage classes, gp3 and standard , and you want to change the default storage class from gp3 to standard . Prerequisites Access to the cluster with cluster-admin privileges. Procedure To change the default storage class: List the storage classes: USD oc get storageclass Example output NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) indicates the default storage class. Make the desired storage class the default. For the desired storage class, set the storageclass.kubernetes.io/is-default-class annotation to true by running the following command: USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Note You can have multiple default storage classes for a short time. However, you should ensure that only one default storage class exists eventually. With multiple default storage classes present, any persistent volume claim (PVC) requesting the default storage class ( pvc.spec.storageClassName =nil) gets the most recently created default storage class, regardless of the default status of that storage class, and the administrator receives an alert in the alerts dashboard that there are multiple default storage classes, MultipleDefaultStorageClasses . Remove the default storage class setting from the old default storage class. For the old default storage class, change the value of the storageclass.kubernetes.io/is-default-class annotation to false by running the following command: USD oc patch storageclass gp3 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs
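If you manage several storage classes, a quick way to see which one currently carries the default annotation is to read it directly. This is a small convenience sketch; it assumes only the storageclass.kubernetes.io/is-default-class annotation shown above:

oc get storageclass -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}{"\n"}{end}'

Each line of output shows a storage class name followed by true, false, or an empty value, so exactly one entry should read true once the procedure above is complete.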
[ "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp3", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: csi.vsphere.vmware.com 2", "oc get storageclass", "NAME TYPE gp3 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc patch storageclass gp3 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc get storageclass", "NAME TYPE gp3 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage/dynamic-provisioning
Chapter 7. Kafka configuration
Chapter 7. Kafka configuration A deployment of Kafka components to an OpenShift cluster using AMQ Streams is highly configurable through the application of custom resources. Custom resources are created as instances of APIs added by Custom resource definitions (CRDs) to extend OpenShift resources. CRDs act as configuration instructions to describe the custom resources in an OpenShift cluster, and are provided with AMQ Streams for each Kafka component used in a deployment, as well as users and topics. CRDs and custom resources are defined as YAML files. Example YAML files are provided with the AMQ Streams distribution. CRDs also allow AMQ Streams resources to benefit from native OpenShift features like CLI accessibility and configuration validation. In this section we look at how Kafka components are configured through custom resources, starting with common configuration points and then important configuration considerations specific to components. AMQ Streams provides example configuration files , which can serve as a starting point when building your own Kafka component configuration for deployment. 7.1. Custom resources After a new custom resource type is added to your cluster by installing a CRD, you can create instances of the resource based on its specification. The custom resources for AMQ Streams components have common configuration properties, which are defined under spec . In this fragment from a Kafka topic custom resource, the apiVersion and kind properties identify the associated CRD. The spec property shows configuration that defines the number of partitions and replicas for the topic. Kafka topic custom resource apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 1 # ... There are many additional configuration options that can be incorporated into a YAML definition, some common and some specific to a particular component. Additional resources Extend the Kubernetes API with CustomResourceDefinitions 7.2. Common configuration Some of the configuration options common to resources are described here. Security and metrics collection might also be adopted where applicable. Bootstrap servers Bootstrap servers are used for host/port connection to a Kafka cluster for: Kafka Connect Kafka Bridge Kafka MirrorMaker producers and consumers CPU and memory resources You request CPU and memory resources for components. Limits specify the maximum resources that can be consumed by a given container. Resource requests and limits for the Topic Operator and User Operator are set in the Kafka resource. Logging You define the logging level for the component. Logging can be defined directly (inline) or externally using a config map. Healthchecks Healthcheck configuration introduces liveness and readiness probes to know when to restart a container (liveness) and when a container can accept traffic (readiness). JVM options JVM options provide maximum and minimum memory allocation to optimize the performance of the component according to the platform it is running on. Pod scheduling Pod schedules use affinity/anti-affinity rules to determine under what circumstances a pod is scheduled onto a node. Example YAML showing common configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-cluster spec: # ... 
bootstrapServers: my-cluster-kafka-bootstrap:9092 resources: requests: cpu: 12 memory: 64Gi limits: cpu: 12 memory: 64Gi logging: type: inline loggers: connect.root.logger.level: "INFO" readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: "-Xmx": "2g" "-Xms": "2g" template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # ... 7.3. Kafka cluster configuration A kafka cluster comprises one or more brokers. For producers and consumers to be able to access topics within the brokers, Kafka configuration must define how data is stored in the cluster, and how the data is accessed. You can configure a Kafka cluster to run with multiple broker nodes across racks . Storage Kafka and ZooKeeper store data on disks. AMQ Streams requires block storage provisioned through StorageClass . The file system format for storage must be XFS or EXT4 . Three types of data storage are supported: Ephemeral (Recommended for development only) Ephemeral storage stores data for the lifetime of an instance. Data is lost when the instance is restarted. Persistent Persistent storage relates to long-term data storage independent of the lifecycle of the instance. JBOD (Just a Bunch of Disks, suitable for Kafka only) JBOD allows you to use multiple disks to store commit logs in each broker. The disk capacity used by an existing Kafka cluster can be increased if supported by the infrastructure. Listeners Listeners configure how clients connect to a Kafka cluster. By specifying a unique name and port for each listener within a Kafka cluster, you can configure multiple listeners. The following types of listener are supported: Internal listeners for access within OpenShift External listeners for access outside of OpenShift You can enable TLS encryption for listeners, and configure authentication . Internal listeners are specified using an internal type. External listeners expose Kafka by specifying an external type : route to use OpenShift routes and the default HAProxy router loadbalancer to use loadbalancer services nodeport to use ports on OpenShift nodes ingress to use OpenShift Ingress and the NGINX Ingress Controller for Kubernetes If you are using OAuth 2.0 for token-based authentication , you can configure listeners to use the authorization server. Rack awareness Rack awareness is a configuration feature that distributes Kafka broker pods and topic replicas across racks , which represent data centers or racks in data centers, or availability zones. Example YAML showing Kafka configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true authentication: type: tls # ... storage: type: persistent-claim size: 10000Gi # ... rack: topologyKey: topology.kubernetes.io/zone # ... 7.4. Kafka MirrorMaker configuration To set up MirrorMaker, a source and target (destination) Kafka cluster must be running. You can use AMQ Streams with MirrorMaker 2.0, although the earlier version of MirrorMaker continues to be supported. MirrorMaker 2.0 MirrorMaker 2.0 is based on the Kafka Connect framework, connectors managing the transfer of data between clusters. 
MirrorMaker 2.0 uses: Source cluster configuration to consume data from the source cluster Target cluster configuration to output data to the target cluster Cluster configuration You can use MirrorMaker 2.0 in active/passive or active/active cluster configurations. In an active/active configuration, both clusters are active and provide the same data simultaneously, which is useful if you want to make the same data available locally in different geographical locations. In an active/passive configuration, the data from an active cluster is replicated in a passive cluster, which remains on standby, for example, for data recovery in the event of system failure. You configure a KafkaMirrorMaker2 custom resource to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2.0 connectors to make the connection. Topic configuration is automatically synchronized between the source and target clusters according to the topics defined in the KafkaMirrorMaker2 custom resource. Configuration changes are propagated to remote topics so that new topics and partitions are detected and created. Topic replication is defined using regular expression patterns to include or exclude topics. The following MirrorMaker 2.0 connectors and related internal topics help manage the transfer and synchronization of data between the clusters. MirrorSourceConnector A MirrorSourceConnector creates remote topics from the source cluster. MirrorCheckpointConnector A MirrorCheckpointConnector tracks and maps offsets for specified consumer groups using an offset sync topic and checkpoint topic. The offset sync topic maps the source and target offsets for replicated topic partitions from record metadata. A checkpoint is emitted from each source cluster and replicated in the target cluster through the checkpoint topic. The checkpoint topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. MirrorHeartbeatConnector A MirrorHeartbeatConnector periodically checks connectivity between clusters. A heartbeat is produced every second by the MirrorHeartbeatConnector into a heartbeat topic that is created on the local cluster. If you have MirrorMaker 2.0 at both the remote and local locations, the heartbeat emitted at the remote location by the MirrorHeartbeatConnector is treated like any remote topic and mirrored by the MirrorSourceConnector at the local cluster. The heartbeat topic makes it easy to check that the remote cluster is available and the clusters are connected. If things go wrong, the heartbeat topic offset positions and time stamps can help with recovery and diagnosis. Figure 7.1. Replication across two clusters Bidirectional replication across two clusters The MirrorMaker 2.0 architecture supports bidirectional replication in an active/active cluster configuration, so both clusters are active and provide the same data simultaneously. A MirrorMaker 2.0 cluster is required at each target destination. Remote topics are distinguished by automatic renaming that prepends the name of cluster to the name of the topic. This is useful if you want to make the same data available locally in different geographical locations. However, if you want to backup or migrate data in an active/passive cluster configuration, you might want to keep the original names of the topics. If so, you can configure MirrorMaker 2.0 to turn off automatic renaming. Figure 7.2. 
Bidirectional replication Example YAML showing MirrorMaker 2.0 configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.1.0 connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-source" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: "my-cluster-target" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: {} topicsPattern: ".*" groupsPattern: "group1|group2|group3" MirrorMaker The earlier version of MirrorMaker uses producers and consumers to replicate data across clusters. MirrorMaker uses: Consumer configuration to consume data from the source cluster Producer configuration to output data to the target cluster Consumer and producer configuration includes any authentication and encryption settings. The include field defines the topics to mirror from a source to a target cluster. Key Consumer configuration Consumer group identifier The consumer group ID for a MirrorMaker consumer so that messages consumed are assigned to a consumer group. Number of consumer streams A value to determine the number of consumers in a consumer group that consume a message in parallel. Offset commit interval An offset commit interval to set the time between consuming and committing a message. Key Producer configuration Cancel option for send failure You can define whether a message send failure is ignored or MirrorMaker is terminated and recreated. Example YAML showing MirrorMaker configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # ... consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 groupId: "my-group" numStreams: 2 offsetCommitInterval: 120000 # ... producer: # ... abortOnSendFailure: false # ... include: "my-topic|other-topic" # ... 7.5. Kafka Connect configuration Use AMQ Streams's KafkaConnect resource to quickly and easily create new Kafka Connect clusters. When you deploy Kafka Connect using the KafkaConnect resource, you specify bootstrap server addresses (in spec.bootstrapServers ) for connecting to a Kafka cluster. You can specify more than one address in case a server goes down. You also specify the authentication credentials and TLS encryption certificates to make a secure connection. Note The Kafka cluster doesn't need to be managed by AMQ Streams or deployed to an OpenShift cluster. You can also use the KafkaConnect resource to specify the following: Plugin configuration to build a container image that includes the plugins to make connections Configuration for the worker pods that belong to the Kafka Connect cluster An annotation to enable use of the KafkaConnector resource to manage plugins The Cluster Operator manages Kafka Connect clusters deployed using the KafkaConnect resource and connectors created using the KafkaConnector resource. Plugin configuration Plugins provide the implementation for creating connector instances. When a plugin is instantiated, configuration is provided for connection to a specific type of external data system. Plugins provide a set of one or more JAR files that define a connector and task implementation for connecting to a given kind of data source. Plugins for many external systems are available for use with Kafka Connect. You can also create your own plugins. The configuration describes the source input data and target output data to feed into and out of Kafka Connect. 
For a source connector, external source data must reference specific topics that will store the messages. The plugins might also contain the libraries and files needed to transform the data. A Kafka Connect deployment can have one or more plugins, but only one version of each plugin. You can create a custom Kafka Connect image that includes your choice of plugins. You can create the image in two ways: Automatically using Kafka Connect configuration Manually using a Dockerfile and a Kafka container image as a base image To create the container image automatically, you specify the plugins to add to your Kafka Connect cluster using the build property of the KafkaConnect resource. AMQ Streams automatically downloads and adds the plugin artifacts to a new container image. Example plugin configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" spec: # ... build: 1 output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https:// ARTIFACT-ADDRESS .tgz sha512sum: HASH-NUMBER-TO-VERIFY-ARTIFACT # ... # ... 1 Build configuration properties for building a container image with plugins automatically. 2 Configuration of the container registry where new images are pushed. The output properties describe the type and name of the image, and optionally the name of the secret containing the credentials needed to access the container registry. 3 List of plugins and their artifacts to add to the new container image. The plugins properties describe the type of artifact and the URL from which the artifact is downloaded. Each plugin must be configured with at least one artifact. Additionally, you can specify a SHA-512 checksum to verify the artifact before unpacking it. If you are using a Dockerfile to build an image, you can use AMQ Streams's latest container image as a base image to add your plugin configuration file. Example showing manual addition of plugin configuration Kafka Connect cluster configuration for workers You specify the configuration for workers in the config property of the KafkaConnect resource. A distributed Kafka Connect cluster has a group ID and a set of internal configuration topics. group.id offset.storage.topic config.storage.topic status.storage.topic Kafka Connect clusters are configured by default with the same values for these properties. Kafka Connect clusters cannot share the group ID or topic names as it will create errors. If multiple different Kafka Connect clusters are used, these settings must be unique for the workers of each Kafka Connect cluster created. The names of the connectors used by each Kafka Connect cluster must also be unique. In the following example worker configuration, JSON converters are specified. A replication factor is set for the internal Kafka topics used by Kafka Connect. This should be at least 3 for a production environment. Changing the replication factor after the topics have been created will have no effect. Example worker configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # ... spec: config: # ... 
group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 key.converter: org.apache.kafka.connect.json.JsonConverter 5 value.converter: org.apache.kafka.connect.json.JsonConverter 6 key.converter.schemas.enable: true 7 value.converter.schemas.enable: true 8 config.storage.replication.factor: 3 9 offset.storage.replication.factor: 3 10 status.storage.replication.factor: 3 11 # ... 1 The Kafka Connect cluster ID within Kafka. Must be unique for each Kafka Connect cluster. 2 Kafka topic that stores connector offsets. Must be unique for each Kafka Connect cluster. 3 Kafka topic that stores connector and task status configurations. Must be unique for each Kafka Connect cluster. 4 Kafka topic that stores connector and task status updates. Must be unique for each Kafka Connect cluster. 5 Converter to transform message keys into JSON format for storage in Kafka. 6 Converter to transform message values into JSON format for storage in Kafka. 7 Schema enabled for converting message keys into structured JSON format. 8 Schema enabled for converting message values into structured JSON format. 9 Replication factor for the Kafka topic that stores connector offsets. 10 Replication factor for the Kafka topic that stores connector and task status configurations. 11 Replication factor for the Kafka topic that stores connector and task status updates. KafkaConnector management of connectors After plugins have been added to the container image used for the worker pods in a deployment, you can use AMQ Streams's KafkaConnector custom resource or the Kafka Connect API to manage connector instances. You can also create new connector instances using these options. The KafkaConnector resource offers an OpenShift-native approach to management of connectors by the Cluster Operator. To manage connectors with KafkaConnector resources, you must specify an annotation in your KafkaConnect custom resource. Annotation to enable KafkaConnectors apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" # ... Setting use-connector-resources to true enables KafkaConnectors to create, delete, and reconfigure connectors. If use-connector-resources is enabled in your KafkaConnect configuration, you must use the KafkaConnector resource to define and manage connectors. KafkaConnector resources are configured to connect to external systems. They are deployed to the same OpenShift cluster as the Kafka Connect cluster and Kafka cluster interacting with the external data system. Kafka components are contained in the same OpenShift cluster The configuration specifies how connector instances connect to an external data system, including any authentication. You also need to state what data to watch. For a source connector, you might provide a database name in the configuration. You can also specify where the data should sit in Kafka by specifying a target topic name. Use tasksMax to specify the maximum number of tasks. For example, a source connector with tasksMax: 2 might split the import of source data into two tasks. 
Example KafkaConnector source connector configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: "/opt/kafka/LICENSE" 6 topic: my-topic 7 # ... 1 Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. 2 Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to. 3 Full name of the connector class. This should be present in the image being used by the Kafka Connect cluster. 4 Maximum number of Kafka Connect tasks that the connector can create. 5 Connector configuration as key-value pairs. 6 Location of the external data file. In this example, we're configuring the FileStreamSourceConnector to read from the /opt/kafka/LICENSE file. 7 Kafka topic to publish the source data to. Note You can load confidential configuration values for a connector from OpenShift Secrets or ConfigMaps. Kafka Connect API Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name> -connect-api:8083 , where <connect_cluster_name> is the name of your Kafka Connect cluster. You add the connector configuration as a JSON object. Example curl request to add connector configuration curl -X POST \ http://my-connect-cluster-connect-api:8083/connectors \ -H 'Content-Type: application/json' \ -d '{ "name": "my-source-connector", "config": { "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "file": "/opt/kafka/LICENSE", "topic":"my-topic", "tasksMax": "4", "type": "source" } }' If KafkaConnectors are enabled, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. The operations supported by the REST API are described in the Apache Kafka documentation . Note You can expose the Kafka Connect API service outside OpenShift. You do this by creating a service that uses a connection mechanism that provides the access, such as an ingress or route. Use advisedly as the connection is insecure. Additional resources Kafka Connect configuration options Kafka Connect configuration for multiple instances Extending Kafka Connect with plugins Creating a new container image automatically using AMQ Streams Creating a Docker image from the Kafka Connect base image Build schema reference Source and sink connector configuration options Loading configuration values from external sources 7.6. Kafka Bridge configuration A Kafka Bridge configuration requires a bootstrap server specification for the Kafka cluster it connects to, as well as any encryption and authentication options required. Kafka Bridge consumer and producer configuration is standard, as described in the Apache Kafka configuration documentation for consumers and Apache Kafka configuration documentation for producers . HTTP-related configuration options set the port connection which the server listens on. CORS The Kafka Bridge supports the use of Cross-Origin Resource Sharing (CORS). CORS is a HTTP mechanism that allows browser access to selected resources from more than one origin, for example, resources on different domains. 
If you choose to use CORS, you can define a list of allowed resource origins and HTTP methods for interaction with the Kafka cluster through the Kafka Bridge. The lists are defined in the http specification of the Kafka Bridge configuration. CORS allows for simple and preflighted requests between origin sources on different domains. A simple request is a HTTP request that must have an allowed origin defined in its header. A preflighted request sends an initial OPTIONS HTTP request before the actual request to check that the origin and the method are allowed. Example YAML showing Kafka Bridge configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... bootstrapServers: my-cluster-kafka:9092 http: port: 8080 cors: allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" consumer: config: auto.offset.reset: earliest producer: config: delivery.timeout.ms: 300000 # ... Additional resources Fetch CORS specification
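As an illustration of how a client might exercise this configuration, the following curl request produces a record through the Kafka Bridge HTTP API. This is a sketch only: the service address my-bridge-bridge-service is an assumed in-cluster name and the topic is hypothetical, while the port and allowed methods come from the example above.

curl -X POST \
  http://my-bridge-bridge-service:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{ "records": [ { "key": "my-key", "value": "hello" } ] }'

A browser-based client on a different origin would additionally need that origin to appear in allowedOrigins, and POST to appear in allowedMethods, for the CORS checks described above to pass.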
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 replicas: 1 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-cluster spec: # bootstrapServers: my-cluster-kafka-bootstrap:9092 resources: requests: cpu: 12 memory: 64Gi limits: cpu: 12 memory: 64Gi logging: type: inline loggers: connect.root.logger.level: \"INFO\" readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: \"-Xmx\": \"2g\" \"-Xms\": \"2g\" template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true authentication: type: tls # storage: type: persistent-claim size: 10000Gi # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.1.0 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {} topicsPattern: \".*\" groupsPattern: \"group1|group2|group3\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: # consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 groupId: \"my-group\" numStreams: 2 offsetCommitInterval: 120000 # producer: # abortOnSendFailure: false # include: \"my-topic|other-topic\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" spec: # build: 1 output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https:// ARTIFACT-ADDRESS .tgz sha512sum: HASH-NUMBER-TO-VERIFY-ARTIFACT # #", "FROM registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: config: # group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 key.converter: org.apache.kafka.connect.json.JsonConverter 5 value.converter: org.apache.kafka.connect.json.JsonConverter 6 key.converter.schemas.enable: true 7 value.converter.schemas.enable: true 8 config.storage.replication.factor: 3 9 offset.storage.replication.factor: 3 10 status.storage.replication.factor: 3 11 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: \"/opt/kafka/LICENSE\" 6 topic: my-topic 7 #", 
"curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # bootstrapServers: my-cluster-kafka:9092 http: port: 8080 cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" consumer: config: auto.offset.reset: earliest producer: config: delivery.timeout.ms: 300000 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/amq_streams_on_openshift_overview/configuration-points_str
Chapter 22. Using the Node Observability Operator
Chapter 22. Using the Node Observability Operator The Node Observability Operator collects and stores CRI-O and Kubelet profiling data, or metrics gathered by scripts, from compute nodes. With the Node Observability Operator, you can query the profiling data, enabling analysis of performance trends in CRI-O and Kubelet. It supports debugging performance-related issues and executing embedded scripts for network metrics by using the run field in the custom resource definition. To enable CRI-O and Kubelet profiling or scripting, you can configure the type field in the custom resource definition. Important The Node Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 22.1. Workflow of the Node Observability Operator The following workflow outlines how to query the profiling data using the Node Observability Operator: Install the Node Observability Operator in the OpenShift Container Platform cluster. Create a NodeObservability custom resource to enable the CRI-O profiling on the worker nodes of your choice. Run the profiling query to generate the profiling data. 22.2. Installing the Node Observability Operator The Node Observability Operator is not installed in OpenShift Container Platform by default. You can install the Node Observability Operator by using the OpenShift Container Platform CLI or the web console. 22.2.1. Installing the Node Observability Operator using the CLI You can install the Node Observability Operator by using the OpenShift CLI (oc). Prerequisites You have installed the OpenShift CLI (oc). You have access to the cluster with cluster-admin privileges. Procedure Confirm that the Node Observability Operator is available by running the following command: USD oc get packagemanifests -n openshift-marketplace node-observability-operator Example output NAME CATALOG AGE node-observability-operator Red Hat Operators 9h Create the node-observability-operator namespace by running the following command: USD oc new-project node-observability-operator Create an OperatorGroup object YAML file: cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF Create a Subscription object YAML file to subscribe a namespace to an Operator: cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Verification View the install plan name by running the following command: USD oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name' Example output install-dt54w Verify the install plan status by running the following command: USD oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase' <install_plan_name> is the install plan name that you obtained from the output of the command.
Example output COMPLETE Verify that the Node Observability Operator is up and running: USD oc get deploy -n node-observability-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h 22.2.2. Installing the Node Observability Operator using the web console You can install the Node Observability Operator from the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. In the Administrator's navigation panel, expand Operators OperatorHub . In the All items field, enter Node Observability Operator and select the Node Observability Operator tile. Click Install . On the Install Operator page, configure the following settings: In the Update channel area, click alpha . In the Installation mode area, click A specific namespace on the cluster . From the Installed Namespace list, select node-observability-operator . In the Update approval area, select Automatic . Click Install . Verification In the Administrator's navigation panel, expand Operators Installed Operators . Verify that the Node Observability Operator is listed in the Operators list. 22.3. Requesting CRI-O and Kubelet profiling data using the Node Observability Operator Create a NodeObservability custom resource to collect CRI-O and Kubelet profiling data. 22.3.1. Creating the Node Observability custom resource You must create and run the NodeObservability custom resource (CR) before you run the profiling query. When you run the NodeObservability CR, it creates the necessary machine config and machine config pool CRs to enable the CRI-O profiling on the worker nodes matching the nodeSelector . Important If CRI-O profiling is not enabled on the worker nodes, the NodeObservabilityMachineConfig resource gets created. Worker nodes matching the nodeSelector specified in the NodeObservability CR restart. This might take 10 or more minutes to complete. Note Kubelet profiling is enabled by default. The CRI-O unix socket of the node is mounted on the agent pod, which allows the agent to communicate with CRI-O to run the pprof request. Similarly, the kubelet-serving-ca certificate chain is mounted on the agent pod, which allows secure communication between the agent and the node's kubelet endpoint. Prerequisites You have installed the Node Observability Operator. You have installed the OpenShift CLI (oc). You have access to the cluster with cluster-admin privileges. Procedure Log in to the OpenShift Container Platform CLI by running the following command: USD oc login -u kubeadmin https://<HOSTNAME>:6443 Switch to the node-observability-operator namespace by running the following command: USD oc project node-observability-operator Create a CR file named nodeobservability.yaml that contains the following text: apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: crio-kubelet 1 You must specify the name as cluster because there should be only one NodeObservability CR per cluster. 2 Specify the nodes on which the Node Observability agent must be deployed.
Run the NodeObservability CR: oc apply -f nodeobservability.yaml Example output nodeobservability.olm.openshift.io/cluster created Review the status of the NodeObservability CR by running the following command: USD oc get nob/cluster -o yaml | yq '.status.conditions' Example output conditions: conditions: - lastTransitionTime: "2022-07-05T07:33:54Z" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: "True" type: Ready NodeObservability CR run is completed when the reason is Ready and the status is True . 22.3.2. Running the profiling query To run the profiling query, you must create a NodeObservabilityRun resource. The profiling query is a blocking operation that fetches CRI-O and Kubelet profiling data for a duration of 30 seconds. After the profiling query is complete, you must retrieve the profiling data inside the container file system /run/node-observability directory. The lifetime of data is bound to the agent pod through the emptyDir volume, so you can access the profiling data while the agent pod is in the running status. Important You can request only one profiling query at any point of time. Prerequisites You have installed the Node Observability Operator. You have created the NodeObservability custom resource (CR). You have access to the cluster with cluster-admin privileges. Procedure Create a NodeObservabilityRun resource file named nodeobservabilityrun.yaml that contains the following text: apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster Trigger the profiling query by running the NodeObservabilityRun resource: USD oc apply -f nodeobservabilityrun.yaml Review the status of the NodeObservabilityRun by running the following command: USD oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions' Example output conditions: - lastTransitionTime: "2022-07-07T14:57:34Z" message: Ready to start profiling reason: Ready status: "True" type: Ready - lastTransitionTime: "2022-07-07T14:58:10Z" message: Profiling query done reason: Finished status: "True" type: Finished The profiling query is complete once the status is True and type is Finished . Retrieve the profiling data from the container's /run/node-observability path by running the following bash script: for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo "agent USD{a}" mkdir -p "/tmp/USD{a}" for p in USD(oc exec "USD{a}" -c node-observability-agent -- bash -c "ls /run/node-observability/*.pprof"); do f="USD(basename USD{p})" echo "copying USD{f} to /tmp/USD{a}/USD{f}" oc exec "USD{a}" -c node-observability-agent -- cat "USD{p}" > "/tmp/USD{a}/USD{f}" done done 22.4. Node Observability Operator scripting Scripting allows you to run pre-configured bash scripts, using the current Node Observability Operator and Node Observability Agent. These scripts monitor key metrics like CPU load, memory pressure, and worker node issues. They also collect sar reports and custom performance metrics. 22.4.1. Creating the Node Observability custom resource for scripting You must create and run the NodeObservability custom resource (CR) before you run the scripting. When you run the NodeObservability CR, it enables the agent in scripting mode on the compute nodes matching the nodeSelector label. Prerequisites You have installed the Node Observability Operator. 
You have installed the OpenShift CLI ( oc ). You have access to the cluster with cluster-admin privileges. Procedure Log in to the OpenShift Container Platform cluster by running the following command: USD oc login -u kubeadmin https://<host_name>:6443 Switch to the node-observability-operator namespace by running the following command: USD oc project node-observability-operator Create a file named nodeobservability.yaml that contains the following content: apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: scripting 3 1 You must specify the name as cluster because there should be only one NodeObservability CR per cluster. 2 Specify the nodes on which the Node Observability agent must be deployed. 3 To deploy the agent in scripting mode, you must set the type to scripting . Create the NodeObservability CR by running the following command: USD oc apply -f nodeobservability.yaml Example output nodeobservability.olm.openshift.io/cluster created Review the status of the NodeObservability CR by running the following command: USD oc get nob/cluster -o yaml | yq '.status.conditions' Example output conditions: conditions: - lastTransitionTime: "2022-07-05T07:33:54Z" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityScripting ready: true' reason: Ready status: "True" type: Ready The NodeObservability CR run is completed when the reason is Ready and status is "True" . 22.4.2. Configuring Node Observability Operator scripting Prerequisites You have installed the Node Observability Operator. You have created the NodeObservability custom resource (CR). You have access to the cluster with cluster-admin privileges. Procedure Create a file named nodeobservabilityrun-script.yaml that contains the following content: apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun-script namespace: node-observability-operator spec: nodeObservabilityRef: name: cluster type: scripting Important You can request only the following scripts: metrics.sh network-metrics.sh (uses monitor.sh ) Trigger the scripting by creating the NodeObservabilityRun resource with the following command: USD oc apply -f nodeobservabilityrun-script.yaml Review the status of the NodeObservabilityRun scripting by running the following command: USD oc get nodeobservabilityrun nodeobservabilityrun-script -o yaml | yq '.status.conditions' Example output Status: Agents: Ip: 10.128.2.252 Name: node-observability-agent-n2fpm Port: 8443 Ip: 10.131.0.186 Name: node-observability-agent-wcc8p Port: 8443 Conditions: Conditions: Last Transition Time: 2023-12-19T15:10:51Z Message: Ready to start profiling Reason: Ready Status: True Type: Ready Last Transition Time: 2023-12-19T15:11:01Z Message: Profiling query done Reason: Finished Status: True Type: Finished Finished Timestamp: 2023-12-19T15:11:01Z Start Timestamp: 2023-12-19T15:10:51Z The scripting is complete once Status is True and Type is Finished . 
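If the NodeObservabilityRun does not reach the Finished state, it can help to confirm that the agent daemon set created by the Operator is healthy on the selected nodes before you retrieve any data. The following check is only a minimal sketch; it assumes the node-observability-operator namespace and the node-observability-ds daemon set name that appear earlier in this chapter:
USD oc -n node-observability-operator get daemonset node-observability-ds
USD oc -n node-observability-operator get pods -o wide
Both commands are standard oc queries; the daemon set should report a ready agent pod for each node selected by the NodeObservability CR.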
Retrieve the scripting data from the root path of the container by running the following bash script: #!/bin/bash RUN=USD(oc get nodeobservabilityrun --no-headers | awk '{print USD1}') for a in USD(oc get nodeobservabilityruns.nodeobservability.olm.openshift.io/USD{RUN} -o json | jq .status.agents[].name); do echo "agent USD{a}" agent=USD(echo USD{a} | tr -d "\"\'\`") base_dir=USD(oc exec "USD{agent}" -c node-observability-agent -- bash -c "ls -t | grep node-observability-agent" | head -1) echo "USD{base_dir}" mkdir -p "/tmp/USD{agent}" for p in USD(oc exec "USD{agent}" -c node-observability-agent -- bash -c "ls USD{base_dir}"); do f="/USD{base_dir}/USD{p}" echo "copying USD{f} to /tmp/USD{agent}/USD{p}" oc exec "USD{agent}" -c node-observability-agent -- cat USD{f} > "/tmp/USD{agent}/USD{p}" done done 22.5. Additional resources For more information on how to collect worker metrics, see Red Hat Knowledgebase article .
[ "oc get packagemanifests -n openshift-marketplace node-observability-operator", "NAME CATALOG AGE node-observability-operator Red Hat Operators 9h", "oc new-project node-observability-operator", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name'", "install-dt54w", "oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase'", "COMPLETE", "oc get deploy -n node-observability-operator", "NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h", "oc login -u kubeadmin https://<HOSTNAME>:6443", "oc project node-observability-operator", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: crio-kubelet", "apply -f nodeobservability.yaml", "nodeobservability.olm.openshift.io/cluster created", "oc get nob/cluster -o yaml | yq '.status.conditions'", "conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: \"True\" type: Ready", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster", "oc apply -f nodeobservabilityrun.yaml", "oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions'", "conditions: - lastTransitionTime: \"2022-07-07T14:57:34Z\" message: Ready to start profiling reason: Ready status: \"True\" type: Ready - lastTransitionTime: \"2022-07-07T14:58:10Z\" message: Profiling query done reason: Finished status: \"True\" type: Finished", "for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo \"agent USD{a}\" mkdir -p \"/tmp/USD{a}\" for p in USD(oc exec \"USD{a}\" -c node-observability-agent -- bash -c \"ls /run/node-observability/*.pprof\"); do f=\"USD(basename USD{p})\" echo \"copying USD{f} to /tmp/USD{a}/USD{f}\" oc exec \"USD{a}\" -c node-observability-agent -- cat \"USD{p}\" > \"/tmp/USD{a}/USD{f}\" done done", "oc login -u kubeadmin https://<host_name>:6443", "oc project node-observability-operator", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservability metadata: name: cluster 1 spec: nodeSelector: kubernetes.io/hostname: <node_hostname> 2 type: scripting 3", "oc apply -f nodeobservability.yaml", "nodeobservability.olm.openshift.io/cluster created", "oc get nob/cluster -o yaml | yq '.status.conditions'", "conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityScripting ready: true' reason: Ready status: \"True\" type: Ready", "apiVersion: nodeobservability.olm.openshift.io/v1alpha2 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun-script namespace: node-observability-operator spec: nodeObservabilityRef: name: 
cluster type: scripting", "oc apply -f nodeobservabilityrun-script.yaml", "oc get nodeobservabilityrun nodeobservabilityrun-script -o yaml | yq '.status.conditions'", "Status: Agents: Ip: 10.128.2.252 Name: node-observability-agent-n2fpm Port: 8443 Ip: 10.131.0.186 Name: node-observability-agent-wcc8p Port: 8443 Conditions: Conditions: Last Transition Time: 2023-12-19T15:10:51Z Message: Ready to start profiling Reason: Ready Status: True Type: Ready Last Transition Time: 2023-12-19T15:11:01Z Message: Profiling query done Reason: Finished Status: True Type: Finished Finished Timestamp: 2023-12-19T15:11:01Z Start Timestamp: 2023-12-19T15:10:51Z", "#!/bin/bash RUN=USD(oc get nodeobservabilityrun --no-headers | awk '{print USD1}') for a in USD(oc get nodeobservabilityruns.nodeobservability.olm.openshift.io/USD{RUN} -o json | jq .status.agents[].name); do echo \"agent USD{a}\" agent=USD(echo USD{a} | tr -d \"\\\"\\'\\`\") base_dir=USD(oc exec \"USD{agent}\" -c node-observability-agent -- bash -c \"ls -t | grep node-observability-agent\" | head -1) echo \"USD{base_dir}\" mkdir -p \"/tmp/USD{agent}\" for p in USD(oc exec \"USD{agent}\" -c node-observability-agent -- bash -c \"ls USD{base_dir}\"); do f=\"/USD{base_dir}/USD{p}\" echo \"copying USD{f} to /tmp/USD{agent}/USD{p}\" oc exec \"USD{agent}\" -c node-observability-agent -- cat USD{f} > \"/tmp/USD{agent}/USD{p}\" done done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/using-node-observability-operator
Image APIs
Image APIs OpenShift Container Platform 4.15 Reference guide for image APIs Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/image_apis/index
Chapter 3. Managing GFS2
Chapter 3. Managing GFS2 This chapter describes the tasks and commands for managing GFS2 and consists of the following sections: Section 3.1, "Creating a GFS2 File System" Section 3.2, "Mounting a GFS2 File System" Section 3.3, "Unmounting a GFS2 File System" Section 3.4, "GFS2 Quota Management" Section 3.5, "Growing a GFS2 File System" Section 3.6, "Adding Journals to a GFS2 File System" Section 3.7, "Data Journaling" Section 3.8, "Configuring atime Updates" Section 3.9, "Suspending Activity on a GFS2 File System" Section 3.10, "Repairing a GFS2 File System" Section 3.11, "The GFS2 Withdraw Function" 3.1. Creating a GFS2 File System You create a GFS2 file system with the mkfs.gfs2 command. You can also use the mkfs command with the -t gfs2 option specified. A file system is created on an activated LVM volume. The following information is required to run the mkfs.gfs2 command: Lock protocol/module name (the lock protocol for a cluster is lock_dlm ) Cluster name (needed when specifying the LockTableName parameter) Number of journals (one journal required for each node that may be mounting the file system) When creating a GFS2 file system, you can use the mkfs.gfs2 command directly, or you can use the mkfs command with the -t parameter specifying a file system of type gfs2 , followed by the GFS2 file system options. Note Once you have created a GFS2 file system with the mkfs.gfs2 command, you cannot decrease the size of the file system. You can, however, increase the size of an existing file system with the gfs2_grow command, as described in Section 3.5, "Growing a GFS2 File System" . Usage When creating a clustered GFS2 file system, you can use either of the following formats: When creating a local GFS2 file system, you can use either of the following formats: Note As of the Red Hat Enterprise Linux 6 release, Red Hat does not support the use of GFS2 as a single-node file system. Warning Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption. LockProtoName Specifies the name of the locking protocol to use. The lock protocol for a cluster is lock_dlm . LockTableName This parameter is specified for a GFS2 file system in a cluster configuration. It has two parts separated by a colon (no spaces) as follows: ClusterName:FSName ClusterName , the name of the cluster for which the GFS2 file system is being created. FSName , the file system name, can be 1 to 16 characters long. The name must be unique for all lock_dlm file systems over the cluster, and for all file systems ( lock_dlm and lock_nolock ) on each local node. Number Specifies the number of journals to be created by the mkfs.gfs2 command. One journal is required for each node that mounts the file system. For GFS2 file systems, more journals can be added later without growing the file system, as described in Section 3.6, "Adding Journals to a GFS2 File System" . BlockDevice Specifies a logical or physical volume. Examples In these examples, lock_dlm is the locking protocol that the file system uses, since this is a clustered file system. The cluster name is alpha , and the file system name is mydata1 . The file system contains eight journals and is created on /dev/vg01/lvol0 . In these examples, a second lock_dlm file system is made, which can be used in cluster alpha . The file system name is mydata2 . The file system contains eight journals and is created on /dev/vg01/lvol1 . 
Complete Options Table 3.1, "Command Options: mkfs.gfs2 " describes the mkfs.gfs2 command options (flags and parameters). Table 3.1. Command Options: mkfs.gfs2 Flag Parameter Description -c Megabytes Sets the initial size of each journal's quota change file to Megabytes . -D Enables debugging output. -h Help. Displays available options. -J Megabytes Specifies the size of the journal in megabytes. Default journal size is 128 megabytes. The minimum size is 8 megabytes. Larger journals improve performance, although they use more memory than smaller journals. -j Number Specifies the number of journals to be created by the mkfs.gfs2 command. One journal is required for each node that mounts the file system. If this option is not specified, one journal will be created. For GFS2 file systems, you can add additional journals at a later time without growing the file system. -O Prevents the mkfs.gfs2 command from asking for confirmation before writing the file system. -p LockProtoName Specifies the name of the locking protocol to use. Recognized locking protocols include: lock_dlm - The standard locking module, required for a clustered file system. lock_nolock - Used when GFS2 is acting as a local file system (one node only). -q Quiet. Do not display anything. -r Megabytes Specifies the size of the resource groups in megabytes. The minimum resource group size is 32 megabytes. The maximum resource group size is 2048 megabytes. A large resource group size may increase performance on very large file systems. If this is not specified, mkfs.gfs2 chooses the resource group size based on the size of the file system: average size file systems will have 256 megabyte resource groups, and bigger file systems will have bigger RGs for better performance. -t LockTableName A unique identifier that specifies the lock table field when you use the lock_dlm protocol; the lock_nolock protocol does not use this parameter. This parameter has two parts separated by a colon (no spaces) as follows: ClusterName:FSName . ClusterName is the name of the cluster for which the GFS2 file system is being created; only members of this cluster are permitted to use this file system. FSName , the file system name, can be 1 to 16 characters in length, and the name must be unique among all file systems in the cluster. -u Megabytes Specifies the initial size of each journal's unlinked tag file. -V Displays command version information.
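As an illustration of how the options in Table 3.1 can be combined, the following hypothetical command creates a clustered file system for cluster alpha with eight journals, a larger 256-megabyte journal size, and 512-megabyte resource groups. The file system name and block device here are placeholders and are not part of the earlier examples:
mkfs.gfs2 -p lock_dlm -t alpha:mydata3 -j 8 -J 256 -r 512 /dev/vg01/lvol2
The -J and -r values shown here are only an example of tuning for a larger file system; the defaults described in the table are appropriate for most deployments.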
[ "mkfs.gfs2 -p LockProtoName -t LockTableName -j NumberJournals BlockDevice", "mkfs -t gfs2 -p LockProtoName -t LockTableName -j NumberJournals BlockDevice", "mkfs.gfs2 -p LockProtoName -j NumberJournals BlockDevice", "mkfs -t gfs2 -p LockProtoName -j NumberJournals BlockDevice", "mkfs.gfs2 -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0", "mkfs -t gfs2 -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0", "mkfs.gfs2 -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1", "mkfs -t gfs2 -p lock_dlm -t alpha:mydata2 -j 8 /dev/vg01/lvol1" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/global_file_system_2/ch-manage
6.3. Confining Existing Linux Users: semanage login
6.3. Confining Existing Linux Users: semanage login If a Linux user is mapped to the SELinux unconfined_u user (the default behavior), and you would like to change which SELinux user they are mapped to, use the semanage login command. The following example creates a new Linux user named newuser , then maps that Linux user to the SELinux user_u user: As the Linux root user, run the useradd newuser command to create a new Linux user ( newuser ). Since this user uses the default mapping, it does not appear in the semanage login -l output: To map the Linux newuser user to the SELinux user_u user, run the following command as the Linux root user: The -a option adds a new record, and the -s option specifies the SELinux user to map a Linux user to. The last argument, newuser , is the Linux user you want mapped to the specified SELinux user. To view the mapping between the Linux newuser user and user_u , run the semanage login -l command as the Linux root user: As the Linux root user, run the passwd newuser command to assign a password to the Linux newuser user: Log out of your current session, and log in as the Linux newuser user. Run the id -Z command to view the newuser 's SELinux context: Log out of the Linux newuser 's session, and log back in with your account. If you do not want the Linux newuser user, run the userdel -r newuser command as the Linux root user to remove it, along with its home directory. Run the semanage login -d newuser command to remove the mapping between the Linux newuser user and user_u :
[ "~]# useradd newuser ~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023", "~]# semanage login -a -s user_u newuser", "~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 newuser user_u s0 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023", "~]# passwd newuser Changing password for user newuser. New password: Enter a password Retype new password: Enter the same password again passwd: all authentication tokens updated successfully.", "~]USD id -Z user_u:user_r:user_t:s0", "~]# userdel -r newuser ~]# semanage login -d newuser ~]# semanage login -l Login Name SELinux User MLS/MCS Range __default__ unconfined_u s0-s0:c0.c1023 root unconfined_u s0-s0:c0.c1023 system_u system_u s0-s0:c0.c1023" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-confining_users-confining_existing_linux_users_semanage_login
Chapter 91. ZipArtifact schema reference
Chapter 91. ZipArtifact schema reference Used in: Plugin Property Description url URL of the artifact which will be downloaded. AMQ Streams does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. string sha512sum SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. string insecure By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure. boolean type Must be zip . string
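As a sketch of how these properties fit together, the following fragment shows a zip artifact inside a connector plugin's artifacts list. The plugin name, URL, and checksum are hypothetical placeholders, and the surrounding KafkaConnect build configuration is omitted:
plugins:
  - name: my-connector
    artifacts:
      - type: zip
        url: https://my-domain.example/releases/my-connector.zip
        sha512sum: <sha512_checksum>
If the artifact is served over TLS that cannot be verified, insecure: true can be added to the artifact, but as described above this disables all TLS verification for the download and should be avoided where possible.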
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ZipArtifact-reference
2.2. Virtual Performance Monitoring Unit (vPMU)
2.2. Virtual Performance Monitoring Unit (vPMU) The virtual performance monitoring unit (vPMU) displays statistics which indicate how a guest virtual machine is functioning. The virtual performance monitoring unit allows users to identify sources of possible performance problems in their guest virtual machines. The vPMU is based on Intel's PMU (Performance Monitoring Units) and can only be used on Intel machines. This feature is only supported with guest virtual machines running Red Hat Enterprise Linux 6 or Red Hat Enterprise Linux 7 and is disabled by default. To verify if the vPMU is supported on your system, check for the arch_perfmon flag on the host CPU by running: To enable the vPMU, specify the cpu mode in the guest XML as host-passthrough : After the vPMU is enabled, display a virtual machine's performance statistics by running the perf command from the guest virtual machine.
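As a sketch of what that looks like in practice, the following perf invocation, run inside the guest after the vPMU is enabled, counts a few common hardware events over a five-second interval; the event list and duration are arbitrary examples:
perf stat -e cycles,instructions,cache-misses sleep 5
If the vPMU is not available to the guest, perf typically reports these hardware events as not supported instead of printing counts.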
[ "cat /proc/cpuinfo|grep arch_perfmon", "virsh dumpxml guest_name |grep \"cpu mode\" <cpu mode='host-passthrough'>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-monitoring_tools-vpmu
Chapter 4. Managing networking in the web console
Chapter 4. Managing networking in the web console The web console supports basic network configuration. You can: Configure IPv4/IPv6 network settings Manage network bridges Manage VLANs Manage Teams Manage Bonds Inspect a network log Note The web console is built on top of the NetworkManager service. For details, see Getting started with NetworkManager . 4.1. Prerequisites The web console is installed and enabled. For details, see Installing the web console . 4.2. Configuring network bridges in the web console Network bridges are used to connect multiple interfaces to one subnet with the same range of IP addresses. 4.2.1. Adding bridges in the web console This section describes creating a software bridge on multiple network interfaces using the web console. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . Open Networking . Click the Add Bridge button. In the Bridge Settings dialog box, enter a name for the new bridge. In the Port field, select the interfaces which you want to put in the same subnet. Optionally, you can select the Spanning Tree protocol (STP) to avoid bridge loops and broadcast radiation. If you do not have a strong preference, leave the predefined values as they are. Click Create . If the bridge is successfully created, the web console displays the new bridge in the Networking section. Check the values in the Sending and Receiving columns in the new bridge row. If you can see that zero bytes are sent and received through the bridge, the connection does not work correctly and you need to adjust the network settings. 4.2.2. Configuring a static IP address in the web console The IP address for your system can be assigned automatically from the pool by the DHCP server, or you can configure the IP address manually. A manually configured IP address is not influenced by the DHCP server settings. This section describes configuring static IPv4 addresses of a network bridge using the RHEL web console. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . Open the Networking section. Click the interface where you want to set the static IP address. In the interface details screen, click the IPv4 configuration. In the IPv4 Settings dialog box, select Manual in the Addresses drop-down list. Click Apply . In the Addresses field, enter the desired IP address, netmask, and gateway. Click Apply . At this point, the IP address has been configured and the interface uses the new static IP address. 4.2.3. Removing interfaces from the bridge using the web console Network bridges can include multiple interfaces. You can remove them from the bridge. Each removed interface is automatically changed to a standalone interface. This section describes removing a network interface from a software bridge created on a RHEL 7 system. Prerequisites Having a bridge with multiple interfaces in your system. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . Open Networking . Click the bridge you want to configure. In the bridge settings screen, scroll down to the table of ports (interfaces). Select the interface and click the - icon. The RHEL web console removes the interface from the bridge and you can see it back in the Networking section as a standalone interface. 4.2.4. Deleting bridges in the web console You can delete a software network bridge in the RHEL web console. All network interfaces included in the bridge will be changed automatically to standalone interfaces. 
Prerequisites Having a bridge in your system. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . Open the Networking section. Click the bridge you want to configure. In the bridge settings screen, scroll down to the table of ports. Click Delete . At this stage, go back to Networking and verify that all the network interfaces are displayed on the Interfaces tab. Interfaces which were part of the bridge can be inactive now. Therefore, you may need to activate them and set network parameters manually. 4.3. Configuring VLANs in the web console VLANs (Virtual LANs) are virtual networks created on a single physical Ethernet interface. Each VLAN is defined by an ID, which is a unique positive integer, and works as a standalone interface. The following procedure describes creating VLANs in the RHEL web console. Prerequisites Having a network interface in your system. Procedure Log in to the RHEL web console. For details, see Logging in to the web console . Open Networking . Click the Add VLAN button. In the VLAN Settings dialog box, select the physical interface for which you want to create a VLAN. Enter the VLAN Id or just use the predefined number. In the Name field, you can see a predefined name consisting of the parent interface and the VLAN Id. If you do not need to change it, leave the name as it is. Click Apply . The new VLAN is created. Click the VLAN to configure its network settings.
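Because the web console is built on top of NetworkManager, you can also cross-check a bridge or VLAN created in the console from a shell. This is only a verification aid and assumes nothing beyond the standard NetworkManager client tools:
nmcli connection show
nmcli device status
The new bridge or VLAN should appear in both listings; all of the configuration described in this chapter can still be performed entirely in the web console.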
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/managing-networking-in-the-web-console_system-management-using-the-rhel-7-web-console
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations In OpenShift Container Platform 4.14, you can install a cluster on bare metal infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. When you customize OpenShift Container Platform networking, you must set most of the network configuration parameters during installation. You can modify only kubeProxy network configuration parameters in a running cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 3.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. 
This provides smaller, more resource-efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can run either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 3.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 
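One common way to implement such a mechanism, shown here only as a sketch, is to review and approve pending CSRs manually with the oc client after the nodes join the cluster:
USD oc get csr
USD oc adm certificate approve <csr_name>
Replace <csr_name> with the name of a pending request listed by the first command. The full procedure is covered in the section about approving the certificate signing requests for your machines, referenced below.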
Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 3.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 3.3. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 3.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. 
These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. 
The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Validating DNS resolution for user-provisioned infrastructure 3.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. 
Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. 
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 3.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 3.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
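If you are not sure whether a suitable key pair already exists, a quick check such as the following can help. The file name shown is a common default and is only an example:
$ ls ~/.ssh/*.pub
$ ssh-keygen -l -f ~/.ssh/id_ed25519.pub
The second command prints the key length, fingerprint, and key type, which lets you confirm which algorithm an existing key uses before you decide whether to reuse it or generate a new one.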
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. Additional resources Verifying node health 3.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. 
However, you must have an active subscription to access this page. 3.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.9. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. 
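Before you create the installation configuration files, it can be useful to confirm that the installation program and the OpenShift CLI that you downloaded come from the same minor release. A possible spot check, assuming both binaries are in your working directory or on your PATH, looks like this:
$ ./openshift-install version
$ oc version --client
If the reported versions belong to different releases, download matching artifacts before you continue, for the same reason that reusing installation files from an earlier version is discouraged.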
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 3.9.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. 
To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 3.10. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". 
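As an illustration of a phase 1 customization, the following install-config.yaml fragment sets all four of the fields listed above. The CIDR values are examples only and must be replaced with ranges that suit your environment:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.1.0/24
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16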
Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.11. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 3.12. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.12.1. 
Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 3.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. 
For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.13. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. 
You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.14. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.16. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . 
ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.17. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.18. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.13. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. 
Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
3.14. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking.
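For example, from the live shell prompt you might inspect the environment before committing to an installation. This is only a sketch of the kinds of commands that are available in the live environment:
$ lsblk 1
$ nmcli device status 2
$ coreos-installer install --help 3
1 Identify the disk that you intend to install to.
2 Review the network interfaces and their current state.
3 List the options that coreos-installer supports, such as the Ignition, partitioning, and networking flags.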
In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 3.14.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. 
RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. 
You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.14.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available.
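One way to check all three Ignition config files in a single pass, reusing the same <HTTP_server> placeholder, is a small shell loop. This is a convenience sketch rather than a required step:
$ for role in bootstrap master worker; do curl -skI "http://<HTTP_server>/${role}.ign" | head -n 1; done
Each iteration should print an HTTP 200 status line. Any other status indicates that the corresponding Ignition config file is not being served correctly.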
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. 
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran.
Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.14.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 3.14.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system.
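As an illustration of the networking step in this procedure, the following sketch shows one way to assign a static address from the live shell with nmcli before running coreos-installer with the --copy-network option. The connection name, interface values, and addresses are placeholders chosen for the example, not values required by the procedure:
$ sudo nmcli connection modify 'Wired connection 1' \
    ipv4.method manual \
    ipv4.addresses 10.10.10.2/24 \
    ipv4.gateway 10.10.10.254 \
    ipv4.dns 4.4.4.41
$ sudo nmcli connection up 'Wired connection 1'
$ sudo coreos-installer install --copy-network \
    --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>
Because the modified profile is written under /etc/NetworkManager/system-connections, the --copy-network option carries it over to the installed system.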
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 3.14.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 3.14.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. 
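On a node that uses the default layout, you can confirm that nodefs and imagefs resolve to the same root filesystem before deciding whether to add a separate partition. This is an optional check, not part of the documented procedure, and the node name is a placeholder:
$ oc debug node/<node_name> -- chroot /host findmnt -T /var/lib/kubelet -o SOURCE,TARGET
$ oc debug node/<node_name> -- chroot /host findmnt -T /var/lib/containers -o SOURCE,TARGET
If both commands report the root filesystem as the source, the kubelet is monitoring a single filesystem for both identifiers.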
The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The files in the <installation_directory>/manifests and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. Next steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 3.14.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number.
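One way to see the labels and indexes that are available on the target disk is to inspect it from the live environment before you start the installation. This is an illustrative sketch; the device path is a placeholder:
$ lsblk -o NAME,SIZE,TYPE,FSTYPE,LABEL,PARTLABEL /dev/disk/by-id/scsi-<serial_number>
The PARTLABEL column supplies values for the label-based options shown in the examples that follow, and the partition numbers visible in the NAME column supply values for the index-based options.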
Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 3.14.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 3.14.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.14 boot image use a default console that is meant to accommodate most virtualized and bare metal setups. Different cloud and virtualization platforms may use different default settings depending on the chosen architecture.
Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 3.14.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 3.14.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations. The customize subcommand is a general purpose tool that can embed other types of customizations as well.
The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 3.14.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 3.14.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . Your customizations are applied and affect every subsequent boot of the ISO image. 
Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 3.14.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.14.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 3.14.3.8. Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. 
When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 3.14.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 3.14.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. 
Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.14.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 3.14.3.9. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. 
The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 3.14.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . 
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond, and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command ( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 3.14.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 3.20. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified destination device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections .
In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. 
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 3.14.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 3.21. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot.
For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 3.14.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... 
To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 3.14.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.14.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-containers.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 3.15. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available.
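Before you start monitoring, it can also be useful to confirm that the DNS records and the load balancer in front of the API are resolving and forwarding traffic. This is an optional sanity check, not part of the documented procedure; the host names are the placeholders used elsewhere in this document, and early in the bootstrap process the API is served by the bootstrap machine, so the second command might not succeed until that machine has finished booting:
$ dig +short api.<cluster_name>.<base_domain>
$ curl -k https://api.<cluster_name>.<base_domain>:6443/version
Any HTTP response from the second command, even an authorization error, indicates that the load balancer is reaching an API server.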
Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.17. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. 
If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 3.18. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
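The procedure below watches the complete table of cluster Operators until they all report as available. As an optional alternative, you can block until every Operator reports the Available condition by using the oc wait command. This is a sketch only; it assumes that your oc client supports oc wait with these flags, and it does not replace the watch in the procedure because it does not check the PROGRESSING or DEGRADED columns.

# Optional: wait until every cluster Operator reports Available=True,
# or fail after the timeout expires.
oc wait clusteroperators --all --for=condition=Available=True --timeout=30m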
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 3.18.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.18.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.18.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.19. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
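If you configured block registry storage as described in the previous section, you can optionally confirm that the claim is bound and that the Image Registry Operator is healthy before you complete the installation. The following commands are a sketch, not part of the documented procedure:

# Confirm that the persistent volume claim created from pvc.yaml is Bound.
oc get pvc image-registry-storage -n openshift-image-registry
# Confirm that the image-registry cluster Operator reports Available.
oc get clusteroperator image-registry
# If the Operator bootstrapped itself as Removed, as described in "Image registry
# removed during installation", one way to switch it to Managed is:
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'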
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 3.20. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.21. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
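Before you move on to the next steps listed above, a final end-to-end health check can be useful. The following sketch is not part of the documented procedure and assumes that the kubeconfig you exported earlier is still active:

# Overall cluster version and status reported by the Cluster Version Operator.
oc get clusterversion
# All nodes should report Ready.
oc get nodes
# All cluster Operators should be Available and not Degraded.
oc get clusteroperators
# No certificate signing requests should remain in the Pending state.
oc get csr | grep Pending || echo "No pending CSRs"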
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.14.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "coreos-installer install --console=tty0 \\ 1 --console=ttyS0,<options> \\ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4", "coreos-installer iso reset rhcos-<version>-live.x86_64.iso", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer iso customize rhcos-<version>-live.x86_64.iso --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile 
bond0-proxy-em2.nmconnection", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition bootstrap.ign \\ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --dest-ignition <path> \\ 1 --dest-console tty0 \\ 2 --dest-console ttyS0,<options> \\ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \\ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --ignition-ca cert.pem -o rhcos-<version>-custom-initramfs.x86_64.img", "[connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto", "[connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond", "[connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond", "coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img --network-keyfile bond0.nmconnection --network-keyfile bond0-proxy-em1.nmconnection --network-keyfile bond0-proxy-em2.nmconnection -o rhcos-<version>-custom-initramfs.x86_64.img", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp", "bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "team=team0:em1,em2 ip=team0:dhcp", "mpathconf --enable && systemctl start multipathd.service", "coreos-installer install /dev/mapper/mpatha \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \\ 1 --ignition-url=http://host/worker.ign --append-karg rd.multipath=default --append-karg root=/dev/disk/by-label/dm-mpath-root --append-karg rw", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline rd.multipath=default root=/dev/disk/by-label/dm-mpath-root sh-4.2# exit", "variant: openshift version: 4.14.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-container.service 
enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 
4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_bare_metal/installing-bare-metal-network-customizations
Chapter 2. Installing the core components of Service Telemetry Framework
Chapter 2. Installing the core components of Service Telemetry Framework Before you install Service Telemetry Framework (STF), ensure that Red Hat OpenShift Container Platform (OCP) version 4.x is running and that you understand the core components of the framework. As part of the OCP installation planning process, ensure that the administrator provides persistent storage and enough resources to run the STF components on top of the OCP environment. Warning Red Hat OpenShift Container Platform version 4.3 or later is currently required for a successful installation of STF. 2.1. The core components of STF The following STF core components are managed by Operators: Prometheus and AlertManager ElasticSearch Smart Gateway AMQ Interconnect Each component has a corresponding Operator that you can use to load the various application components and objects. Additional resources For more information about Operators, see the Understanding Operators guide. 2.2. Preparing your OCP environment for STF As you prepare your OCP environment for STF, you must plan for persistent storage, adequate resources, and event storage: Ensure that persistent storage is available in your Red Hat OpenShift Container Platform cluster to permit a production grade deployment. For more information, see Section 2.2.1, "Persistent volumes" . Ensure that enough resources are available to run the Operators and the application containers. For more information, see Section 2.2.2, "Resource allocation" . To install ElasticSearch, you must use a community catalog source. If you do not want to use a community catalog or if you do not want to store events, see Section 2.3, "Deploying STF to the OCP environment" . STF uses ElasticSearch to store events, which requires a larger than normal vm.max_map_count . The vm.max_map_count value is set by default in Red Hat OpenShift Container Platform. For more information about how to edit the value of vm.max_map_count , see Section 2.2.3, "Node tuning operator" . 2.2.1. Persistent volumes STF uses persistent storage in OCP to instantiate the volumes dynamically so that Prometheus and ElasticSearch can store metrics and events. Additional resources For more information about configuring persistent storage for OCP, see Understanding persistent storage. 2.2.1.1. Using ephemeral storage Warning You can use ephemeral storage with STF. However, if you use ephemeral storage, you might experience data loss if a pod is restarted, updated, or rescheduled onto another node. Use ephemeral storage only for development or testing, and not production environments. Procedure To enable ephemeral storage for STF, set storageEphemeralEnabled: true in your ServiceTelemetry manifest. Additional resources For more information about enabling ephemeral storage for STF, see Section 4.6.1, "Configuring ephemeral storage" . 2.2.2. Resource allocation To enable the scheduling of pods within the OCP infrastructure, you need resources for the components that are running. If you do not allocate enough resources, pods remain in a Pending state because they cannot be scheduled. The amount of resources that you require to run STF depends on your environment and the number of nodes and clouds that you want to monitor. Additional resources For recommendations about sizing for metrics collection see https://access.redhat.com/articles/4907241 . For information about sizing requirements for ElasticSearch, see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html 2.2.3. 
Node tuning operator STF uses ElasticSearch to store events, which requires a larger than normal vm.max_map_count . The vm.max_map_count value is set by default in Red Hat OpenShift Container Platform. If you want to edit the value of vm.max_map_count , you cannot apply node tuning manually using the sysctl command because Red Hat OpenShift Container Platform manages nodes directly. To configure values and apply them to the infrastructure, you must use the node tuning operator. For more information, see Using the Node Tuning Operator . In an OCP deployment, the default node tuning operator specification provides the required profiles for ElasticSearch workloads or pods scheduled on nodes. To view the default cluster node tuning specification, run the following command: oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator The output of the default specification is documented at Default profiles set on a cluster . The assignment of profiles is managed in the recommend section where profiles are applied to a node when certain conditions are met. When scheduling ElasticSearch to a node in STF, one of the following profiles is applied: openshift-control-plane-es openshift-node-es When scheduling an ElasticSearch pod, there must be a label present that matches tuned.openshift.io/elasticsearch . If the label is present, one of the two profiles is assigned to the pod. No action is required by the administrator if you use the recommended Operator for ElasticSearch. If you use a custom-deployed ElasticSearch with STF, ensure that you add the tuned.openshift.io/elasticsearch label to all scheduled pods. Additional resources For more information about virtual memory usage by ElasticSearch, see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html For more information about how the profiles are applied to nodes, see Custom tuning specification . 2.3. Deploying STF to the OCP environment You can deploy STF to the OCP environment in one of two ways: Deploy STF and store events with ElasticSearch. For more information, see Section 2.3.1, "Deploying STF to the OCP environment with ElasticSearch" . Deploy STF without ElasticSearch and disable events support. For more information, see Section 2.3.2, "Deploying STF to the OCP environment without ElasticSearch" . 2.3.1. Deploying STF to the OCP environment with ElasticSearch Complete the following tasks: Section 2.3.3, "Creating a namespace" . Section 2.3.4, "Creating an OperatorGroup" . Section 2.3.5, "Enabling the OperatorHub.io Community Catalog Source" . Section 2.3.6, "Enabling Red Hat STF Operator Source" . Section 2.3.7, "Subscribing to the AMQ Certificate Manager Operator" . Section 2.3.8, "Subscribing to the Elastic Cloud on Kubernetes Operator" . Section 2.3.9, "Subscribing to the Service Telemetry Operator" . Section 2.3.10, "Creating a ServiceTelemetry object in OCP" . 2.3.2. Deploying STF to the OCP environment without ElasticSearch Complete the following tasks: Section 2.3.3, "Creating a namespace" . Section 2.3.4, "Creating an OperatorGroup" . Section 2.3.6, "Enabling Red Hat STF Operator Source" . Section 2.3.7, "Subscribing to the AMQ Certificate Manager Operator" . Section 2.3.9, "Subscribing to the Service Telemetry Operator" . Section 2.3.10, "Creating a ServiceTelemetry object in OCP" . 2.3.3. Creating a namespace Create a namespace to hold the STF components. 
The service-telemetry namespace is used throughout the documentation: Procedure Enter the following command: oc new-project service-telemetry 2.3.4. Creating an OperatorGroup Create an OperatorGroup in the namespace so that you can schedule the Operator pods. Procedure Enter the following command: oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: service-telemetry-operator-group namespace: service-telemetry spec: targetNamespaces: - service-telemetry EOF Additional resources For more information, see OperatorGroups . 2.3.5. Enabling the OperatorHub.io Community Catalog Source Before you install ElasticSearch, you must have access to the resources on the OperatorHub.io Community Catalog Source: Procedure Enter the following command: oc apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: operatorhubio-operators namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/operator-framework/upstream-community-operators:latest displayName: OperatorHub.io Operators publisher: OperatorHub.io EOF 2.3.6. Enabling Red Hat STF Operator Source Before you deploy STF on Red Hat OpenShift Container Platform, you must enable the operator source. Procedure Install an OperatorSource that contains the Service Telemetry Operator and the Smart Gateway Operator: oc apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorSource metadata: labels: opsrc-provider: redhat-operators-stf name: redhat-operators-stf namespace: openshift-marketplace spec: authorizationToken: {} displayName: Red Hat STF Operators endpoint: https://quay.io/cnr publisher: Red Hat registryNamespace: redhat-operators-stf type: appregistry EOF To validate the creation of your OperatorSource, use the oc get operatorsources command. A successful import results in the MESSAGE field returning a result of The object has been successfully reconciled . USD oc get -nopenshift-marketplace operatorsource redhat-operators-stf NAME TYPE ENDPOINT REGISTRY DISPLAYNAME PUBLISHER STATUS MESSAGE redhat-operators-stf appregistry https://quay.io/cnr redhat-operators-stf Red Hat STF Operators Red Hat Succeeded The object has been successfully reconciled To validate that the Operators are available from the catalog, use the oc get packagemanifest command: USD oc get packagemanifests | grep "Red Hat STF" smartgateway-operator Red Hat STF Operators 2m50s servicetelemetry-operator Red Hat STF Operators 2m50s 2.3.7. Subscribing to the AMQ Certificate Manager Operator You must subscribe to the AMQ Certificate Manager Operator before you deploy the other STF components because the AMQ Certificate Manager Operator runs globally-scoped and is not compatible with the dependency management of Operator Lifecycle Manager when used with other namespace-scoped operators. Procedure Subscribe to the AMQ Certificate Manager Operator, create the subscription, and validate the AMQ7 Certificate Manager: Note The AMQ Certificate Manager is installed globally for all namespaces, so the namespace value provided is openshift-operators . You might not see your amq7-cert-manager.v1.0.0 ClusterServiceVersion in the service-telemetry namespace for a few minutes until the processing executes against the namespace. 
oc apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq7-cert-manager namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: amq7-cert-manager source: redhat-operators sourceNamespace: openshift-marketplace EOF To validate your ClusterServiceVersion , use the oc get csv command. Ensure that amq7-cert-manager.v1.0.0 has a phase Succeeded . USD oc get --namespace openshift-operators csv NAME DISPLAY VERSION REPLACES PHASE amq7-cert-manager.v1.0.0 Red Hat Integration - AMQ Certificate Manager 1.0.0 Succeeded 2.3.8. Subscribing to the Elastic Cloud on Kubernetes Operator Before you install the Service Telemetry Operator and if you plan to store events in ElasticSearch, you must enable the Elastic Cloud Kubernetes Operator. Procedure Apply the following manifest to your OCP environment to enable the Elastic Cloud on Kubernetes Operator: oc apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elastic-cloud-eck namespace: service-telemetry spec: channel: stable installPlanApproval: Automatic name: elastic-cloud-eck source: operatorhubio-operators sourceNamespace: openshift-marketplace EOF To verify that the ClusterServiceVersion for ElasticSearch Cloud on Kubernetes succeeded , enter the oc get csv command: USD oc get csv NAME DISPLAY VERSION REPLACES PHASE elastic-cloud-eck.v1.1.0 Elastic Cloud on Kubernetes 1.1.0 elastic-cloud-eck.v1.0.1 Succeeded 2.3.9. Subscribing to the Service Telemetry Operator To instantiate an STF instance, create the ServiceTelemetry object to allow the Service Telemetry Operator to create the environment. Procedure To create the Service Telemetry Operator subscription, enter the oc apply -f command: oc apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: servicetelemetry-operator namespace: service-telemetry spec: channel: stable installPlanApproval: Automatic name: servicetelemetry-operator source: redhat-operators-stf sourceNamespace: openshift-marketplace EOF To validate the Service Telemetry Operator and the dependent operators, enter the following command: USD oc get csv --namespace service-telemetry NAME DISPLAY VERSION REPLACES PHASE amq7-cert-manager.v1.0.0 Red Hat Integration - AMQ Certificate Manager 1.0.0 Succeeded amq7-interconnect-operator.v1.2.0 Red Hat Integration - AMQ Interconnect 1.2.0 Succeeded elastic-cloud-eck.v1.1.0 Elastic Cloud on Kubernetes 1.1.0 elastic-cloud-eck.v1.0.1 Succeeded prometheusoperator.0.37.0 Prometheus Operator 0.37.0 prometheusoperator.0.32.0 Succeeded service-telemetry-operator.v1.0.2 Service Telemetry Operator 1.0.2 service-telemetry-operator.v1.0.1 Succeeded smart-gateway-operator.v1.0.1 Smart Gateway Operator 1.0.1 smart-gateway-operator.v1.0.0 Succeeded 2.3.10. Creating a ServiceTelemetry object in OCP To deploy the Service Telemetry Framework, you must create an instance of ServiceTelemetry in OCP. By default, eventsEnabled is set to false. If you do not want to store events in ElasticSearch, ensure that eventsEnabled is set to false. For more information, see Section 2.3.2, "Deploying STF to the OCP environment without ElasticSearch" . The following core parameters are available for a ServiceTelemetry manifest: Table 2.1. Core parameters for a ServiceTelemetry manifest Parameter Description Default Value eventsEnabled Enable events support in STF. Requires prerequisite steps to ensure ElasticSearch can be started. 
For more information, see Section 2.3.8, "Subscribing to the Elastic Cloud on Kubernetes Operator" . false metricsEnabled Enable metrics support in STF. true highAvailabilityEnabled Enable high availability in STF. For more information, see Section 4.3, "High availability" . false storageEphemeralEnabled Enable ephemeral storage support in STF. For more information, see Section 4.6, "Ephemeral storage" . false Procedure To store events in ElasticSearch, set eventsEnabled to true during deployment: oc apply -f - <<EOF apiVersion: infra.watch/v1alpha1 kind: ServiceTelemetry metadata: name: stf-default namespace: service-telemetry spec: eventsEnabled: true metricsEnabled: true EOF To view the STF deployment logs in the Service Telemetry Operator, use the oc logs command: oc logs USD(oc get pod --selector='name=service-telemetry-operator' -oname) -c ansible View the pods and the status of each pod to determine that all workloads are operating nominally: Note If you set eventsEnabled: true , the notification Smart Gateways will Error and CrashLoopBackOff for a period of time before ElasticSearch starts. USD oc get pods NAME READY STATUS RESTARTS AGE alertmanager-stf-default-0 2/2 Running 0 26m elastic-operator-645dc8b8ff-jwnzt 1/1 Running 0 88m elasticsearch-es-default-0 1/1 Running 0 26m interconnect-operator-6fd49d9fb9-4bl92 1/1 Running 0 46m prometheus-operator-bf7d97fb9-kwnlx 1/1 Running 0 46m prometheus-stf-default-0 3/3 Running 0 26m service-telemetry-operator-54f4c99d9b-k7ll6 2/2 Running 0 46m smart-gateway-operator-7ff58bcf94-66rvx 2/2 Running 0 46m stf-default-ceilometer-notification-smartgateway-6675df547q4lbj 1/1 Running 0 26m stf-default-collectd-notification-smartgateway-698c87fbb7-xj528 1/1 Running 0 26m stf-default-collectd-telemetry-smartgateway-79c967c8f7-9hsqn 1/1 Running 0 26m stf-default-interconnect-7458fd4d69-nqbfs 1/1 Running 0 26m 2.4. Removing STF from the OCP environment Remove STF from an OCP environment if you no longer require the STF functionality. Complete the following tasks: Section 2.4.1, "Deleting the namespace" . Section 2.4.2, "Removing the OperatorSource" . 2.4.1. Deleting the namespace To remove the operational resources for STF from OCP, delete the namespace. Procedure Run the oc delete command: oc delete project service-telemetry Verify that the resources have been deleted from the namespace: USD oc get all No resources found. 2.4.2. Removing the OperatorSource If you do not expect to install Service Telemetry Framework again, delete the OperatorSource. When you remove the OperatorSource, PackageManifests related to STF are removed from the Operator Lifecycle Manager catalog. Procedure Delete the OperatorSource: USD oc delete --namespace=openshift-marketplace operatorsource redhat-operators-stf operatorsource.operators.coreos.com "redhat-operators-stf" deleted Verify that the STF PackageManifests are removed from the platform. If successful, the following command returns no result: USD oc get packagemanifests | grep "Red Hat STF" If you enabled the OperatorHub.io Community Catalog Source during the installation process and you no longer need this catalog source, delete it: USD oc delete --namespace=openshift-marketplace catalogsource operatorhubio-operators catalogsource.operators.coreos.com "operatorhubio-operators" deleted Additional resources For more information about the OperatorHub.io Community Catalog Source, see Section 2.3, "Deploying STF to the OCP environment" .
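If you removed STF as described above, you can optionally confirm that no STF-related Operator artifacts remain in the cluster. This is a sketch only; the object names shown match the subscriptions and catalog sources that were created earlier in this chapter:

# The service-telemetry project should no longer exist; this command returns a
# NotFound error after the namespace finishes terminating.
oc get project service-telemetry
# No STF package manifests should remain in the catalog; this returns no output.
oc get packagemanifests | grep "Red Hat STF"
# The AMQ Certificate Manager Operator was installed globally in openshift-operators
# and is not removed by the steps above; it can be left in place if other workloads
# depend on it.
oc get csv --namespace openshift-operators | grep amq7-cert-manager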
[ "get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator", "new-project service-telemetry", "apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: service-telemetry-operator-group namespace: service-telemetry spec: targetNamespaces: - service-telemetry EOF", "apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: operatorhubio-operators namespace: openshift-marketplace spec: sourceType: grpc image: quay.io/operator-framework/upstream-community-operators:latest displayName: OperatorHub.io Operators publisher: OperatorHub.io EOF", "apply -f - <<EOF apiVersion: operators.coreos.com/v1 kind: OperatorSource metadata: labels: opsrc-provider: redhat-operators-stf name: redhat-operators-stf namespace: openshift-marketplace spec: authorizationToken: {} displayName: Red Hat STF Operators endpoint: https://quay.io/cnr publisher: Red Hat registryNamespace: redhat-operators-stf type: appregistry EOF", "oc get -nopenshift-marketplace operatorsource redhat-operators-stf NAME TYPE ENDPOINT REGISTRY DISPLAYNAME PUBLISHER STATUS MESSAGE redhat-operators-stf appregistry https://quay.io/cnr redhat-operators-stf Red Hat STF Operators Red Hat Succeeded The object has been successfully reconciled", "oc get packagemanifests | grep \"Red Hat STF\" smartgateway-operator Red Hat STF Operators 2m50s servicetelemetry-operator Red Hat STF Operators 2m50s", "apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq7-cert-manager namespace: openshift-operators spec: channel: alpha installPlanApproval: Automatic name: amq7-cert-manager source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get --namespace openshift-operators csv NAME DISPLAY VERSION REPLACES PHASE amq7-cert-manager.v1.0.0 Red Hat Integration - AMQ Certificate Manager 1.0.0 Succeeded", "apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: elastic-cloud-eck namespace: service-telemetry spec: channel: stable installPlanApproval: Automatic name: elastic-cloud-eck source: operatorhubio-operators sourceNamespace: openshift-marketplace EOF", "oc get csv NAME DISPLAY VERSION REPLACES PHASE elastic-cloud-eck.v1.1.0 Elastic Cloud on Kubernetes 1.1.0 elastic-cloud-eck.v1.0.1 Succeeded", "apply -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: servicetelemetry-operator namespace: service-telemetry spec: channel: stable installPlanApproval: Automatic name: servicetelemetry-operator source: redhat-operators-stf sourceNamespace: openshift-marketplace EOF", "oc get csv --namespace service-telemetry NAME DISPLAY VERSION REPLACES PHASE amq7-cert-manager.v1.0.0 Red Hat Integration - AMQ Certificate Manager 1.0.0 Succeeded amq7-interconnect-operator.v1.2.0 Red Hat Integration - AMQ Interconnect 1.2.0 Succeeded elastic-cloud-eck.v1.1.0 Elastic Cloud on Kubernetes 1.1.0 elastic-cloud-eck.v1.0.1 Succeeded prometheusoperator.0.37.0 Prometheus Operator 0.37.0 prometheusoperator.0.32.0 Succeeded service-telemetry-operator.v1.0.2 Service Telemetry Operator 1.0.2 service-telemetry-operator.v1.0.1 Succeeded smart-gateway-operator.v1.0.1 Smart Gateway Operator 1.0.1 smart-gateway-operator.v1.0.0 Succeeded", "apply -f - <<EOF apiVersion: infra.watch/v1alpha1 kind: ServiceTelemetry metadata: name: stf-default namespace: service-telemetry spec: eventsEnabled: true metricsEnabled: true EOF", "logs USD(oc get pod --selector='name=service-telemetry-operator' -oname) 
-c ansible", "PLAY RECAP *** localhost : ok=37 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0", "oc get pods NAME READY STATUS RESTARTS AGE alertmanager-stf-default-0 2/2 Running 0 26m elastic-operator-645dc8b8ff-jwnzt 1/1 Running 0 88m elasticsearch-es-default-0 1/1 Running 0 26m interconnect-operator-6fd49d9fb9-4bl92 1/1 Running 0 46m prometheus-operator-bf7d97fb9-kwnlx 1/1 Running 0 46m prometheus-stf-default-0 3/3 Running 0 26m service-telemetry-operator-54f4c99d9b-k7ll6 2/2 Running 0 46m smart-gateway-operator-7ff58bcf94-66rvx 2/2 Running 0 46m stf-default-ceilometer-notification-smartgateway-6675df547q4lbj 1/1 Running 0 26m stf-default-collectd-notification-smartgateway-698c87fbb7-xj528 1/1 Running 0 26m stf-default-collectd-telemetry-smartgateway-79c967c8f7-9hsqn 1/1 Running 0 26m stf-default-interconnect-7458fd4d69-nqbfs 1/1 Running 0 26m", "delete project service-telemetry", "oc get all No resources found.", "oc delete --namespace=openshift-marketplace operatorsource redhat-operators-stf operatorsource.operators.coreos.com \"redhat-operators-stf\" deleted", "oc get packagemanifests | grep \"Red Hat STF\"", "oc delete --namespace=openshift-marketplace catalogsource operatorhubio-operators catalogsource.operators.coreos.com \"operatorhubio-operators\" deleted" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/service_telemetry_framework_1.0/installing-the-core-components-of-stf_introduction-to-stf
Chapter 5. Jakarta Enterprise Beans subsystem tuning
Chapter 5. Jakarta Enterprise Beans subsystem tuning JBoss EAP can cache Jakarta Enterprise Beans to save initialization time. This is accomplished using bean pools. There are two different bean pools that can be tuned in JBoss EAP: bean instance pools and bean thread pools. Appropriate bean pool sizes depend on your environment and applications. It is recommended that you experiment with different bean pool sizes and perform stress testing in a development environment that emulates your expected real-world conditions. 5.1. Bean instance pools Bean instance pools are used for Stateless Session Beans (SLSBs) and Message Driven Beans (MDBs). By default, SLSBs use the instance pool default-slsb-instance-pool , and MDBs use the instance pool default-mdb-instance-pool . The size of a bean instance pool limits the number of instances of a particular enterprise bean that can be created at one time. If the pool for a particular enterprise bean is full, the client will block and wait for an instance to become available. If a client does not get an instance within the time set in the pool's timeout attributes, an exception is thrown. The size of a bean instance pool is configured using either derive-size or max-pool-size . The derive-size attribute allows you to configure the pool size using one of the following values: from-worker-pools , which indicates that the maximum pool size is derived from the size of the total threads for all worker pools configured on the system. from-cpu-count , which indicates that the maximum pool size is derived from the total number of processors available on the system. Note that this is not necessarily a 1:1 mapping, and might be augmented by other factors. If derive-size is undefined, then the value of max-pool-size is used for the size of the bean instance pool. Note The derive-size attribute overrides any value specified in max-pool-size . derive-size must be undefined for the max-pool-size value to take effect. You can configure an enterprise bean to use a specific instance pool. This allows for finer control of the instances available to each enterprise bean type. 5.1.1. Creating a bean instance pool This section shows you how to create a new bean instance pool using the management CLI. You can also configure bean instance pools using the management console by navigating to the Jakarta Enterprise Beans subsystem from the Configuration tab, and then selecting the Bean Pool tab. To create a new instance pool, use one of the following commands: To create a bean instance pool with a derived maximum pool size: The following example creates a bean instance pool named my_derived_pool with a maximum size derived from the CPU count, and a timeout of 2 minutes: To create a bean instance pool with an explicit maximum pool size: The following example creates a bean instance pool named my_pool with a maximum of 30 instances and a timeout of 30 seconds: 5.1.2. Specifying the instance pool a bean should use You can set a specific instance pool that a particular bean will use either by using the @org.jboss.ejb3.annotation.Pool annotation, or by modifying the jboss-ejb3.xml deployment descriptor of the bean. 5.1.3. Disabling the default bean instance pool The default bean instance pool can be disabled, which results in an enterprise bean not using any instance pool by default. Instead, a new enterprise bean instance is created when a thread needs to invoke a method on an enterprise bean. 
This might be useful if you do not want any limit on the number of enterprise bean instances that are created. To disable the default bean instance pool, use the following management CLI command: Note If a bean is configured to use a particular bean instance pool, disabling the default instance pool does not affect the pool that the bean uses. 5.2. Bean thread pools By default, a bean thread pool named default is used for asynchronous enterprise bean calls and enterprise bean timers. Note From JBoss EAP 7 onward, remote enterprise bean requests are handled in the worker defined in the io subsystem by default. If required, you can configure each of these enterprise bean services to use a different bean thread pool. This can be useful if you want finer control of each service's access to a bean thread pool. When determining an appropriate thread pool size, consider how many concurrent requests you expect will be processed at once. 5.2.1. Creating a bean thread pool This section shows you how to create a new bean thread pool using the management CLI. You can also configure bean thread pools using the management console by navigating to the Jakarta Enterprise Beans subsystem from the Configuration tab and selecting Container Thread Pool in the left menu. To create a new thread pool, use the following command: The following example creates a bean thread pool named my_thread_pool with a maximum of 30 threads: 5.2.2. Configuring enterprise bean services to use a specific bean thread pool The enterprise bean asynchronous invocation service and timer service can each be configured to use a specific bean thread pool. By default, both these services use the default bean thread pool. This section shows you how to configure these enterprise bean services to use a specific bean thread pool using the management CLI. You can also configure these services using the management console by navigating to the Jakarta Enterprise Beans subsystem from the Configuration tab, selecting the Services tab, and choosing the appropriate service. To configure an enterprise bean service to use a specific bean thread pool, use the following command: Replace SERVICE_NAME with the enterprise bean service you want to configure: async for the enterprise bean asynchronous invocation service timer-service for the enterprise bean timer service The following example sets the enterprise bean async service to use the bean thread pool named my_thread_pool : 5.3. Runtime deployment information for beans You can add runtime deployment information to your beans for performance monitoring. For details about the available runtime data, see the ejb3 subsystem in the JBoss EAP management model. An application can include the runtime data as annotations in the bean code or in the deployment descriptor. An application can use both options. Additional resources For more information about available runtime data, see the ejb3 subsystem in the JBoss EAP management model . 5.3.1. Command line options for retrieving runtime data from Jakarta enterprise beans Runtime data from Jakarta Enterprise Beans is available from the management CLI so you can evaluate the performance of your Jakarta Enterprise Beans. The command to retrieve runtime data for all types of beans uses the following pattern: Replace <deployment_name> with the name of the deployment .jar file for which to retrieve runtime data. Replace <bean_type> with the type of the bean for which to retrieve runtime data. The following options are valid for this placeholder: stateless-session-bean stateful-session-bean singleton-bean message-driven-bean Replace <bean_name> with the name of the bean for which you want to retrieve runtime data.
The system delivers the result to stdout formatted as JavaScript Object Notation (JSON) data. Example command to retrieve runtime data for a singleton bean named ManagedSingletonBean deployed in a file named ejb-management.jar Example output runtime data for the singleton bean Example command to retrieve runtime data for a message-driven bean named NoTimerMDB deployed in a file named ejb-management.jar Example output for the message-driven bean 5.4. Exceptions that indicate an enterprise bean subsystem tuning might be required The Stateless Jakarta Enterprise Beans instance pool is not large enough or the timeout is too low The enterprise bean thread pool is not large enough, or an enterprise bean is taking longer to process than the invocation timeout 5.4.1. Default global timeout values for stateful session beans In the ejb3 subsystem, you can configure a default global timeout value for all stateful session beans (SFSBs) that are deployed on your server instance by using the default-stateful-bean-session-timeout attribute. With the default-stateful-bean-session-timeout attribute, you can use the following management CLI operations on the ejb3 subsystem: The read-attribute operation in the management CLI to view the current global timeout value for the attribute. The write-attribute operation to configure the attribute by using the management CLI. Attribute behavior varies according to the server mode. For example: When running in the standalone server, the configured value gets applied to all SFSBs deployed on the application server. When running a server in a managed domain, all SFSBs that are deployed on server instances within server groups receive concurrent timeout values. Note When you change the global timeout value for the attribute, the updated settings only apply to new deployments. You must reload the server to apply the new settings to current deployments. By default, the attribute value is set at -1 , which means that deployed SFSBs are configured to never time out. However, you can configure two of the following types of valid values for the attribute: When you set the attribute value to 0 , the attribute immediately marks eligible SFSBs for removal by the ejb container. When you set the attribute value greater than 0 , the SFSBs remain idle for the specified time in milliseconds before the ejb container removes the eligible SFSBs. Note You can still use the pre-existing @StatefulTimeout annotation or the stateful-timeout element, which is located in the ejb-jar.xml deployment descriptor, to configure the timeout value for an SFSB. However, setting such a configuration overrides the default global timeout value to the SFSB. Two methods exist for verifying a new value you set for the attribute: Use the read-attribute operation in the management CLI. Examine the ejb3 subsystem section of the server's configuration file. Additional resources For more information about viewing the current global timeout value for an attribute, see Display an Attribute Value in the Management CLI guide . For more information about updating the current global timeout value for an attribute, see Update an Attribute in the Management CLI guide .
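The pool selection described in Section 5.1.2 can be illustrated with a short sketch. The following example is not taken from the product documentation: the bean class, its contents, and the pool name my_pool are assumptions, and you should verify the annotation package against the JBoss EAP version you are running before relying on it.

package com.example.tuning;

import jakarta.ejb.Stateless;
import org.jboss.ejb3.annotation.Pool;

// Hypothetical stateless session bean bound to a custom instance pool.
// The value "my_pool" must match the name of a strict-max-bean-instance-pool
// that has been created in the ejb3 subsystem, as shown in Section 5.1.1.
@Stateless
@Pool("my_pool")
public class AccountService {

    public String lookupAccount(final String accountId) {
        // Business logic is omitted; only the pool binding matters here.
        return "account:" + accountId;
    }
}

Alternatively, the same binding can be declared in the jboss-ejb3.xml deployment descriptor instead of in code; see the product documentation for the descriptor schema.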
[ "/subsystem=ejb3/strict-max-bean-instance-pool= POOL_NAME :add(derive-size= DERIVE_OPTION ,timeout-unit= TIMEOUT_UNIT ,timeout= TIMEOUT_VALUE )", "/subsystem=ejb3/strict-max-bean-instance-pool=my_derived_pool:add(derive-size=from-cpu-count,timeout-unit=MINUTES,timeout=2)", "/subsystem=ejb3/strict-max-bean-instance-pool= POOL_NAME :add(max-pool-size= POOL_SIZE ,timeout-unit= TIMEOUT_UNIT ,timeout= TIMEOUT_VALUE )", "/subsystem=ejb3/strict-max-bean-instance-pool=my_pool:add(max-pool-size=30,timeout-unit=SECONDS,timeout=30)", "/subsystem=ejb3:undefine-attribute(name=default-slsb-instance-pool)", "If a bean is configured to use a particular bean instance pool, disabling the default instance pool does not affect the pool that the bean uses.", "/subsystem=ejb3/thread-pool= POOL_NAME :add(max-threads= MAX_THREADS )", "/subsystem=ejb3/thread-pool=my_thread_pool:add(max-threads=30)", "/subsystem=ejb3/service= SERVICE_NAME :write-attribute(name=thread-pool-name,value= THREAD_POOL_NAME )", "/subsystem=ejb3/service=async:write-attribute(name=thread-pool-name,value=my_thread_pool)", "/deployment=<deployment_name>/subsystem=ejb3/<bean_type>=<bean_name>:read-resource(include-runtime)", "/deployment=ejb-management.jar/subsystem=ejb3/singleton-bean=ManagedSingletonBean:read-resource(include-runtime)", "{ \"outcome\" => \"success\", \"result\" => { \"async-methods\" => [\"void async(int, int)\"], \"business-local\" => [\"sample.ManagedSingletonBean\"], \"business-remote\" => [\"sample.BusinessInterface\"], \"component-class-name\" => \"sample.ManagedSingletonBean\", \"concurrency-management-type\" => undefined, \"declared-roles\" => [ \"Role3\", \"Role2\", \"Role1\" ], \"depends-on\" => undefined, \"execution-time\" => 156L, \"init-on-startup\" => false, \"invocations\" => 3L, \"jndi-names\" => [ \"java:module/ManagedSingletonBean!sample.ManagedSingletonBean\", \"java:global/ejb-management/ManagedSingletonBean!sample.ManagedSingletonBean\", \"java:app/ejb-management/ManagedSingletonBean!sample.ManagedSingletonBean\", \"java:app/ejb-management/ManagedSingletonBean!sample.BusinessInterface\", \"java:global/ejb-management/ManagedSingletonBean!sample.BusinessInterface\", \"java:module/ManagedSingletonBean!sample.BusinessInterface\" ], \"methods\" => {\"doIt\" => { \"execution-time\" => 156L, \"invocations\" => 3L, \"wait-time\" => 0L }}, \"peak-concurrent-invocations\" => 1L, \"run-as-role\" => \"Role3\", \"security-domain\" => \"other\", \"timeout-method\" => \"public void sample.ManagedSingletonBean.timeout(javax.ejb.Timer)\", \"timers\" => [{ \"time-remaining\" => 4304279L, \"next-timeout\" => 1577768415000L, \"calendar-timer\" => true, \"persistent\" => false, \"info\" => \"timer1\", \"schedule\" => { \"year\" => \"*\", \"month\" => \"*\", \"day-of-month\" => \"*\", \"day-of-week\" => \"*\", \"hour\" => \"0\", \"minute\" => \"0\", \"second\" => \"15\", \"timezone\" => undefined, \"start\" => undefined, \"end\" => undefined } }], \"transaction-type\" => \"CONTAINER\", \"wait-time\" => 0L, \"service\" => {\"timer-service\" => undefined} } }", "/deployment=ejb-management.jar/subsystem=ejb3/message-driven-bean=NoTimerMDB:read-resource(include-runtime)", "{ \"outcome\" => \"success\", \"result\" => { \"activation-config\" => [ (\"destination\" => \"java:/queue/NoTimerMDB-queue\"), (\"destinationType\" => \"javax.jms.Queue\"), (\"acknowledgeMode\" => \"Auto-acknowledge\") ], \"component-class-name\" => \"sample.NoTimerMDB\", \"declared-roles\" => [ \"Role3\", \"Role2\", \"Role1\" ], \"delivery-active\" => true, 
\"execution-time\" => 0L, \"invocations\" => 0L, \"message-destination-link\" => \"queue/NoTimerMDB-queue\", \"message-destination-type\" => \"javax.jms.Queue\", \"messaging-type\" => \"javax.jms.MessageListener\", \"methods\" => {}, \"peak-concurrent-invocations\" => 0L, \"pool-available-count\" => 16, \"pool-create-count\" => 0, \"pool-current-size\" => 0, \"pool-max-size\" => 16, \"pool-name\" => \"mdb-strict-max-pool\", \"pool-remove-count\" => 0, \"run-as-role\" => \"Role3\", \"security-domain\" => \"other\", \"timeout-method\" => undefined, \"timers\" => [], \"transaction-type\" => \"CONTAINER\", \"wait-time\" => 0L, \"service\" => undefined } }", "javax.ejb.EJBException: JBAS014516: Failed to acquire a permit within 20 SECONDS at org.jboss.as.ejb3.pool.strictmax.StrictMaxPool.get(StrictMaxPool.java:109)", "java.util.concurrent.TimeoutException: No invocation response received in 300000 milliseconds" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/performance_tuning_for_red_hat_jboss_enterprise_application_platform/assembly-jeb-subsystem-tuning_performance-tuning-guide
Chapter 2. Service Telemetry Framework release information
Chapter 2. Service Telemetry Framework release information Notes for updates released during the supported lifecycle of this Service Telemetry Framework (STF) release appear in the advisory text associated with each update. 2.1. Service Telemetry Framework 1.5.0 These release notes highlight enhancements and removed functionality to be taken into consideration when you install this release of Service Telemetry Framework (STF). This release includes the following advisories: RHEA-2022:8735-01 Release of components for Service Telemetry Framework 1.5.0 - Container Images 2.1.1. Release notes This section outlines important details about the release, including recommended practices and notable changes to STF. You must take this information into account to ensure the best possible outcomes for your installation. BZ# 2121457 STF 1.5.0 supports OpenShift Container Platform 4.10. Previous releases of STF were limited to OpenShift Container Platform 4.8, which is nearing the end of extended support. OpenShift Container Platform 4.10 is an Extended Update Support (EUS) release with full support until November 2022, and maintenance support until September 2023. For more information, see Red Hat OpenShift Container Platform Life Cycle Policy . 2.1.2. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. BZ# 2153825 The sg-core application plugin elasticsearch is deprecated in STF 1.5. BZ# 2152901 The use of prometheus-webhook-snmp is deprecated in STF 1.5. 2.1.3. Removed Functionality BZ# 2150029 The section in the STF documentation describing how to use STF and Gnocchi together has been removed. The use of Gnocchi is limited to use for autoscaling. 2.2. Service Telemetry Framework 1.5.1 These release notes highlight enhancements and removed functionality to be taken into consideration when you install this release of Service Telemetry Framework (STF). This release includes the following advisory: RHSA-2023:1529-04 Release of components for Service Telemetry Framework 1.5.1 - Container Images 2.2.1. Release notes This section outlines important details about the release, including recommended practices and notable changes to STF. You must take this information into account to ensure the best possible outcomes for your installation. BZ# 2176537 STF 1.5.1 supports OpenShift Container Platform 4.10 and 4.12. Previous releases of STF were limited to OpenShift Container Platform 4.8, which is nearing the end of extended support. OpenShift Container Platform 4.12 is an Extended Update Support (EUS) release currently in full support, with maintenance support until July 2024. For more information, see Red Hat OpenShift Container Platform Life Cycle Policy . BZ# 2173856 There is an issue where the events datasource in Grafana is unavailable when events storage is disabled. The default setting of events storage is disabled. The virtual machine dashboard presents warnings about a missing datasource because the datasource uses annotations and is unavailable by default. Workaround: You can use the available switch on the virtual machine dashboard to disable the annotations and match the default deployment options in STF. 2.2.2. Enhancements This release of STF features the following enhancements: BZ# 2092544 You now have more control over certificate renewal, with additional certificate expiration configuration for the CA and endpoint certificates for QDR and Elasticsearch. 
STF-559 You can now use the additional SNMP trap delivery controls in STF to configure the trap delivery target, port, community, default trap OID, default trap severity, and trap OID prefix. BZ# 2159464 This feature has been rebuilt on golang 1.18, to remain on a supported golang version, which benefits future maintenance activities. 2.3. Service Telemetry Framework 1.5.2 These release notes highlight enhancements and removed functionality to be taken into consideration when you install this release of Service Telemetry Framework (STF). This release includes the following advisory: RHEA-2023:3785 Release for Service Telemetry Framework 1.5.2 2.3.1. Bug fixes These bugs were fixed in this release of STF: BZ# 2211897 Previously, you installed Prometheus Operator from OperatorHub.io Operators CatalogSource, which interfered with in-cluster monitoring in Red Hat OpenShift Container Platform. To remedy this, you now use Prometheus Operator from the Community Operators CatalogSource during STF installation. For more information on how to migrate from OperatorHub.io Operators CatalogSource to Community Operators CatalogSource, see the Knowledge Base Article Migrating Service Telemetry Framework to Prometheus Operator from community-operators 2.3.2. Enhancements This release of STF features the following enhancements: BZ# 2138179 You can now deploy Red Hat OpenStack Platform (RHOSP) with director Operator for monitoring RHOSP 16.2 with STF. 2.3.3. Removed functionality The following functionality has been removed from this release of STF: BZ# 2189670 Documentation about ephemeral storage is removed. Ensure that you use persistent storage in production deployments. 2.4. Documentation Changes This section details the major documentation updates delivered with Service Telemetry Framework (STF) 1.5, and the changes made to the documentation set that include adding new features, enhancements, and corrections. The section also details the addition of new titles and the removal of retired or replaced titles. Table 2.1. Document changes Date Versions impacted Affected content Description of change 01 Dec 2022 1.5 Removed section from STF documentation about using Gnocchi with STF. You can only use Gnocchi for autoscaling. 30 Mar 2023 1.5.1 Removed section from STF documentation titled, "Deploying to non-standard network topologies". The recommendations were unnecessary and potentially inaccurate. 30 Mar 2023 1.5.1 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html-single/service_telemetry_framework_1.5/index#configuration-parameters-for-snmptraps_assembly-advanced-features The additional configuration parameters available in STF 1.5.1 have been added to the "Sending Alerts as SNMP traps" section. There is more information and examples for configuring a ServiceTelemetry object for SNMP trap delivery from Prometheus Alerts. 30 Mar 2023 1.5.1 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html-single/service_telemetry_framework_1.5/index#proc-updating-the-amq-interconnect-ca-certificate_assembly-renewing-the-amq-interconnect-certificate The tripleo-ansible-inventory.yaml path has been updated to match the correct path on RHOSP 13 and 16.2 deployments. 
22 Jun 2023 1.5.2 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.0/html-single/service_telemetry_framework_1.5/index#configuring-the-stf-connection-for-the-overcloud_assembly-completing-the-stf-configuration More information about AMQ Interconnect topic parameters and topic addresses for cloud configurations. 22 Jun 2023 1.5.2 https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html/service_telemetry_framework_1.5/index Section added about Red Hat OpenStack Platform (RHOSP) with director Operator for monitoring RHOSP 16.2 with STF.
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_release_notes_1.5/assembly-stf-release-information_osp
Chapter 1. Introduction
Chapter 1. Introduction Red Hat OpenStack Platform director creates a cloud environment called the Overcloud . As a default, the Overcloud uses Internet Protocol version 4 (IPv4) to configure the service endpoints. However, the Overcloud also supports Internet Protocol version 6 (IPv6) endpoints, which is useful for organizations that support IPv6 infrastructure. This guide provides information and a configuration example for using IPv6 in your Overcloud. 1.1. Defining IPv6 Networking IPv6 is the latest version of the Internet Protocol standard. The Internet Engineering Task Force (IETF) developed IPv6 as a means to combat the exhaustion of IP addresses under the current common IPv4 standard. IPv6 has various differences from IPv4 including: Large IP Address Range The IPv6 range is much larger than the IPv4 range. Better End-to-End Connectivity The larger IP range provides better end-to-end connectivity due to less reliance on network address translation. No Broadcasting IPv6 does not support traditional IP broadcasting. Instead, IPv6 uses multicasting to send packets to applicable hosts in a hierarchical manner. Stateless Address Autoconfiguration (SLAAC) IPv6 provides features for automatically configuring IP addresses and detecting duplicate addresses on a network. This reduces the reliance on a DHCP server to assign addresses. IPv6 uses 128 bits to define addresses (represented as eight groups of 16 bits, each written as four hexadecimal digits), while IPv4 uses only 32 bits (represented with decimal digits using groups of 8 bits). For example, a representation of an IPv4 address (192.168.0.1) looks like this: Bits Representation 11000000 192 10101000 168 00000000 0 00000001 1 For an IPv6 address (2001:db8:88ec:9fb3::1), the representation looks like this: Bits Representation 0010 0000 0000 0001 2001 0000 1101 1011 1000 0db8 1000 1000 1110 1100 88ec 1001 1111 1011 0011 9fb3 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0001 0001 Notice you can also represent IPv6 addresses without leading zeros in each bit group and omit a set of zero bit groups once per IP address. In our example, you can represent the 0db8 bit grouping as just db8 and omit the three sets of 0000 bit groups, which shortens the representation from 2001:0db8:88ec:9fb3:0000:0000:0000:0001 to 2001:db8:88ec:9fb3::1. For more information, see "RFC 5952: A Recommendation for IPv6 Address Text Representation" Subnetting in IPv6 Similar to IPv4, an IPv6 address uses a bit mask to define the address prefix as its network. For example, if you apply a /64 bit mask to our sample IP address (e.g. 2001:db8:88ec:9fb3::1/64) the bit mask acts as a prefix that defines the first 64 bits (2001:db8:88ec:9fb3) as the network. The remaining bits (0000:0000:0000:0001) define the host. IPv6 also uses some special address types, including: Loopback The loopback device uses an IPv6 address for internal communication within the host. This device is always ::1/128. Link Local A link local address is an IP address valid within a particular network segment. IPv6 requires each network device to have a link local address and use the prefix fe80::/10. However, most of the time, these addresses are prefixed with fe80::/64. Unique local A unique local address is intended for local communication. These addresses use a fc00::/7 prefix. Multicast Hosts use multicast addresses to join multicast groups. These addresses use a ff00::/8 prefix. 
For example, FF02::1 is a multicast group for all nodes on the network and FF02::2 is a multicast group for all routers. Global Unicast These addresses are usually reserved for public IP address. These addresses use a 2000::/3 prefix. 1.2. Using IPv6 in Red Hat OpenStack Platform Red Hat OpenStack Platform director provides a method for mapping OpenStack services to isolated networks. These networks include: Internal API Storage Storage Management Tenant Networks (Neutron VLAN mode) External For more information about these network traffic types, see the Director Installation and Usage guide. Red Hat OpenStack Platform director also provides methods to use IPv6 communication for these networks. This means the required OpenStack services, databases, and other related services use IPv6 addresses to communicate. This also applies to environments using a high availability solution involving multiple Controller nodes. This helps organizations integrate Red Hat OpenStack Platform with their IPv6 infrastructure. Use the following table as a guide for what networks support IPv6 in Red Hat OpenStack Platform: Network Type Dual Stack (IPv4/v6) Single Stack (IPv6) Single Stack (IPv4) Notes Internal API Yes Yes Storage Yes Yes Storage Management Yes Yes Tenant Networks Yes Yes Yes Tenant Network Endpoints Yes Yes Yes This refers to the IP address of the network hosting the tenant network tunnels, not the tenant networks themselves. IPv6 for network endpoints supports only VXLAN and Geneve. Generic routing encapsulation (GRE) is not yet supported. External - Public API (and Horizon) Yes Yes External - Floating IPs Yes Yes Yes Dual stack and single stack (IPv6) only : neutron tenant networks that are assigned Global Unicast Address (GUA) prefixes and addresses do not require NAT on the external gateway port for the neutron router to access the outside world. Provider Networks Yes Yes Yes IPv6 support is dependent on the tenant operating system. Provisioning (PXE/DHCP) Yes Interfaces on this network are IPv4 only. IPMI or other BMC Yes RHOSP communicates with baseboard management controller (BMC) interfaces over the Provisioning network, which is IPv4. If BMC interfaces support dual stack IPv4 or IPv6, tools that are not part of RHOSP can use IPv6 to communicate with the BMCs. Overcloud Provisioning network The Provisioning network used for ironic in the overcloud. Overcloud Cleaning network The isolated network used to clean a machine before it is ready for reuse. 1.3. Setting Requirements This guide acts as supplementary information for the Director Installation and Usage guide. This means the same requirements specified in Director Installation and Usage also apply to this guide. Implement these requirements as necessary. This guide also requires the following: An Undercloud host with the Red Hat OpenStack Platform director installed. See the Director Installation and Usage guide. Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Both will be used in the deployment. 1.4. Defining the Scenario The scenario for this guide is to create an Overcloud with an isolated network that uses IPv6. The guide aims to achieve this objective through network isolation configured using Heat templates and environment files. This scenario also provides certain variants to these Heat templates and environment files to demonstrate specific differences in configuration. Note In this scenario, the Undercloud still uses IPv4 connectivity for PXE boot, introspection, deployment, and other services. 
This guide uses a scenario similar to the Advanced Overcloud scenario in the Director Installation and Usage guide. The main difference is the omission of the Ceph Storage nodes. For more information about this scenario, see the Director Installation and Usage guide. Important This guide uses the 2001:DB8::/32 IPv6 prefix for documentation purposes as defined in RFC 3849 . Make sure to replace these example addresses with IPv6 addresses from your own network.
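The prefix and host split described in "Subnetting in IPv6" can also be checked programmatically. The following standalone sketch is illustrative only and is not part of the director tooling; it uses the RFC 3849 documentation prefix from this chapter and only standard Java APIs.

import java.net.InetAddress;
import java.util.Arrays;

// Splits an IPv6 address into its /64 network prefix and host portion,
// mirroring the 2001:db8:88ec:9fb3::1/64 example from this chapter.
public class Ipv6PrefixDemo {

    public static void main(String[] args) throws Exception {
        byte[] address = InetAddress.getByName("2001:db8:88ec:9fb3::1").getAddress();
        int prefixLength = 64;

        byte[] network = address.clone();
        // Zero out every bit beyond the prefix to obtain the network address.
        for (int bit = prefixLength; bit < 128; bit++) {
            network[bit / 8] &= ~(1 << (7 - (bit % 8)));
        }

        System.out.println("Network: " + InetAddress.getByAddress(network).getHostAddress() + "/" + prefixLength);
        System.out.println("Host bytes: " + Arrays.toString(Arrays.copyOfRange(address, prefixLength / 8, 16)));
    }
}

Running the sketch prints the network portion (2001:db8:88ec:9fb3:0:0:0:0, Java's uncompressed form) followed by the final eight bytes that identify the host, matching the manual breakdown above.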
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/ipv6_networking_for_the_overcloud/introduction
Chapter 102. MyBatis
Chapter 102. MyBatis Since Camel 2.7 Both producer and consumer are supported The MyBatis component allows you to query, poll, insert, update and delete data in a relational database using MyBatis . 102.1. Dependencies When using mybatis with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mybatis-starter</artifactId> </dependency> 102.2. URI format Where statementName is the statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you choose to evaluate. You can append query options to the URI in the following format, ?option=value&option=value&... This component will by default load the MyBatis SqlMapConfig file from the root of the classpath with the expected name of SqlMapConfig.xml . If the file is in another location, you need to configure the configurationUri option on the MyBatisComponent component. 102.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 102.3.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connection, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 102.3.2. Configuring Endpoint Options Endpoints have many options, which allow you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. Use Property Placeholders to configure options so that you do not hardcode URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 102.4. Component Options The MyBatis component supports 5 options, which are listed below. Name Description Default Type configurationUri (common) Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean sqlSessionFactory (advanced) To use the SqlSessionFactory. SqlSessionFactory 102.5. Endpoint Options The MyBatis endpoint is configured using URI syntax: Following are the path and query parameters. 102.5.1. Path Parameters (1 parameters) Name Description Default Type statement (common) Required The statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you wish to evaluate. String 102.5.2. Query Parameters (30 parameters) Name Description Default Type maxMessagesPerPoll (consumer) This option is intended to split results returned by the database pool into the batches and deliver them in multiple exchanges. This integer defines the maximum messages to deliver in single exchange. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disable it. 0 int onConsume (consumer) Statement to run after data has been processed in the route. String routeEmptyResultSet (consumer) Whether allow empty resultset to be routed to the hop. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean transacted (consumer) Enables or disables transaction. If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager. false boolean useIterator (consumer) Process resultset individually or as a list. true boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. 
Enum values: * InOnly * InOut * InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy processingStrategy (consumer (advanced)) To use a custom MyBatisProcessingStrategy. MyBatisProcessingStrategy executorType (producer) The executor type to be used while executing statements. simple - executor does nothing special. reuse - executor reuses prepared statements. batch - executor reuses statements and batches updates. Enum values: * SIMPLE * REUSE * BATCH SIMPLE ExecutorType inputHeader (producer) User the header value for input parameters instead of the message body. By default, inputHeader == null and the input parameters are taken from the message body. If outputHeader is set, the value is used and query parameters will be taken from the header instead of the body. String outputHeader (producer) Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. Setting outputHeader will also omit populating the default CamelMyBatisResult header since it would be the same as outputHeader all the time. String statementType (producer) Mandatory to specify for the producer to control which kind of operation to invoke. Enum values: * SelectOne * SelectList * Insert * InsertList * Update * UpdateList * Delete * DeleteList StatementType lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. 
A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: * TRACE * DEBUG * INFO * WARN * ERROR * OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 102.6. Message Headers The MyBatis component supports 2 message headers that are listed below. Name Description Default Type CamelMyBatisResult (producer) Constant: MYBATIS_RESULT The response returned from MyBatis in any of the operations. For instance an INSERT could return the auto-generated key, or number of rows etc. Object CamelMyBatisStatementName (common) Constant: MYBATIS_STATEMENT_NAME The statementName used (for example: insertAccount). String 102.7. Message Body The response from MyBatis will only be set as the body if it is a SELECT statement. For example, for INSERT statements Camel will not replace the body. This allows you to continue routing and keep the original body. The response from MyBatis is always stored in the header with the key CamelMyBatisResult . 102.8. Samples For example, if you wish to consume beans from a JMS queue and insert them into a database, you could do the following: from("activemq:queue:newAccount") .to("mybatis:insertAccount?statementType=Insert"); You must specify the statementType as you need to instruct Camel which kind of operation to invoke. Where insertAccount is the MyBatis ID in the SQL mapping file: <!-- Insert example, using the Account parameter class --> <insert id="insertAccount" parameterType="Account"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert> 102.9. Using StatementType for better control of MyBatis When routing to a MyBatis endpoint you will want finer-grained control so you can specify whether the SQL statement to be executed is a SELECT , UPDATE , DELETE or INSERT etc. So for instance, if we want to route to a MyBatis endpoint in which the IN body contains parameters to a SELECT statement, we can do: In the code above we can invoke the MyBatis statement selectAccountById and the IN body should contain the account id we want to retrieve, such as an Integer type. You can do the same for some of the other operations, such as SelectList : And the same for UPDATE , where you can send an Account object as the IN body to MyBatis: 102.9.1. Using InsertList StatementType MyBatis allows you to insert multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. 
For example as shown below: Then you can insert multiple rows, by sending a Camel message to the mybatis endpoint which uses the InsertList statement type, as shown below: 102.9.2. Using UpdateList StatementType MyBatis allows you to update multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <update id="batchUpdateAccount" parameterType="java.util.Map"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item="Account" collection="list" open="(" close=")" separator=","> #{Account.id} </foreach> </update> Then you can update multiple rows, by sending a Camel message to the mybatis endpoint which uses the UpdateList statement type, as shown below: from("direct:start") .to("mybatis:batchUpdateAccount?statementType=UpdateList") .to("mock:result"); 102.9.3. Using DeleteList StatementType MyBatis allows you to delete multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <delete id="batchDeleteAccountById" parameterType="java.util.List"> delete from ACCOUNT where ACC_ID in <foreach item="AccountID" collection="list" open="(" close=")" separator=","> #{AccountID} </foreach> </delete> Then you can delete multiple rows, by sending a Camel message to the mybatis endpoint which uses the DeleteList statement type, as shown below: from("direct:start") .to("mybatis:batchDeleteAccount?statementType=DeleteList") .to("mock:result"); 102.9.4. Notice on InsertList, UpdateList and DeleteList StatementTypes A parameter of any type (List, Map, and so on) can be passed to MyBatis, and the end user is responsible for handling it as required with the help of MyBatis dynamic query capabilities. 102.9.5. Scheduled polling example This component supports scheduled polling and can therefore be used as a Polling Consumer. For example, to poll the database every minute: from("mybatis:selectAllAccounts?delay=60000") .to("activemq:queue:allAccounts"); See "ScheduledPollConsumer Options" on Polling Consumer for more options. Alternatively you can use another mechanism for triggering the scheduled polls, such as the Timer or Quartz components. In the sample below we poll the database every 30 seconds using the Timer component and send the data to the JMS queue: from("timer://pollTheDatabase?delay=30000") .to("mybatis:selectAllAccounts") .to("activemq:queue:allAccounts"); And the MyBatis SQL mapping file used: <!-- Select with no parameters using the result map for Account class. --> <select id="selectAllAccounts" resultMap="AccountResult"> select * from ACCOUNT </select> 102.9.6. Using onConsume This component supports executing statements after data has been consumed and processed by Camel. This allows you to do post updates in the database. Notice all statements must be UPDATE statements. Camel supports executing multiple statements whose names should be separated by commas. The route below illustrates how we execute the consumeAccount statement after the data is processed. This allows us to change the status of the row in the database to processed, so we avoid consuming it twice or more. And the statements in the sqlmap file: 102.9.7. Participating in transactions Setting up a transaction manager under camel-mybatis can be a little bit fiddly, as it involves externalizing the database configuration outside the standard MyBatis SqlMapConfig.xml file. The first part requires the setup of a DataSource. 
This is typically a pool (either DBCP, or c3p0), which needs to be wrapped in a Spring proxy. This proxy enables non-Spring use of the DataSource to participate in Spring transactions (the MyBatis SqlSessionFactory does just this). <bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy"> <constructor-arg> <bean class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driverClass" value="org.postgresql.Driver"/> <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/myDatabase"/> <property name="user" value="myUser"/> <property name="password" value="myPassword"/> </bean> </constructor-arg> </bean> This has the additional benefit of enabling the database configuration to be externalized using property placeholders. A transaction manager is then configured to manage the outermost DataSource : <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource"/> </bean> A mybatis-spring SqlSessionFactoryBean then wraps that same DataSource : <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean"> <property name="dataSource" ref="dataSource"/> <!-- standard mybatis config file --> <property name="configLocation" value="/META-INF/SqlMapConfig.xml"/> <!-- externalised mappers --> <property name="mapperLocations" value="classpath*:META-INF/mappers/**/*.xml"/> </bean> The camel-mybatis component is then configured with that factory: <bean id="mybatis" class="org.apache.camel.component.mybatis.MyBatisComponent"> <property name="sqlSessionFactory" ref="sqlSessionFactory"/> </bean> Finally, a transaction policy is defined over the top of the transaction manager, which can then be used as usual: <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager"/> <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/> </bean> <camelContext id="my-model-context" xmlns="http://camel.apache.org/schema/spring"> <route id="insertModel"> <from uri="direct:insert"/> <transacted ref="PROPAGATION_REQUIRED"/> <to uri="mybatis:myModel.insert?statementType=Insert"/> </route> </camelContext> 102.10. MyBatis Spring Boot Starter integration Spring Boot users can use mybatis-spring-boot-starter artifact provided by the mybatis team <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.3.0</version> </dependency> in particular AutoConfigured beans from mybatis-spring-boot-starter can be used as follow: #application.properties camel.component.mybatis.sql-session-factory=#sqlSessionFactory 102.11. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.mybatis-bean.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mybatis-bean.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String camel.component.mybatis-bean.enabled Whether to enable auto configuration of the mybatis-bean component. 
This is enabled by default. Boolean camel.component.mybatis-bean.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mybatis-bean.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. SqlSessionFactory camel.component.mybatis.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mybatis.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.mybatis.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String camel.component.mybatis.enabled Whether to enable auto configuration of the mybatis component. This is enabled by default. Boolean camel.component.mybatis.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mybatis.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. SqlSessionFactory
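Section 102.9.6 refers to an onConsume route and its mapper statements without reproducing them in this chapter. The sketch below is a plausible reconstruction rather than the exact upstream example: the statement names selectUnprocessedAccounts and consumeAccount, and the mock endpoint, are assumptions; only the onConsume endpoint option itself is taken from the options table above.

import org.apache.camel.builder.RouteBuilder;

// Hypothetical route illustrating onConsume: after each consumed row has been
// routed, the consumeAccount UPDATE statement runs so that the row is marked
// as processed and is not picked up again by the next poll.
public class OnConsumeRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("mybatis:selectUnprocessedAccounts?onConsume=consumeAccount")
            .to("mock:results");
    }
}

In this sketch, consumeAccount would be an UPDATE statement in the mapper XML file that sets a processed flag for the account id of each exchanged row.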
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mybatis-starter</artifactId> </dependency>", "mybatis:statementName[?options]", "mybatis:statement", "from(\"activemq:queue:newAccount\") .to(\"mybatis:insertAccount?statementType=Insert\");", "<!-- Insert example, using the Account parameter class --> <insert id=\"insertAccount\" parameterType=\"Account\"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert>", "<update id=\"batchUpdateAccount\" parameterType=\"java.util.Map\"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item=\"Account\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{Account.id} </foreach> </update>", "from(\"direct:start\") .to(\"mybatis:batchUpdateAccount?statementType=UpdateList\") .to(\"mock:result\");", "<delete id=\"batchDeleteAccountById\" parameterType=\"java.util.List\"> delete from ACCOUNT where ACC_ID in <foreach item=\"AccountID\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{AccountID} </foreach> </delete>", "from(\"direct:start\") .to(\"mybatis:batchDeleteAccount?statementType=DeleteList\") .to(\"mock:result\");", "from(\"mybatis:selectAllAccounts?delay=60000\") .to(\"activemq:queue:allAccounts\");", "from(\"timer://pollTheDatabase?delay=30000\") .to(\"mybatis:selectAllAccounts\") .to(\"activemq:queue:allAccounts\");", "<!-- Select with no parameters using the result map for Account class. --> <select id=\"selectAllAccounts\" resultMap=\"AccountResult\"> select * from ACCOUNT </select>", "<bean id=\"dataSource\" class=\"org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy\"> <constructor-arg> <bean class=\"com.mchange.v2.c3p0.ComboPooledDataSource\"> <property name=\"driverClass\" value=\"org.postgresql.Driver\"/> <property name=\"jdbcUrl\" value=\"jdbc:postgresql://localhost:5432/myDatabase\"/> <property name=\"user\" value=\"myUser\"/> <property name=\"password\" value=\"myPassword\"/> </bean> </constructor-arg> </bean>", "<bean id=\"txManager\" class=\"org.springframework.jdbc.datasource.DataSourceTransactionManager\"> <property name=\"dataSource\" ref=\"dataSource\"/> </bean>", "<bean id=\"sqlSessionFactory\" class=\"org.mybatis.spring.SqlSessionFactoryBean\"> <property name=\"dataSource\" ref=\"dataSource\"/> <!-- standard mybatis config file --> <property name=\"configLocation\" value=\"/META-INF/SqlMapConfig.xml\"/> <!-- externalised mappers --> <property name=\"mapperLocations\" value=\"classpath*:META-INF/mappers/**/*.xml\"/> </bean>", "<bean id=\"mybatis\" class=\"org.apache.camel.component.mybatis.MyBatisComponent\"> <property name=\"sqlSessionFactory\" ref=\"sqlSessionFactory\"/> </bean>", "<bean id=\"PROPAGATION_REQUIRED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\"/> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_REQUIRED\"/> </bean> <camelContext id=\"my-model-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"insertModel\"> <from uri=\"direct:insert\"/> <transacted ref=\"PROPAGATION_REQUIRED\"/> <to uri=\"mybatis:myModel.insert?statementType=Insert\"/> </route> </camelContext>", "<dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.3.0</version> </dependency>", "#application.properties camel.component.mybatis.sql-session-factory=#sqlSessionFactory" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mybatis-component
Configuring high availability for instances
Configuring high availability for instances Red Hat OpenStack Platform 17.1 Configure high availability for Compute instances OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_high_availability_for_instances/index
30.7. VDO Commands
30.7. VDO Commands This section describes the following VDO utilities: vdo The vdo utility manages both the kvdo and UDS components of VDO. It is also used to enable or disable compression. vdostats The vdostats utility displays statistics for each configured (or specified) device in a format similar to the Linux df utility. 30.7.1. vdo The vdo utility manages both the kvdo and UDS components of VDO. Synopsis Sub-Commands Table 30.4. VDO Sub-Commands Sub-Command Description create Creates a VDO volume and its associated index and makes it available. If −−activate=disabled is specified the VDO volume is created but not made available. Will not overwrite an existing file system or formatted VDO volume unless −−force is given. This command must be run with root privileges. Applicable options include: --name= volume (required) --device= device (required) --activate={enabled | disabled} --indexMem= gigabytes --blockMapCacheSize= megabytes --blockMapPeriod= period --compression={enabled | disabled} --confFile= file --deduplication={enabled | disabled} --emulate512={enabled | disabled} --sparseIndex={enabled | disabled} --vdoAckThreads= thread count --vdoBioRotationInterval= I/O count --vdoBioThreads= thread count --vdoCpuThreads= thread count --vdoHashZoneThreads= thread count --vdoLogicalThreads= thread count --vdoLogLevel= level --vdoLogicalSize= megabytes --vdoPhysicalThreads= thread count --readCache={enabled | disabled} --readCacheSize= megabytes --vdoSlabSize= megabytes --verbose --writePolicy={ auto | sync | async } --logfile=pathname remove Removes one or more stopped VDO volumes and associated indexes. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --force --verbose --logfile=pathname start Starts one or more stopped, activated VDO volumes and associated services. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --forceRebuild --verbose --logfile=pathname stop Stops one or more running VDO volumes and associated services. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --force --verbose --logfile=pathname activate Activates one or more VDO volumes. Activated volumes can be started using the start command. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --logfile=pathname --verbose deactivate Deactivates one or more VDO volumes. Deactivated volumes cannot be started by the start command. Deactivating a currently running volume does not stop it. Once stopped a deactivated VDO volume must be activated before it can be started again. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --verbose --logfile=pathname status Reports VDO system and volume status in YAML format. This command does not require root privileges though information will be incomplete if run without. Applicable options include: { --name= volume | --all } (required) --confFile= file --verbose --logfile=pathname See Table 30.6, "VDO Status Output" for the output provided. list Displays a list of started VDO volumes. If −−all is specified it displays both started and non‐started volumes. This command must be run with root privileges. 
Applicable options include: --all --confFile= file --logfile=pathname --verbose modify Modifies configuration parameters of one or all VDO volumes. Changes take effect the next time the VDO device is started; already-running devices are not affected. Applicable options include: { --name= volume | --all } (required) --blockMapCacheSize= megabytes --blockMapPeriod= period --confFile= file --vdoAckThreads= thread count --vdoBioThreads= thread count --vdoCpuThreads= thread count --vdoHashZoneThreads= thread count --vdoLogicalThreads= thread count --vdoPhysicalThreads= thread count --readCache={enabled | disabled} --readCacheSize= megabytes --verbose --logfile=pathname changeWritePolicy Modifies the write policy of one or all running VDO volumes. This command must be run with root privileges. { --name= volume | --all } (required) --writePolicy={ auto | sync | async } (required) --confFile= file --logfile=pathname --verbose enableDeduplication Enables deduplication on one or more VDO volumes. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --verbose --logfile=pathname disableDeduplication Disables deduplication on one or more VDO volumes. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --verbose --logfile=pathname enableCompression Enables compression on one or more VDO volumes. If the VDO volume is running, this takes effect immediately. If the VDO volume is not running, compression will be enabled the next time the VDO volume is started. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --verbose --logfile=pathname disableCompression Disables compression on one or more VDO volumes. If the VDO volume is running, this takes effect immediately. If the VDO volume is not running, compression will be disabled the next time the VDO volume is started. This command must be run with root privileges. Applicable options include: { --name= volume | --all } (required) --confFile= file --verbose --logfile=pathname growLogical Adds logical space to a VDO volume. The volume must exist and must be running. This command must be run with root privileges. Applicable options include: --name= volume (required) --vdoLogicalSize= megabytes (required) --confFile= file --verbose --logfile=pathname growPhysical Adds physical space to a VDO volume. The volume must exist and must be running. This command must be run with root privileges. Applicable options include: --name= volume (required) --confFile= file --verbose --logfile=pathname printConfigFile Prints the configuration file to stdout . This command requires root privileges. Applicable options include: --confFile= file --logfile=pathname --verbose Options Table 30.5. VDO Options Option Description --indexMem= gigabytes Specifies the amount of UDS server memory in gigabytes; the default size is 1 GB. The special decimal values 0.25, 0.5, 0.75 can be used, as can any positive integer. --sparseIndex={enabled | disabled} Enables or disables sparse indexing. The default is disabled . --all Indicates that the command should be applied to all configured VDO volumes. May not be used with --name . --blockMapCacheSize= megabytes Specifies the amount of memory allocated for caching block map pages; the value must be a multiple of 4096.
Using a value with a B (ytes), K (ilobytes), M (egabytes), G (igabytes), T (erabytes), P (etabytes) or E (xabytes) suffix is optional. If no suffix is supplied, the value will be interpreted as megabytes. The default is 128M; the value must be at least 128M and less than 16T. Note that there is a memory overhead of 15%. --blockMapPeriod= period A value between 1 and 16380 which determines the number of block map updates which may accumulate before cached pages are flushed to disk. Higher values decrease recovery time after a crash at the expense of decreased performance during normal operation. The default value is 16380. Speak with your Red Hat representative before tuning this parameter. --compression={enabled | disabled} Enables or disables compression within the VDO device. The default is enabled. Compression may be disabled if necessary to maximize performance or to speed processing of data that is unlikely to compress. --confFile= file Specifies an alternate configuration file. The default is /etc/vdoconf.yml . --deduplication={enabled | disabled} Enables or disables deduplication within the VDO device. The default is enabled . Deduplication may be disabled in instances where data is not expected to have good deduplication rates but compression is still desired. --emulate512={enabled | disabled} Enables 512-byte block device emulation mode. The default is disabled . --force Unmounts mounted file systems before stopping a VDO volume. --forceRebuild Forces an offline rebuild before starting a read-only VDO volume so that it may be brought back online and made available. This option may result in data loss or corruption. --help Displays documentation for the vdo utility. --logfile=pathname Specify the file to which this script's log messages are directed. Warning and error messages are always logged to syslog as well. --name= volume Operates on the specified VDO volume. May not be used with --all . --device= device Specifies the absolute path of the device to use for VDO storage. --activate={enabled | disabled} The argument disabled indicates that the VDO volume should only be created. The volume will not be started or enabled. The default is enabled . --vdoAckThreads= thread count Specifies the number of threads to use for acknowledging completion of requested VDO I/O operations. The default is 1; the value must be at least 0 and less than or equal to 100. --vdoBioRotationInterval= I/O count Specifies the number of I/O operations to enqueue for each bio-submission thread before directing work to the . The default is 64; the value must be at least 1 and less than or equal to 1024. --vdoBioThreads= thread count Specifies the number of threads to use for submitting I/O operations to the storage device. Minimum is 1; maximum is 100. The default is 4; the value must be at least 1 and less than or equal to 100. --vdoCpuThreads= thread count Specifies the number of threads to use for CPU- intensive work such as hashing or compression. The default is 2; the value must be at least 1 and less than or equal to 100. --vdoHashZoneThreads= thread count Specifies the number of threads across which to subdivide parts of the VDO processing based on the hash value computed from the block data. The default is 1 ; the value must be at least 0 and less than or equal to 100. vdoHashZoneThreads , vdoLogicalThreads and vdoPhysicalThreads must be either all zero or all non-zero. 
--vdoLogicalThreads= thread count Specifies the number of threads across which to subdivide parts of the VDO processing based on the hash value computed from the block data. The value must be at least 0 and less than or equal to 100. A logical thread count of 9 or more will require explicitly specifying a sufficiently large block map cache size, as well. vdoHashZoneThreads , vdoLogicalThreads , and vdoPhysicalThreads must be either all zero or all non‐zero. The default is 1. --vdoLogLevel= level Specifies the VDO driver log level: critical , error , warning , notice , info , or debug . Levels are case sensitive; the default is info . --vdoLogicalSize= megabytes Specifies the logical VDO volume size in megabytes. Using a value with a S (ectors), B (ytes), K (ilobytes), M (egabytes), G (igabytes), T (erabytes), P (etabytes) or E (xabytes) suffix is optional. Used for over- provisioning volumes. This defaults to the size of the storage device. --vdoPhysicalThreads= thread count Specifies the number of threads across which to subdivide parts of the VDO processing based on physical block addresses. The value must be at least 0 and less than or equal to 16. Each additional thread after the first will use an additional 10 MB of RAM. vdoPhysicalThreads , vdoHashZoneThreads , and vdoLogicalThreads must be either all zero or all non‐zero. The default is 1. --readCache={enabled | disabled} Enables or disables the read cache within the VDO device. The default is disabled . The cache should be enabled if write workloads are expected to have high levels of deduplication, or for read intensive workloads of highly compressible data. --readCacheSize= megabytes Specifies the extra VDO device read cache size in megabytes. This space is in addition to a system- defined minimum. Using a value with a B (ytes), K (ilobytes), M (egabytes), G (igabytes), T (erabytes), P (etabytes) or E (xabytes) suffix is optional. The default is 0M. 1.12 MB of memory will be used per MB of read cache specified, per bio thread. --vdoSlabSize= megabytes Specifies the size of the increment by which a VDO is grown. Using a smaller size constrains the total maximum physical size that can be accommodated. Must be a power of two between 128M and 32G; the default is 2G. Using a value with a S (ectors), B (ytes), K (ilobytes), M (egabytes), G (igabytes), T (erabytes), P (etabytes) or E (xabytes) suffix is optional. If no suffix is used, the value will be interpreted as megabytes. --verbose Prints commands before executing them. --writePolicy={ auto | sync | async } Specifies the write policy: auto : Select sync or async based on the storage layer underneath VDO. If a write back cache is present, async will be chosen. Otherwise, sync will be chosen. sync : Writes are acknowledged only after data is stably written. This is the default policy. This policy is not supported if the underlying storage is not also synchronous. async : Writes are acknowledged after data has been cached for writing to stable storage. Data which has not been flushed is not guaranteed to persist in this mode. The status subcommand returns the following information in YAML format, divided into keys as follows: Table 30.6. VDO Status Output Key Description VDO Status Information in this key covers the name of the host and date and time at which the status inquiry is being made. Parameters reported in this area include: Node The host name of the system on which VDO is running. Date The date and time at which the vdo status command is run. 
Kernel Module Information in this key covers the configured kernel. Loaded Whether or not the kernel module is loaded (True or False). Version Information Information on the version of kvdo that is configured. Configuration Information in this key covers the location and status of the VDO configuration file. File Location of the VDO configuration file. Last modified The last-modified date of the VDO configuration file. VDOs Provides configuration information for all VDO volumes. Parameters reported for each VDO volume include: Block size The block size of the VDO volume, in bytes. 512 byte emulation Indicates whether the volume is running in 512-byte emulation mode. Enable deduplication Whether deduplication is enabled for the volume. Logical size The logical size of the VDO volume. Physical size The size of a VDO volume's underlying physical storage. Write policy The configured value of the write policy (sync or async). VDO Statistics Output of the vdostats utility. 30.7.2. vdostats The vdostats utility displays statistics for each configured (or specified) device in a format similar to the Linux df utility. The output of the vdostats utility may be incomplete if it is not run with root privileges. Synopsis Options Table 30.7. vdostats Options Option Description --verbose Displays the utilization and block I/O (bios) statistics for one (or more) VDO devices. See Table 30.9, "vdostats --verbose Output" for details. --human-readable Displays block values in readable form (Base 2: 1 KB = 2 10 bytes = 1024 bytes). --si The --si option modifies the output of the --human-readable option to use SI units (Base 10: 1 KB = 10 3 bytes = 1000 bytes). If the --human-readable option is not supplied, the --si option has no effect. --all This option is only for backwards compatibility. It is now equivalent to the --verbose option. --version Displays the vdostats version. device ... Specifies one or more specific volumes to report on. If this argument is omitted, vdostats will report on all devices. Output The following example shows sample output if no options are provided, which is described in Table 30.8, "Default vdostats Output" : Table 30.8. Default vdostats Output Item Description Device The path to the VDO volume. 1K-blocks The total number of 1K blocks allocated for a VDO volume (= physical volume size * block size / 1024) Used The total number of 1K blocks used on a VDO volume (= physical blocks used * block size / 1024) Available The total number of 1K blocks available on a VDO volume (= physical blocks free * block size / 1024) Use% The percentage of physical blocks used on a VDO volume (= used blocks / allocated blocks * 100) Space Saving% The percentage of physical blocks saved on a VDO volume (= [logical blocks used - physical blocks used] / logical blocks used) The --human-readable option converts block counts into conventional units (1 KB = 1024 bytes): The --human-readable and --si options convert block counts into SI units (1 KB = 1000 bytes): The --verbose ( Table 30.9, "vdostats --verbose Output" ) option displays VDO device statistics in YAML format for one (or all) VDO devices. Statistics printed in bold in Table 30.9, "vdostats --verbose Output" will continue to be reported in future releases. The remaining fields are primarily intended for software support and are subject to change in future releases; management tools should not rely upon them. Management tools should also not rely upon the order in which any of the statistics are reported. Table 30.9. 
vdostats --verbose Output Item Description Version The version of these statistics. Release version The release version of the VDO. Data blocks used The number of physical blocks currently in use by a VDO volume to store data. Overhead blocks used The number of physical blocks currently in use by a VDO volume to store VDO metadata. Logical blocks used The number of logical blocks currently mapped. Physical blocks The total number of physical blocks allocated for a VDO volume. Logical blocks The maximum number of logical blocks that can be mapped by a VDO volume. 1K-blocks The total number of 1K blocks allocated for a VDO volume (= physical volume size * block size / 1024) 1K-blocks used The total number of 1K blocks used on a VDO volume (= physical blocks used * block size / 1024) 1K-blocks available The total number of 1K blocks available on a VDO volume (= physical blocks free * block size / 1024) Used percent The percentage of physical blocks used on a VDO volume (= used blocks / allocated blocks * 100) Saving percent The percentage of physical blocks saved on a VDO volume (= [logical blocks used - physical blocks used] / logical blocks used) Block map cache size The size of the block map cache, in bytes. Write policy The active write policy (sync or async). This is configured via vdo changeWritePolicy --writePolicy=auto|sync|async . Block size The block size of a VDO volume, in bytes. Completed recovery count The number of times a VDO volume has recovered from an unclean shutdown. Read-only recovery count The number of times a VDO volume has been recovered from read-only mode (via vdo start --forceRebuild ). Operating mode Indicates whether a VDO volume is operating normally, is in recovery mode, or is in read-only mode. Recovery progress (%) Indicates online recovery progress, or N/A if the volume is not in recovery mode. Compressed fragments written The number of compressed fragments that have been written since the VDO volume was last restarted. Compressed blocks written The number of physical blocks of compressed data that have been written since the VDO volume was last restarted. Compressed fragments in packer The number of compressed fragments being processed that have not yet been written. Slab count The total number of slabs. Slabs opened The total number of slabs from which blocks have ever been allocated. Slabs reopened The number of times slabs have been re-opened since the VDO was started. Journal disk full count The number of times a request could not make a recovery journal entry because the recovery journal was full. Journal commits requested count The number of times the recovery journal requested slab journal commits. Journal entries batching The number of journal entry writes started minus the number of journal entries written. Journal entries started The number of journal entries which have been made in memory. Journal entries writing The number of journal entries in submitted writes minus the number of journal entries committed to storage. Journal entries written The total number of journal entries for which a write has been issued. Journal entries committed The number of journal entries written to storage. Journal blocks batching The number of journal block writes started minus the number of journal blocks written. Journal blocks started The number of journal blocks which have been touched in memory. Journal blocks writing The number of journal blocks written (with metadatata in active memory) minus the number of journal blocks committed. 
Journal entries written The total number of journal blocks for which a write has been issued. Journal blocks committed The number of journal blocks written to storage. Slab journal disk full count The number of times an on-disk slab journal was full. Slab journal flush count The number of times an entry was added to a slab journal that was over the flush threshold. Slab journal blocked count The number of times an entry was added to a slab journal that was over the blocking threshold. Slab journal blocks written The number of slab journal block writes issued. Slab journal tail busy count The number of times write requests blocked waiting for a slab journal write. Slab summary blocks written The number of slab summary block writes issued. Reference blocks written The number of reference block writes issued. Block map dirty pages The number of dirty pages in the block map cache. Block map clean pages The number of clean pages in the block map cache. Block map free pages The number of free pages in the block map cache. Block map failed pages The number of block map cache pages that have write errors. Block map incoming pages The number of block map cache pages that are being read into the cache. Block map outgoing pages The number of block map cache pages that are being written. Block map cache pressure The number of times a free page was not available when needed. Block map read count The total number of block map page reads. Block map write count The total number of block map page writes. Block map failed reads The total number of block map read errors. Block map failed writes The total number of block map write errors. Block map reclaimed The total number of block map pages that were reclaimed. Block map read outgoing The total number of block map reads for pages that were being written. Block map found in cache The total number of block map cache hits. Block map discard required The total number of block map requests that required a page to be discarded. Block map wait for page The total number of requests that had to wait for a page. Block map fetch required The total number of requests that required a page fetch. Block map pages loaded The total number of page fetches. Block map pages saved The total number of page saves. Block map flush count The total number of flushes issued by the block map. Invalid advice PBN count The number of times the index returned invalid advice No space error count. The number of write requests which failed due to the VDO volume being out of space. Read only error count The number of write requests which failed due to the VDO volume being in read-only mode. Instance The VDO instance. 512 byte emulation Indicates whether 512 byte emulation is on or off for the volume. Current VDO IO requests in progress. The number of I/O requests the VDO is current processing. Maximum VDO IO requests in progress The maximum number of simultaneous I/O requests the VDO has processed. Current dedupe queries The number of deduplication queries currently in flight. Maximum dedupe queries The maximum number of in-flight deduplication queries. Dedupe advice valid The number of times deduplication advice was correct. Dedupe advice stale The number of times deduplication advice was incorrect. Dedupe advice timeouts The number of times deduplication queries timed out. Flush out The number of flush requests submitted by VDO to the underlying storage. Bios in... Bios in partial... Bios out... Bios meta... Bios journal... Bios page cache... Bios out completed... Bio meta completed... 
Bios journal completed... Bios page cache completed... Bios acknowledged... Bios acknowledged partial... Bios in progress... These statistics count the number of bios in each category with a given flag. The categories are: bios in: The number of block I/O requests received by VDO. bios in partial: The number of partial block I/O requests received by VDO. Applies only to 512-byte emulation mode. bios out: The number of non-metadata block I/O requests submitted by VDO to the storage device. bios meta: The number of metadata block I/O requests submitted by VDO to the storage device. bios journal: The number of recovery journal block I/O requests submitted by VDO to the storage device. bios page cache: The number of block map I/O requests submitted by VDO to the storage device. bios out completed: The number of non-metadata block I/O requests completed by the storage device. bios meta completed: The number of metadata block I/O requests completed by the storage device. bios journal completed: The number of recovery journal block I/O requests completed by the storage device. bios page cache completed: The number of block map I/O requests completed by the storage device. bios acknowledged: The number of block I/O requests acknowledged by VDO. bios acknowledged partial: The number of partial block I/O requests acknowledged by VDO. Applies only to 512-byte emulation mode. bios in progress: The number of bios submitted to the VDO which have not yet been acknowledged. There are three types of flags: read: The number of non-write bios (bios without the REQ_WRITE flag set) write: The number of write bios (bios with the REQ_WRITE flag set) discard: The number of bios with a REQ_DISCARD flag set Read cache accesses The number of times VDO searched the read cache. Read cache hits The number of read cache hits.
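To tie the subcommands and options above together, the following is a minimal example session; the volume name my_vdo, the backing device /dev/sdb, and the sizes are placeholder values chosen for illustration:

# Create a VDO volume with 10T of logical (thinly provisioned) space
vdo create --name=my_vdo --device=/dev/sdb --vdoLogicalSize=10T --writePolicy=auto
# Report configuration and runtime state in YAML
vdo status --name=my_vdo
# Grow the logical size of the running volume
vdo growLogical --name=my_vdo --vdoLogicalSize=20T
# Show utilization and space savings in human-readable units
vdostats --human-readable /dev/mapper/my_vdo
# Stop the volume, unmounting any mounted file systems first
vdo stop --name=my_vdo --force

All of the flags used here are described in Table 30.4, "VDO Sub-Commands" and Table 30.5, "VDO Options"; the vdostats output matches the formats shown above.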
[ "vdo { activate | changeWritePolicy | create | deactivate | disableCompression | disableDeduplication | enableCompression | enableDeduplication | growLogical | growPhysical | list | modify | printConfigFile | remove | start | status | stop } [ options... ]", "vdostats [ --verbose | --human-readable | --si | --all ] [ --version ] [ device ...]", "Device 1K-blocks Used Available Use% Space Saving% /dev/mapper/my_vdo 1932562432 427698104 1504864328 22% 21%", "Device Size Used Available Use% Space Saving% /dev/mapper/my_vdo 1.8T 407.9G 1.4T 22% 21%", "Device Size Used Available Use% Space Saving% /dev/mapper/my_vdo 2.0T 438G 1.5T 22% 21%" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-ig-commands
15.7. Promoting a Consumer or Hub to a Supplier
15.7. Promoting a Consumer or Hub to a Supplier In certain situations, such as when a supplier in a replication topology is unavailable due to a hardware outage, administrators want to promote a read-only consumer or hub to a writable supplier. 15.7.1. Promoting a Consumer or Hub to a Supplier Using the Command Line For example, to promote the server.example.com host to a supplier for the dc=example,dc=com suffix: Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Optionally, you can now configure the new supplier to replicate changes for the suffix to other servers in the topology. For details about configuring replication, see: Section 15.2.1, "Setting up Single-supplier Replication Using the Command Line" Section 15.3.1, "Setting up Multi-supplier Replication Using the Command Line" Section 15.4.1, "Setting up Cascading Replication Using the Command Line" 15.7.2. Promoting a Consumer or Hub to a Supplier Using the Web Console For example, to promote a consumer or hub to a supplier for the dc=example,dc=com suffix: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Replication menu and select the Configuration entry. Select the dc=example,dc=com suffix. Click Promote . Select Supplier in the Replication Role field and enter a replica ID. Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Select Yes, I am sure . Click Change Role to confirm the new role. Optionally, you can now configure the new supplier to replicate changes for the suffix to other servers in the topology. For details about configuring replication, see: Section 15.2.2, "Setting up Single-supplier Replication Using the Web Console" Section 15.3.2, "Setting up Multi-supplier Replication Using the Web Console" Section 15.4.2, "Setting up Cascading Replication Using the Web Console"
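To confirm that the promotion succeeded before configuring replication agreements, you can query the replica configuration for the suffix. A minimal check, assuming your version of dsconf provides the replication get subcommand (attribute names in the output can vary slightly between releases):

dsconf -D "cn=Directory Manager" ldap://server.example.com replication get --suffix="dc=example,dc=com"

In the output, a writable supplier is indicated by the replica role and type attributes and by the replica ID you assigned during the promotion (2 in the command-line example above).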
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication promote --suffix=\" dc=example,dc=com \" --newrole=\"supplier\" --replica-id= 2" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/promoting_a_consumer_or_hub_to_a_supplier
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/deploying_your_red_hat_build_of_quarkus_applications_to_openshift_container_platform/proc_providing-feedback-on-red-hat-documentation_quarkus-openshift
High Availability Add-On Reference
High Availability Add-On Reference Red Hat Enterprise Linux 7 Reference guide for configuration and management of the High Availability Add-On Steven Levine Red Hat Customer Content Services [email protected]
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/index
Chapter 5. address
Chapter 5. address This chapter describes the commands under the address command. 5.1. address group create Create a new Address Group Usage: Table 5.1. Positional arguments Value Summary <name> New address group name Table 5.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> New address group description --address <ip-address> Ip address or cidr (repeat option to set multiple addresses) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 5.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.2. address group delete Delete address group(s) Usage: Table 5.7. Positional arguments Value Summary <address-group> Address group(s) to delete (name or id) Table 5.8. Command arguments Value Summary -h, --help Show this help message and exit 5.3. address group list List address groups Usage: Table 5.9. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address groups of given name in output --project <project> List address groups according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 5.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 5.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 5.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.4. address group set Set address group properties Usage: Table 5.14. Positional arguments Value Summary <address-group> Address group to modify (name or id) Table 5.15. 
Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set address group name --description <description> Set address group description --address <ip-address> Ip address or cidr (repeat option to set multiple addresses) 5.5. address group show Display address group details Usage: Table 5.16. Positional arguments Value Summary <address-group> Address group to display (name or id) Table 5.17. Command arguments Value Summary -h, --help Show this help message and exit Table 5.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.6. address group unset Unset address group properties Usage: Table 5.22. Positional arguments Value Summary <address-group> Address group to modify (name or id) Table 5.23. Command arguments Value Summary -h, --help Show this help message and exit --address <ip-address> Ip address or cidr (repeat option to unset multiple addresses) 5.7. address scope create Create a new Address Scope Usage: Table 5.24. Positional arguments Value Summary <name> New address scope name Table 5.25. Command arguments Value Summary -h, --help Show this help message and exit --ip-version {4,6} Ip version (default is 4) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share Share the address scope between projects --no-share Do not share the address scope between projects (default) Table 5.26. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.28. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.29. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.8. address scope delete Delete address scope(s) Usage: Table 5.30. Positional arguments Value Summary <address-scope> Address scope(s) to delete (name or id) Table 5.31. Command arguments Value Summary -h, --help Show this help message and exit 5.9. address scope list List address scopes Usage: Table 5.32. 
Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address scopes of given name in output --ip-version <ip-version> List address scopes of given ip version networks (4 or 6) --project <project> List address scopes according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List address scopes shared between projects --no-share List address scopes not shared between projects Table 5.33. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 5.34. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 5.35. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.36. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.10. address scope set Set address scope properties Usage: Table 5.37. Positional arguments Value Summary <address-scope> Address scope to modify (name or id) Table 5.38. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set address scope name --share Share the address scope between projects --no-share Do not share the address scope between projects 5.11. address scope show Display address scope details Usage: Table 5.39. Positional arguments Value Summary <address-scope> Address scope to display (name or id) Table 5.40. Command arguments Value Summary -h, --help Show this help message and exit Table 5.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
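As a short illustration of how these commands combine (the names prod-scope and dba-hosts and the 192.0.2.0/24 and 198.51.100.15 addresses are placeholders):

# Create a shared IPv4 address scope
openstack address scope create --ip-version 4 --share prod-scope
# Create an address group holding a CIDR and a single host address
openstack address group create --description "Database hosts" --address 192.0.2.0/24 --address 198.51.100.15 dba-hosts
# Review the group, then drop the single host address again
openstack address group show dba-hosts
openstack address group unset --address 198.51.100.15 dba-hosts

Each option used here is listed in the tables above; repeat --address to manage several addresses in one call.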
[ "openstack address group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--address <ip-address>] [--project <project>] [--project-domain <project-domain>] <name>", "openstack address group delete [-h] <address-group> [<address-group> ...]", "openstack address group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--project <project>] [--project-domain <project-domain>]", "openstack address group set [-h] [--name <name>] [--description <description>] [--address <ip-address>] <address-group>", "openstack address group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-group>", "openstack address group unset [-h] [--address <ip-address>] <address-group>", "openstack address scope create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--ip-version {4,6}] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] <name>", "openstack address scope delete [-h] <address-scope> [<address-scope> ...]", "openstack address scope list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>] [--share | --no-share]", "openstack address scope set [-h] [--name <name>] [--share | --no-share] <address-scope>", "openstack address scope show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-scope>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/address
Managing and allocating storage resources
Managing and allocating storage resources Red Hat OpenShift Data Foundation 4.17 Instructions on how to allocate storage to core services and hosted applications in OpenShift Data Foundation, including snapshot and clone. Red Hat Storage Documentation Team Abstract This document explains how to allocate storage to core services and hosted applications in Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Chapter 1. Overview Read this document to understand how to create, configure, and allocate storage to core services or hosted applications in Red Hat OpenShift Data Foundation. Chapter 2, Storage classes shows you how to create custom storage classes. Chapter 3, Block pools provides you with information on how to create, update and delete block pools. Chapter 4, Configure storage for OpenShift Container Platform services shows you how to use OpenShift Data Foundation for core OpenShift Container Platform services. Chapter 6, Backing OpenShift Container Platform applications with OpenShift Data Foundation provides information about how to configure OpenShift Container Platform applications to use OpenShift Data Foundation. Adding file and object storage to an existing external OpenShift Data Foundation cluster Chapter 8, How to use dedicated worker nodes for Red Hat OpenShift Data Foundation provides information about how to use dedicated worker nodes for Red Hat OpenShift Data Foundation. Chapter 9, Managing Persistent Volume Claims provides information about managing Persistent Volume Claim requests, and automating the fulfillment of those requests. Chapter 10, Reclaiming space on target volumes shows you how to reclaim the actual available storage space. Chapter 12, Volume Snapshots shows you how to create, restore, and delete volume snapshots. Chapter 13, Volume cloning shows you how to create volume clones. Chapter 14, Managing container storage interface (CSI) component placements provides information about setting tolerations to bring up container storage interface component on the nodes. Chapter 2. Storage classes The OpenShift Data Foundation operator installs a default storage class depending on the platform in use. This default storage class is owned and controlled by the operator and it cannot be deleted or modified. However, you can create custom storage classes to use other storage resources or to offer a different behavior to applications. Note Custom storage classes are not supported for external mode OpenShift Data Foundation clusters. 2.1. Creating storage classes and pools You can create a storage class using an existing pool or you can create a new pool for the storage class while creating it. 
Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses . Click Create Storage Class . Enter the storage class Name and Description . Reclaim Policy is set to Delete as the default option. Use this setting. If you change the reclaim policy to Retain in the storage class, the persistent volume (PV) remains in Released state even after deleting the persistent volume claim (PVC). Volume binding mode is set to WaitForConsumer as the default option. If you choose the Immediate option, then the PV gets created immediately when creating the PVC. Select RBD or CephFS Provisioner as the plugin for provisioning the persistent volumes. Choose a Storage system for your workloads. Select an existing Storage Pool from the list or create a new pool. Note The 2-way replication data protection policy is only supported for the non-default RBD pool. 2-way replication can be used by creating an additional pool. To know about Data Availability and Integrity considerations for replica 2 pools, see Knowledgebase Customer Solution Article . Create new pool Click Create New Pool . Enter Pool name . Choose 2-way-Replication or 3-way-Replication as the Data Protection Policy. Select Enable compression if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression will not be compressed. Click Create to create the new storage pool. Click Finish after the pool is created. Optional: Select Enable Encryption checkbox. Click Create to create the storage class. 2.2. Storage class for persistent volume encryption Persistent volume (PV) encryption guarantees isolation and confidentiality between tenants (applications). Before you can use PV encryption, you must create a storage class for PV encryption. Persistent volume encryption is only available for RBD PVs. OpenShift Data Foundation supports storing encryption passphrases in HashiCorp Vault and Thales CipherTrust Manager. You can create an encryption enabled storage class using an external key management system (KMS) for persistent volume encryption. You need to configure access to the KMS before creating the storage class. Note For PV encryption, you must have a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . 2.2.1. Access configuration for Key Management System (KMS) Based on your use case, you need to configure access to KMS using one of the following ways: Using vaulttokens : allows users to authenticate using a token Using Thales CipherTrust Manager : uses Key Management Interoperability Protocol (KMIP) Using vaulttenantsa (Technology Preview): allows users to use serviceaccounts to authenticate with Vault Important Accessing the KMS using vaulttenantsa is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 2.2.1.1. 
Configuring access to KMS using vaulttokens Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy with a token exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Procedure Create a secret in the tenant's namespace. In the OpenShift Container Platform web console, navigate to Workloads -> Secrets . Click Create -> Key/value secret . Enter Secret Name as ceph-csi-kms-token . Enter Key as token . Enter Value . It is the token from Vault. You can either click Browse to select and upload the file containing the token or enter the token directly in the text box. Click Create . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. 2.2.1.2. Configuring access to KMS using Thales CipherTrust Manager Prerequisites Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token be navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both meta-data and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Procedure To create a key to act as the Key Encryption Key (KEK) for storageclass encryption, follow the steps below: Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 2.2.1.3. Configuring access to KMS using vaulttenantsa Prerequisites The OpenShift Data Foundation cluster is in Ready state. On the external key management system (KMS), Ensure that a policy exists and the key value backend path in Vault is enabled. Ensure that you are using signed certificates on your Vault servers. Create the following serviceaccount in the tenant namespace as shown below: Procedure You need to configure the Kubernetes authentication method before OpenShift Data Foundation can authenticate with and start using Vault . The following instructions create and configure serviceAccount , ClusterRole , and ClusterRoleBinding required to allow OpenShift Data Foundation to authenticate with Vault . Apply the following YAML to your Openshift cluster: Create a secret for serviceaccount token and CA certificate. Get the token and the CA certificate from the secret. Retrieve the OpenShift cluster endpoint. 
Use the information collected in the steps to set up the kubernetes authentication method in Vault as shown: Create a role in Vault for the tenant namespace: csi-kubernetes is the default role name that OpenShift Data Foundation looks for in Vault. The default service account name in the tenant namespace in the OpenShift Data Foundation cluster is ceph-csi-vault-sa . These default values can be overridden by creating a ConfigMap in the tenant namespace. For more information about overriding the default names, see Overriding Vault connection details using tenant ConfigMap . Sample YAML To create a storageclass that uses the vaulttenantsa method for PV encryption, you must either edit the existing ConfigMap or create a ConfigMap named csi-kms-connection-details that will hold all the information needed to establish the connection with Vault. The sample yaml given below can be used to update or create the csi-kms-connection-detail ConfigMap: encryptionKMSType Set to vaulttenantsa to use service accounts for authentication with vault. vaultAddress The hostname or IP address of the vault server with the port number. vaultTLSServerName (Optional) The vault TLS server name vaultAuthPath (Optional) The path where kubernetes auth method is enabled in Vault. The default path is kubernetes . If the auth method is enabled in a different path other than kubernetes , this variable needs to be set as "/v1/auth/<path>/login" . vaultAuthNamespace (Optional) The Vault namespace where kubernetes auth method is enabled. vaultNamespace (Optional) The Vault namespace where the backend path being used to store the keys exists vaultBackendPath The backend path in Vault where the encryption keys will be stored vaultCAFromSecret The secret in the OpenShift Data Foundation cluster containing the CA certificate from Vault vaultClientCertFromSecret The secret in the OpenShift Data Foundation cluster containing the client certificate from Vault vaultClientCertKeyFromSecret The secret in the OpenShift Data Foundation cluster containing the client private key from Vault tenantSAName (Optional) The service account name in the tenant namespace. The default value is ceph-csi-vault-sa . If a different name is to be used, this variable has to be set accordingly. 2.2.2. Creating a storage class for persistent volume encryption Prerequisites Based on your use case, you must ensure to configure access to KMS for one of the following: Using vaulttokens : Ensure to configure access as described in Configuring access to KMS using vaulttokens Using vaulttenantsa (Technology Preview): Ensure to configure access as described in Configuring access to KMS using vaulttenantsa Using Thales CipherTrust Manager (using KMIP): Ensure to configure access as described in Configuring access to KMS using Thales CipherTrust Manager (For users on Azure platform only) Using Azure Vault: Ensure to set up client authentication and fetch the client credentials from Azure using the following steps: Create Azure Vault. For more information, see Quickstart: Create a key vault using the Azure portal in Microsoft product documentation. Create Service Principal with certificate based authentication. For more information, see Create an Azure service principal with Azure CLI in Microsoft product documentation. Set Azure Key Vault role based access control (RBAC). For more information, see Enable Azure RBAC permissions on Key Vault . Procedure In the OpenShift Web Console, navigate to Storage -> StorageClasses . Click Create Storage Class . 
Enter the storage class Name and Description . Select either Delete or Retain for the Reclaim Policy . By default, Delete is selected. Select either Immediate or WaitForFirstConsumer as the Volume binding mode . WaitForFirstConsumer is set as the default option. Select RBD Provisioner openshift-storage.rbd.csi.ceph.com which is the plugin used for provisioning the persistent volumes. Select the Storage Pool where the volume data is stored from the list or create a new pool. Select the Enable encryption checkbox. Choose one of the following options to set the KMS connection details: Choose existing KMS connection : Select an existing KMS connection from the drop-down list. The list is populated from the connection details available in the csi-kms-connection-details ConfigMap. Select the Provider from the drop down. Select the Key service for the given provider from the list. Create new KMS connection : This is applicable for vaulttokens and Thales CipherTrust Manager (using KMIP) only. Select one of the following Key Management Service Providers and provide the required details. Vault Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name . In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example, Address : 123.34.3.2, Port : 5696. Upload the Client Certificate , CA certificate , and Client Private Key . Enter the Unique Identifier for the key to be used for encryption and decryption, generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Azure Key Vault (Only for Azure users on Azure platform) For information about setting up client authentication and fetching the client credentials, see the Prerequisites in Creating an OpenShift Data Foundation cluster section of the Deploying OpenShift Data Foundation using Microsoft Azure guide. Enter a unique Connection name for the key management service within the project. Enter Azure Vault URL . Enter Client ID . Enter Tenant ID . Upload the Certificate file in .PEM format; the certificate file must include a client certificate and a private key. Click Save . Click Create . Edit the ConfigMap to add the vaultBackend parameter if the HashiCorp Vault setup does not allow automatic detection of the Key/Value (KV) secret engine API version used by the backend path. Note vaultBackend is an optional parameter that is added to the configmap to specify the version of the KV secret engine API associated with the backend path. Ensure that the value matches the KV secret engine API version that is set for the backend path, otherwise it might result in a failure during persistent volume claim (PVC) creation. Identify the encryptionKMSID being used by the newly created storage class. On the OpenShift Web Console, navigate to Storage -> Storage Classes . Click the Storage class name -> YAML tab.
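The YAML tab shows the storage class parameters, including the encryptionKMSID. The following trimmed sketch shows roughly what an encrypted RBD storage class looks like; the class name, pool, and the 1-vault identifier are illustrative, and the secret-related CSI parameters that the console generates are omitted.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-storagecluster-ceph-rbd-encrypted   # illustrative name
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  clusterID: openshift-storage
  pool: ocs-storagecluster-cephblockpool
  encrypted: "true"
  encryptionKMSID: 1-vault                      # the value to capture
reclaimPolicy: Delete
allowVolumeExpansion: true
```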
Capture the encryptionKMSID being used by the storage class (see the sketch above). On the OpenShift Web Console, navigate to Workloads -> ConfigMaps . To view the KMS connection details, click csi-kms-connection-details . Edit the ConfigMap. Click Action menu (...) -> Edit ConfigMap . Add the vaultBackend parameter depending on the backend that is configured for the previously identified encryptionKMSID . You can assign kv for KV secret engine API, version 1 and kv-v2 for KV secret engine API, version 2. Click Save . Next steps The storage class can be used to create encrypted persistent volumes. For more information, see managing persistent volume claims . Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the HashiCorp product. For technical assistance with this product, contact HashiCorp . 2.2.2.1. Overriding Vault connection details using tenant ConfigMap The Vault connection details can be reconfigured per tenant by creating a ConfigMap in the OpenShift namespace with configuration options that differ from the values set in the csi-kms-connection-details ConfigMap in the openshift-storage namespace. The ConfigMap needs to be located in the tenant namespace. The values in the ConfigMap in the tenant namespace will override the values set in the csi-kms-connection-details ConfigMap for the encrypted Persistent Volumes created in that namespace. Procedure Ensure that you are in the tenant namespace. Click on Workloads -> ConfigMaps . Click on Create ConfigMap . The following is a sample yaml. The values to be overridden for the given tenant namespace can be specified under the data section as shown below: After the yaml is edited, click on Create . 2.3. Storage class with single replica You can create a storage class with a single replica to be used by your applications. This avoids redundant data copies and allows resiliency management on the application level. Warning Enabling this feature creates a single replica pool without data replication, increasing the risk of data loss, data corruption, and potential system instability if your application does not have its own replication. If any OSDs are lost, this feature requires very disruptive steps to recover. All applications can lose their data, and must be recreated in case of a failed OSD. Procedure Enable the single replica feature using the following command: Verify storagecluster is in Ready state: Example output: New cephblockpools are created for each failure domain. Verify cephblockpools are in Ready state: Example output: Verify new storage classes have been created: Example output: New OSD pods are created; 3 osd-prepare pods and 3 additional pods. Verify new OSD pods are in Running state: Example output: 2.3.1. Recovering after OSD loss from single replica When using replica 1, a storage class with a single replica, data loss is guaranteed when an OSD is lost. Lost OSDs go into a failing state. Use the following steps to recover after OSD loss. Procedure Follow these recovery steps to get your applications running again after data loss from replica 1. You first need to identify the domain where the failing OSD is. If you know which failure domain the failing OSD is in, run the following command to get the exact replica1-pool-name required for the next steps. If you do not know where the failing OSD is, skip to step 2. Example output: Copy the corresponding failure domain name for use in the next steps, then skip to step 4.
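One way to inspect the pools from the CLI is sketched below. It assumes that the per-failure-domain replica-1 pools created by this feature carry the failure domain in their names, so the pool for a given failure domain can be picked out of the listing; verify this against your cluster, as pool naming can differ.

```bash
# List the CephBlockPools; the per-failure-domain replica-1 pools appear
# alongside the default pool
oc get cephblockpools -n openshift-storage

# Confirm that a candidate pool really is a single-replica pool
oc get cephblockpool <replica1-pool-name> -n openshift-storage \
  -o jsonpath='{.spec.replicated.size}'
```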
Find the OSD pod that is in Error state or CrashLoopBackOff state to find the failing OSD: Identify the replica-1 pool that had the failed OSD. Identify the node where the failed OSD was running: Identify the failureDomainLabel for the node where the failed OSD was running: The output shows the replica-1 pool name whose OSD is failing, for example: where $failure_domain_value is the failureDomainName. Delete the replica-1 pool. Connect to the toolbox pod: Delete the replica-1 pool. Note that you have to enter the replica-1 pool name twice in the command, for example: Replace replica1-pool-name with the failure domain name identified earlier. Purge the failing OSD by following the steps in section "Replacing operational or failed storage devices" based on your platform in the Replacing devices guide. Restart the rook-ceph operator: Recreate any affected applications in that availability zone to start using the new pool with the same name. Chapter 3. Block pools The OpenShift Data Foundation operator installs a default set of storage pools depending on the platform in use. These default storage pools are owned and controlled by the operator and they cannot be deleted or modified. Note Multiple block pools are not supported for external mode OpenShift Data Foundation clusters. 3.1. Managing block pools in internal mode With OpenShift Container Platform, you can create multiple custom storage pools which map to storage classes that provide the following features: Enable applications with their own high availability to use persistent volumes with two replicas, potentially improving application performance. Save space for persistent volume claims using storage classes with compression enabled. 3.1.1. Creating a block pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the Storage pools tab. Click Create storage pool . Select Volume type as Block . Enter Pool name . Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Select Data protection policy as either 2-way Replication or 3-way Replication . Optional: Select Enable compression checkbox if you need to compress the data. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Create . 3.1.2. Updating an existing pool Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Storage pools . Click the Action Menu (...) at the end of the pool you want to update. Click Edit storage pool . Modify the form details as follows: Note Using 2-way replication data protection policy is not supported for the default pool. However, you can use 2-way replication if you are creating an additional pool. Change the Data protection policy to either 2-way Replication or 3-way Replication. Enable or disable the compression option. Enabling compression can impact application performance and might prove ineffective when data to be written is already compressed or encrypted. Data written before enabling compression is not compressed. Click Save . 3.1.3.
Deleting a pool Use this procedure to delete a pool in OpenShift Data Foundation. Prerequisites You must be logged into the OpenShift Container Platform web console as an administrator. Procedure Click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click the Storage pools tab. Click the Action Menu (...) at the end of the pool you want to delete. Click Delete Storage Pool . Click Delete to confirm the removal of the Pool. Note A pool cannot be deleted when it is bound to a PVC. You must detach all the resources before performing this activity. Note When a pool is deleted, the underlying Ceph pool is not deleted. Chapter 4. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as the following: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for the following OpenShift services that you configure: OpenShift image registry OpenShift monitoring OpenShift logging (Loki) OpenShift tracing platform (Tempo) If the storage for these critical services runs out of space, the OpenShift cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 4.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built-in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to an OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage -> Persistent Volume Claims .
Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration -> Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) -> Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 4.2. Using Multicloud Object Gateway as OpenShift Image Registry backend storage You can use Multicloud Object Gateway (MCG) as OpenShift Container Platform (OCP) Image Registry backend storage in an on-prem OpenShift deployment. To configure MCG as a backend storage for the OCP image registry, follow the steps mentioned in the procedure. Prerequisites Administrative access to OCP Web Console. A running OpenShift Data Foundation cluster with MCG. Procedure Create ObjectBucketClaim by following the steps in Creating Object Bucket Claim . Create an image-registry-private-configuration-user secret. Go to the OpenShift web console. Click ObjectBucketClaim -> ObjectBucketClaim Data . In the ObjectBucketClaim data , look for MCG access key and MCG secret key in the openshift-image-registry namespace . Create the secret using the following command: Change the status of managementState of Image Registry Operator to Managed . Edit the spec.storage section of Image Registry Operator configuration file: Get the unique-bucket-name and regionEndpoint under the Object Bucket Claim Data section from the Web Console OR you can also get the information on regionEndpoint and unique-bucket-name from the command: Add regionEndpoint as http://<Endpoint-name>:<port> if the storageclass is ceph-rgw storageclass and the endpoint points to the internal SVC from the openshift-storage namespace. An image-registry pod spawns after you make the changes to the Operator registry configuration file. Reset the image registry settings to default. Verification steps Run the following command to check if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output (Optional) You can also run the following command to verify if you have configured the MCG as OpenShift Image Registry backend storage successfully. Example output 4.3. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises Prometheus and Alert Manager .
Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators -> Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration -> Cluster Settings -> Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage -> StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads -> Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 4.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads -> Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 4.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 4.3. Persistent Volume Claims attached to prometheus-k8s-* pod 4.4. Overprovision level policy control Overprovision control is a mechanism that enables you to define a quota on the amount of Persistent Volume Claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of ClusterResourceQuota . 
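For orientation, a ClusterResourceQuota generated for such a storage quota might look roughly like the following sketch. The quota name, label value, storage class, and limit are the illustrative values used in the procedure below; the label key shown here is an assumption, and in practice the operator creates this object for you from the StorageCluster specification rather than you applying it by hand.

```yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: quota1
spec:
  selector:
    labels:
      matchLabels:
        storagequota: storagequota1   # assumed label key; matches the labeled namespaces
  quota:
    hard:
      # caps the aggregate PVC requests against this storage class
      ocs-storagecluster-ceph-rbd.storageclass.storage.k8s.io/requests.storage: 27Ti
```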
For more information, see OpenShift ClusterResourceQuota . With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. For more information about OpenShift Data Foundation deployment, refer to Product Documentation and select the deployment procedure according to the platform. Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Deploy storagecluster either from the command line interface or the user interface. Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Edit the storagecluster to set the quota limit on the storage class. <ocs_storagecluster_name> Specify the name of the storage cluster. Add an entry for Overprovision Control with the desired hard limit into the StorageCluster.Spec : <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Save the modified storagecluster . Verify that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined in the previous step, for example, quota1 . 4.5. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on the default storage available from the nodes. You can edit the default configuration of OpenShift logging (Elasticsearch) to be backed by OpenShift Data Foundation. Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 4.5.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example, each data node in the cluster can be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage, as shown in the sketch below. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and is always available; with the single redundancy policy, the copy can be recovered as long as at least two nodes exist. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging .
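A sketch of the storage block being described, as it might appear under the log store section of the ClusterLogging resource; the 200G size and the ocs-storagecluster-ceph-rbd storage class come from the example above, the node count of 3 is an assumption, and the exact field layout should be checked against your Cluster Logging Operator version.

```yaml
spec:
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3                 # assumed node count
      storage:
        storageClassName: ocs-storagecluster-ceph-rbd
        size: 200G
      redundancyPolicy: SingleRedundancy
```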
Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 4.5.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration -> Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage -> Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 4.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload -> Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide. Chapter 5. Creating Multus networks OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. You can configure your default pod network during cluster installation. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. 
To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition (NAD) custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. 5.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Requirements for Multus configuration . The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason they must be created before the Storage Cluster. Note Network attachment definitions can only use the whereabouts IP address management (IPAM), and it must specify the range field. ipRanges and plugin chaining are not supported. You can select the newly created NetworkAttachmentDefinition (NAD) during the Storage Cluster installation. This is the reason you must create the NAD before you create the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of the two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all the storage traffic, public and cluster, on the same interface. It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface): Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting object storage device (OSD) pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface): Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). Chapter 6. Backing OpenShift Container Platform applications with OpenShift Data Foundation You cannot directly install OpenShift Data Foundation during the OpenShift Container Platform installation. However, you can install OpenShift Data Foundation on an existing OpenShift Container Platform by using the Operator Hub and then configure the OpenShift Container Platform applications to be backed by OpenShift Data Foundation. Prerequisites OpenShift Container Platform is installed and you have administrative access to OpenShift Web Console. OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure In the OpenShift Web Console, perform one of the following: Click Workloads -> Deployments . 
In the Deployments page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. Click Workloads -> Deployment Configs . In the Deployment Configs page, you can do one of the following: Select any existing deployment and click Add Storage option from the Action menu (...). Create a new deployment and then add storage. Click Create Deployment Config to create a new deployment. Edit the YAML based on your requirement to create a deployment. Click Create . Select Add Storage from the Actions drop-down menu on the top right of the page. In the Add Storage page, you can choose one of the following options: Click the Use existing claim option and select a suitable PVC from the drop-down list. Click the Create new claim option. Select the appropriate CephFS or RBD storage class from the Storage Class drop-down list. Provide a name for the Persistent Volume Claim. Select ReadWriteOnce (RWO) or ReadWriteMany (RWX) access mode. Note ReadOnlyMany (ROX) is deactivated as it is not supported. Select the size of the desired storage capacity. Note You can expand the block PVs but cannot reduce the storage capacity after the creation of Persistent Volume Claim. Specify the mount path and subpath (if required) for the mount path volume inside the container. Click Save . Verification steps Depending on your configuration, perform one of the following: Click Workloads -> Deployments . Click Workloads -> Deployment Configs . Set the Project as required. Click the deployment for which you added storage to display the deployment details. Scroll down to Volumes and verify that your deployment has a Type that matches the Persistent Volume Claim that you assigned. Click the Persistent Volume Claim name and verify the storage class name in the Persistent Volume Claim Overview page. Chapter 7. Adding file and object storage to an existing external OpenShift Data Foundation cluster When OpenShift Data Foundation is configured in external mode, there are several ways to provide storage for persistent volume claims and object bucket claims. Persistent volume claims for block storage are provided directly from the external Red Hat Ceph Storage cluster. Persistent volume claims for file storage can be provided by adding a Metadata Server (MDS) to the external Red Hat Ceph Storage cluster. Object bucket claims for object storage can be provided either by using the Multicloud Object Gateway or by adding the Ceph Object Gateway to the external Red Hat Ceph Storage cluster. Use the following process to add file storage (using Metadata Servers) or object storage (using Ceph Object Gateway) or both to an external OpenShift Data Foundation cluster that was initially deployed to provide only block storage. Prerequisites OpenShift Data Foundation 4.17 is installed and running on the OpenShift Container Platform version 4.17 or above. Also, the OpenShift Data Foundation Cluster in external mode is in the Ready state. 
Your external Red Hat Ceph Storage cluster is configured with one or both of the following: a Ceph Object Gateway (RGW) endpoint that can be accessed by the OpenShift Container Platform cluster for object storage a Metadata Server (MDS) pool for file storage Ensure that you know the parameters used with the ceph-external-cluster-details-exporter.py script during external OpenShift Data Foundation cluster deployment. Procedure Download the OpenShift Data Foundation version of the ceph-external-cluster-details-exporter.py python script using the following command: Update permission caps on the external Red Hat Ceph Storage cluster by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. You may need to ask your Red Hat Ceph Storage administrator to do this. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. Generate and save configuration details from the external Red Hat Ceph Storage cluster. Generate configuration details by running ceph-external-cluster-details-exporter.py on any client node in the external Red Hat Ceph Storage cluster. --monitoring-endpoint Is optional. It accepts comma separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. --monitoring-endpoint-port Is optional. It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. --run-as-user The client name used during OpenShift Data Foundation cluster deployment. Use the default client name client.healthchecker if a different client name was not set. --rgw-endpoint Provide this parameter to provision object storage through Ceph Object Gateway for OpenShift Data Foundation. (optional parameter) --rgw-pool-prefix The prefix used for the Ceph Object Gateway pool. This can be omitted if the default prefix is used. User permissions are updated as shown: Note Ensure that all the parameters (including the optional arguments) except the Ceph Object Gateway details (if provided), are the same as what was used during the deployment of OpenShift Data Foundation in external mode. Save the output of the script in an external-cluster-config.json file. The following example output shows the generated configuration changes in bold text. Upload the generated JSON file. Log in to the OpenShift web console. Click Workloads -> Secrets . Set project to openshift-storage . Click on rook-ceph-external-cluster-details . Click Actions (...) -> Edit Secret Click Browse and upload the external-cluster-config.json file. Click Save . Verification steps To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. On the Overview -> Block and File tab, check the Status card to confirm that the Storage Cluster has a green tick indicating it is healthy. If you added a Metadata Server for file storage: Click Workloads -> Pods and verify that csi-cephfsplugin-* pods are created new and are in the Running state. Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-cephfs storage class is created. 
If you added the Ceph Object Gateway for object storage: Click Storage -> Storage Classes and verify that the ocs-external-storagecluster-ceph-rgw storage class is created. To verify that the OpenShift Data Foundation cluster is healthy and data is resilient, navigate to Storage -> Data foundation -> Storage Systems tab and then click on the storage system name. Click the Object tab and confirm Object Service and Data resiliency has a green tick indicating it is healthy. Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 8.3, "Manual creation of infrastructure nodes" section for more information. 8.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 8.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. 
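The relevant excerpt of such a Machine Set template is sketched below. Only the label and taint stanzas are shown; the rest of the Machine Set definition (replicas, selector, providerSpec, and so on) is omitted and depends on your platform, and the machine set name is illustrative.

```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infra_machineset_name>
spec:
  template:
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""     # marks the node as an infra node
      taints:
      - key: node.ocs.openshift.io/storage      # repels non-ODF workloads
        value: "true"
        effect: NoSchedule
```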
For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 8.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" . The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role node-role.kubernetes.io/infra="" and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 8.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute -> Nodes , and then select the node which has to be tainted. In the Details page, click Edit taints . Enter the values in the Key <node.ocs.openshift.io/storage>, Value <true> and in the Effect <NoSchedule> field. Click Save. Verification steps Follow the steps to verify that the node has been tainted successfully: Navigate to Compute -> Nodes . Select the node to verify its status, and then click on the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . Chapter 9. Managing Persistent Volume Claims 9.1. Configuring application pods to use OpenShift Data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for an application pod. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators -> Installed Operators to view installed operators. The default storage classes provided by OpenShift Data Foundation are available. In OpenShift Web Console, click Storage -> StorageClasses to view default storage classes. Procedure Create a Persistent Volume Claim (PVC) for the application to use. In OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project for the application pod. Click Create Persistent Volume Claim . Specify a Storage Class provided by OpenShift Data Foundation. Specify the PVC Name , for example, myclaim . Select the required Access Mode . Note The Access Mode , Shared access (RWX) is not supported in IBM FlashSystem. For Rados Block Device (RBD), if the Access mode is ReadWriteOnce ( RWO ), select the required Volume mode . The default volume mode is Filesystem . Specify a Size as per application requirement. Click Create and wait until the PVC is in Bound status. Configure a new or existing application pod to use the new PVC. For a new application pod, perform the following steps: Click Workloads -> Pods . Create a new application pod.
Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod. For example: For an existing application pod, perform the following steps: Click Workloads -> Deployment Configs . Search for the required deployment config associated with the application pod. Click on its Action menu (...) -> Edit Deployment Config . Under the spec: section, add volumes: section to add the new PVC as a volume for the application pod and click Save . For example: Verify that the new configuration is being used. Click Workloads -> Pods . Set the Project for the application pod. Verify that the application pod appears with a status of Running . Click the application pod name to view pod details. Scroll down to Volumes section and verify that the volume has a Type that matches your new Persistent Volume Claim, for example, myclaim . 9.2. Viewing Persistent Volume Claim request status Use this procedure to view the status of a PVC request. Prerequisites Administrator access to OpenShift Data Foundation. Procedure Log in to OpenShift Web Console. Click Storage -> Persistent Volume Claims Search for the required PVC name by using the Filter textbox. You can also filter the list of PVCs by Name or Label to narrow down the list Check the Status column corresponding to the required PVC. Click the required Name to view the PVC details. 9.3. Reviewing Persistent Volume Claim request events Use this procedure to review and address Persistent Volume Claim (PVC) request events. Prerequisites Administrator access to OpenShift Web Console. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Storage systems tab, select the storage system and then click Overview -> Block and File . Locate the Inventory card to see the number of PVCs with errors. Click Storage -> Persistent Volume Claims Search for the required PVC using the Filter textbox. Click on the PVC name and navigate to Events Address the events as required or as directed. 9.4. Expanding Persistent Volume Claims OpenShift Data Foundation 4.6 onwards has the ability to expand Persistent Volume Claims providing more flexibility in the management of persistent storage resources. Expansion is supported for the following Persistent Volumes: PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph File System (CephFS) for volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . PVC with ReadWriteOnce (RWO) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Block . PVC with ReadWriteOncePod (RWOP) that is based on Ceph File System (CephFS) or Network File System (NFS) for volume mode Filesystem . PVC with ReadWriteOncePod (RWOP) access that is based on Ceph RADOS Block Devices (RBDs) with volume mode Filesystem . With RWOP access mode, you mount the volume as read-write by a single pod on a single node. Note PVC expansion is not supported for OSD, MON and encrypted PVCs. Prerequisites Administrator access to OpenShift Web Console. Procedure In OpenShift Web Console, navigate to Storage -> Persistent Volume Claims . Click the Action Menu (...) to the Persistent Volume Claim you want to expand. Click Expand PVC : Select the new size of the Persistent Volume Claim, then click Expand : To verify the expansion, navigate to the PVC's details page and verify the Capacity field has the correct size requested. 
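The same expansion can also be requested from the CLI by patching the requested storage on the PVC; a minimal sketch, assuming a PVC named myclaim and a new size of 10Gi:

```bash
oc patch pvc myclaim -n <project_name> --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```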
Note When expanding PVCs based on Ceph RADOS Block Devices (RBDs), if the PVC is not already attached to a pod the Condition type is FileSystemResizePending in the PVC's details page. Once the volume is mounted, filesystem resize succeeds and the new size is reflected in the Capacity field. 9.5. Dynamic provisioning 9.5.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any intimate knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. Storage plug-ins might support static provisioning, dynamic provisioning or both provisioning types. 9.5.2. Dynamic provisioning in OpenShift Data Foundation Red Hat OpenShift Data Foundation is software-defined storage that is optimised for container environments. It runs as an operator on OpenShift Container Platform to provide highly integrated and simplified persistent storage management for containers. OpenShift Data Foundation supports a variety of storage types, including: Block storage for databases Shared file storage for continuous integration, messaging, and data aggregation Object storage for archival, backup, and media storage Version 4 uses Red Hat Ceph Storage to provide the file, block, and object storage that backs persistent volumes, and Rook.io to manage and orchestrate provisioning of persistent volumes and claims. NooBaa provides object storage, and its Multicloud Gateway allows object federation across multiple cloud environments (available as a Technology Preview). In OpenShift Data Foundation 4, the Red Hat Ceph Storage Container Storage Interface (CSI) driver for RADOS Block Device (RBD) and Ceph File System (CephFS) handles the dynamic provisioning requests. When a PVC request comes in dynamically, the CSI driver has the following options: Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on Ceph RBDs with volume mode Block . Create a PVC with ReadWriteOnce (RWO) access that is based on Ceph RBDs with volume mode Filesystem . Create a PVC with ReadWriteOnce (RWO) and ReadWriteMany (RWX) access that is based on CephFS for volume mode Filesystem . Create a PVC with ReadWriteOncePod (RWOP) access that is based on CephFS,NFS and RBD. With RWOP access mode, you mount the volume as read-write by a single pod on a single node. The judgment of which driver (RBD or CephFS) to use is based on the entry in the storageclass.yaml file. 9.5.3. 
Available dynamic provisioning plug-ins OpenShift Container Platform provides the following provisioner plug-ins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources:
OpenStack Cinder: kubernetes.io/cinder
AWS Elastic Block Store (EBS): kubernetes.io/aws-ebs. For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster.
AWS Elastic File System (EFS): no provisioner plug-in; dynamic provisioning is accomplished through the EFS provisioner pod and not through a provisioner plug-in.
Azure Disk: kubernetes.io/azure-disk
Azure File: kubernetes.io/azure-file. The persistent-volume-binder ServiceAccount requires permissions to create and get Secrets to store the Azure storage account and keys.
GCE Persistent Disk (gcePD): kubernetes.io/gce-pd. In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists.
VMware vSphere: kubernetes.io/vsphere-volume
Red Hat Virtualization: csi.ovirt.org
Important Any chosen provisioner plug-in also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. Chapter 10. Reclaiming space on target volumes The deleted files or chunks of zero data sometimes take up storage space on the Ceph cluster resulting in inaccurate reporting of the available storage space. The reclaim space operation removes such discrepancies by executing the following operations on the target volume: fstrim - This operation is used on volumes that are in Filesystem mode and only if the volume is mounted to a pod at the time of execution of reclaim space operation. rbd sparsify - This operation is used when the volume is not attached to any pods and reclaims the space occupied by chunks of 4M-sized zeroed data. Note Only the Ceph RBD volumes support the reclaim space operation. The reclaim space operation involves a performance penalty when it is being executed. You can use one of the following methods to reclaim the space: Enabling reclaim space operation by annotating PersistentVolumeClaims (Recommended method to use for enabling reclaim space operation) Enabling reclaim space operation using ReclaimSpaceJob Enabling reclaim space operation using ReclaimSpaceCronJob 10.1. Enabling reclaim space operation by annotating PersistentVolumeClaims Use this procedure to automatically invoke the reclaim space operation on a given schedule by annotating the persistent volume claim (PVC). Note The schedule value is in the same format as the Kubernetes CronJobs which sets the time and/or interval of the recurring operation request. Recommended schedule interval is @weekly . If the schedule interval value is empty or in an invalid format, then the default schedule value is set to @weekly . Do not schedule multiple ReclaimSpace operations @weekly or at the same time. Minimum supported interval between each scheduled operation is at least 24 hours. For example, @daily (At 00:00 every day) or 0 3 * * * (At 3:00 every day). Schedule the ReclaimSpace operation during off-peak hours, a maintenance window, or an interval when the workload input/output is expected to be low. ReclaimSpaceCronJob is recreated when the schedule is modified.
It is automatically deleted when the annotation is removed. Procedure Get the PVC details. Add the annotation reclaimspace.csiaddons.openshift.io/schedule=@monthly to the PVC to create reclaimspacecronjob . Verify that reclaimspacecronjob is created in the format "<pvc-name>-xxxxxxx" . Modify the schedule to run this job automatically. Verify that the schedule for reclaimspacecronjob has been modified. 10.2. Enabling reclaim space operation using ReclaimSpaceJob ReclaimSpaceJob is a namespaced custom resource (CR) designed to invoke the reclaim space operation on the target volume. This is a one-time method that immediately starts the reclaim space operation. You have to repeat the creation of the ReclaimSpaceJob CR to repeat the reclaim space operation when required. Note The recommended interval between reclaim space operations is weekly . Ensure that the minimum interval between each operation is at least 24 hours . Schedule the reclaim space operation during an off-peak or maintenance window, or when the workload input/output is expected to be low. Procedure Create and apply the following custom resource for the reclaim space operation, where: target Indicates the volume target on which the operation is performed. persistentVolumeClaim Name of the PersistentVolumeClaim . backOfflimit Specifies the maximum number of retries before marking the reclaim space operation as failed . The default value is 6 . The allowed maximum and minimum values are 60 and 0 respectively. retryDeadlineSeconds Specifies the duration in seconds within which the operation can be retried, relative to the start time. The value must be a positive integer. The default value is 600 seconds and the allowed maximum value is 1800 seconds. timeout Specifies the timeout in seconds for the gRPC request sent to the CSI driver. If the timeout value is not specified, it defaults to the value of the global reclaimspace timeout. The minimum allowed value for timeout is 60. Delete the custom resource after completion of the operation. 10.3. Enabling reclaim space operation using ReclaimSpaceCronJob ReclaimSpaceCronJob invokes the reclaim space operation based on a given schedule, such as daily, weekly, and so on. You have to create ReclaimSpaceCronJob only once for a persistent volume claim. The CSI-addons controller creates a ReclaimSpaceJob at the requested time and interval with the schedule attribute. Note The recommended schedule interval is @weekly . The minimum interval between each scheduled operation should be at least 24 hours. For example, @daily (At 00:00 every day) or "0 3 * * *" (At 3:00 every day). Schedule the ReclaimSpace operation during an off-peak or maintenance window, or during an interval when the workload input/output is expected to be low. Procedure Create and apply the following custom resource for the reclaim space operation, where: concurrencyPolicy Describes what happens when a new ReclaimSpaceJob is scheduled by the ReclaimSpaceCronJob while a ReclaimSpaceJob is still running. The default Forbid prevents starting a new job, whereas Replace can be used to delete the running job, potentially in a failure state, and create a new one. failedJobsHistoryLimit Specifies the number of failed ReclaimSpaceJobs that are kept for troubleshooting. jobTemplate Specifies the ReclaimSpaceJob.spec structure that describes the details of the requested ReclaimSpaceJob operation. successfulJobsHistoryLimit Specifies the number of successful ReclaimSpaceJob operations that are retained.
schedule Specifies the time and/or interval of the recurring operation request, in the same format as Kubernetes CronJobs . Delete the ReclaimSpaceCronJob custom resource when execution of the reclaim space operation is no longer needed or when the target PVC is deleted. 10.4. Customizing timeouts required for Reclaim Space Operation Depending on the RBD volume size and its data pattern, the Reclaim Space Operation might fail with the context deadline exceeded error. You can avoid this by increasing the timeout value. The following example shows the failed status by inspecting -o yaml of the corresponding ReclaimSpaceJob : Example You can also set custom timeouts at the global level by creating the following configmap : Example Restart the csi-addons operator pod. All Reclaim Space Operations started after the above configmap creation use the customized timeout. Chapter 11. Finding and cleaning stale subvolumes (Technology Preview) Stale subvolumes sometimes have no corresponding Kubernetes reference attached. These subvolumes are of no use and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool. Important Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Find the stale subvolumes by using the --stale flag with the subvolumes command: Example output: Delete the stale subvolumes: Replace <subvolumes> with a comma-separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command. For example: Example output: Chapter 12. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help you use storage more efficiently, because a full copy is not needed each time, and they can be used as building blocks for developing an application. A volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 12.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page.
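If you prefer the command line to the console procedures that follow, the same result can be achieved by applying a VolumeSnapshot resource. The following is a minimal sketch only: the names my-snapshot, my-pvc, and my-namespace are placeholders, and the snapshot class shown is assumed to be the default RBD class installed by the operator, so confirm the actual class names in your cluster with oc get volumesnapshotclass before using it.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot              # placeholder name for the snapshot
  namespace: my-namespace        # namespace of the source PVC (placeholder)
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumed default RBD snapshot class; verify with 'oc get volumesnapshotclass'
  source:
    persistentVolumeClaimName: my-pvc   # the PVC to snapshot (placeholder)

After applying the manifest with oc apply -f <file>, wait until oc get volumesnapshot reports READYTOUSE as true.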
Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) -> Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions -> Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage -> Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 12.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . 
Enter a name for the new PVC. Select the Storage Class name. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage -> Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach the Bound state. 12.3. Deleting volume snapshots Prerequisites To delete a volume snapshot, the volume snapshot class that was used to create that volume snapshot must be present. Procedure From the Persistent Volume Claims page Click Storage -> Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name that has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . From the Volume Snapshots page Click Storage -> Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot, click Action menu (...) -> Delete Volume Snapshot . Verification steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage -> Volume Snapshots and ensure that the deleted volume snapshot is not listed. Chapter 13. Volume cloning A clone is a duplicate of an existing storage volume that can be used like any standard volume. You create a clone of a volume to make a point-in-time copy of the data. A persistent volume claim (PVC) cannot be cloned with a different size. You can create up to 512 clones per PVC for both CephFS and RADOS Block Device (RBD). 13.1. Creating a clone Prerequisites The source PVC must be in the Bound state and must not be in use. Note Do not create a clone of a PVC if a Pod is using it. Doing so might cause data corruption because the PVC is not quiesced (paused). Procedure Click Storage -> Persistent Volume Claims from the OpenShift Web Console. To create a clone, do one of the following: Beside the desired PVC, click Action menu (...) -> Clone PVC . Click on the PVC that you want to clone and click Actions -> Clone PVC . Enter a Name for the clone. Select the access mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with the ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Enter the required size of the clone.
Select the storage class in which you want to create the clone. The storage class can be any RBD storage class and it does not necessarily need to be the same as the parent PVC. Click Clone . You are redirected to the new PVC details page. Wait for the cloned PVC status to become Bound . The cloned PVC is now available to be consumed by the pods. This cloned PVC is independent of its dataSource PVC. Chapter 14. Managing container storage interface (CSI) component placements Each cluster consists of a number of dedicated nodes, such as infra and storage nodes. However, an infra node with a custom taint will not be able to use OpenShift Data Foundation Persistent Volume Claims (PVCs) on the node. So, if you want to use such nodes, you can set tolerations to bring up csi-plugins on the nodes. Procedure Edit the configmap to add the toleration for the custom taint. Remember to save before exiting the editor. Display the configmap to check the added toleration. Example output of the added toleration for the taint, nodetype=infra:NoSchedule : Note Ensure that all non-string values in the Tolerations value field have double quotation marks. For example, the value true , which is of type boolean, and 1 , which is of type int, must be input as "true" and "1". Restart the rook-ceph-operator if the csi-cephfsplugin- * and csi-rbdplugin- * pods fail to come up on their own on the infra nodes. Example : Verification step Verify that the csi-cephfsplugin- * and csi-rbdplugin- * pods are running on the infra nodes. Chapter 15. Using 2-way replication with CephFS To reduce storage overhead with CephFS when data resiliency is not a primary concern, you can opt for using 2-way replication (replica-2). This reduces the amount of storage space used and decreases the level of fault tolerance. There are two ways to use replica-2 for CephFS: Edit the existing default pool to replica-2 and use it with the default CephFS storageclass . Add an additional CephFS data pool with replica-2 . 15.1. Editing the existing default CephFS data pool to replica-2 Use this procedure to edit the existing default CephFS pool to replica-2 and use it with the default CephFS storageclass. Procedure Patch the storagecluster to change the default CephFS data pool to replica-2. Check the pool details. 15.2. Adding an additional CephFS data pool with replica-2 Use this procedure to add an additional CephFS data pool with replica-2. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the OpenShift Data Foundation cluster is in Ready state. Procedure Click Storage -> StorageClasses -> Create Storage Class . Select CephFS Provisioner . Under Storage Pool , click Create new storage pool . Fill in the Create Storage Pool fields. Under Data protection policy , select 2-way Replication . Confirm the Storage Pool creation. In the Storage Class creation form, choose the newly created Storage Pool. Confirm the Storage Class creation. Verification Click Storage -> Data Foundation . In the Storage systems tab, select the new storage system. The Details tab of the storage system reflects the correct volume and device types that you chose during creation. Chapter 16. Creating exports using NFS This section describes how to create exports using NFS that can then be accessed externally from the OpenShift cluster.
Follow the instructions below to create exports and access them externally from the OpenShift Cluster: Section 16.1, "Enabling the NFS feature" Section 16.2, "Creating NFS exports" Section 16.3, "Consuming NFS exports in-cluster" Section 16.4, "Consuming NFS exports externally from the OpenShift cluster" 16.1. Enabling the NFS feature To use the NFS feature, you need to enable it in the storage cluster using the command-line interface (CLI) after the cluster is created. You can also enable the NFS feature while creating the storage cluster using the user interface. Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. The OpenShift Data Foundation installation includes a CephFilesystem. Procedure Run the following command to enable the NFS feature from the CLI: Verification steps NFS installation and configuration is complete when the following conditions are met: The CephNFS resource named ocs-storagecluster-cephnfs has a status of Ready . Check if all the csi-nfsplugin-* pods are running: The output has multiple pods. For example: 16.2. Creating NFS exports NFS exports are created by creating a Persistent Volume Claim (PVC) against the ocs-storagecluster-ceph-nfs StorageClass. You can create NFS PVCs in two ways: Create an NFS PVC using a YAML file. The following is an example PVC. Note volumeMode: Block will not work for NFS volumes. <desired_name> Specify a name for the PVC, for example, my-nfs-export . The export is created once the PVC reaches the Bound state. Create NFS PVCs from the OpenShift Container Platform web console. Prerequisites Ensure that you are logged into the OpenShift Container Platform web console and the NFS feature is enabled for the storage cluster. Procedure In the OpenShift Web Console, click Storage -> Persistent Volume Claims . Set the Project to openshift-storage . Click Create PersistentVolumeClaim . Specify the Storage Class , ocs-storagecluster-ceph-nfs . Specify the PVC Name , for example, my-nfs-export . Select the required Access Mode . Specify a Size as per the application requirement. Select Volume mode as Filesystem . Note: Block mode is not supported for NFS PVCs. Click Create and wait until the PVC is in Bound status. 16.3. Consuming NFS exports in-cluster Kubernetes application pods can consume NFS exports by mounting a previously created PVC. You can mount the PVC in one of two ways: Using a YAML: Below is an example pod that uses the example PVC created in Section 16.2, "Creating NFS exports" : <pvc_name> Specify the PVC you have previously created, for example, my-nfs-export . Using the OpenShift Container Platform web console. Procedure On the OpenShift Container Platform web console, navigate to Workloads -> Pods . Click Create Pod to create a new application pod. Under the metadata section, add a name. For example, nfs-export-example , with namespace as openshift-storage . Under the spec: section, add the containers: section with image and volumeMounts sections: For example: Under the spec: section, add the volumes: section to add the NFS PVC as a volume for the application pod: For example: 16.4. Consuming NFS exports externally from the OpenShift cluster NFS clients outside of the OpenShift cluster can mount NFS exports created from a previously created PVC. Procedure After the nfs flag is enabled, single-server CephNFS is deployed by Rook.
You need to fetch the value of the ceph_nfs field for the nfs-ganesha server to use in the next step: For example: Expose the NFS server outside of the OpenShift cluster by creating a Kubernetes LoadBalancer Service. The example below creates a LoadBalancer Service and references the NFS server created by OpenShift Data Foundation. Replace <my-nfs> with the value you got in step 1. Collect connection information. The information external clients need to connect to an export comes from the Persistent Volume (PV) created for the PVC, and the status of the LoadBalancer Service created in the previous step. Get the share path from the PV. Get the name of the PV associated with the NFS export's PVC: Replace <pvc_name> with your own PVC name. For example: Use the PV name obtained previously to get the NFS export's share path: Get an ingress address for the NFS server. A service's ingress status may have multiple addresses. Choose the one that you want to use for external clients. In the example below, there is only a single address: the host name ingress-id.somedomain.com . Connect the external client using the share path and ingress address from the previous steps. The following example mounts the export to the client's directory path /export/mount/path : If this does not work immediately, it could be that the Kubernetes environment is still taking time to configure the network resources to allow ingress to the NFS server. Chapter 17. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy for any encrypted RBD storage classes that were created before updating to OpenShift Data Foundation version 4.14. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 18. Enabling faster client IO or recovery IO during OSD backfill During a maintenance window, you may want to favor either client IO or recovery IO. Favoring recovery IO over client IO will significantly reduce OSD recovery time. The valid recovery profile options are balanced , high_client_ops , and high_recovery_ops . Set the recovery profile using the following procedure. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Check the current recovery profile: Modify the recovery profile: Replace option with either balanced , high_client_ops , or high_recovery_ops . Verify the updated recovery profile: Chapter 19. Setting Ceph OSD full thresholds You can set Ceph OSD full thresholds using the ODF CLI tool or by updating the StorageCluster CR. 19.1. Setting Ceph OSD full thresholds using the ODF CLI tool You can set Ceph OSD full thresholds temporarily by using the ODF CLI tool.
This is necessary in cases when the cluster gets into a full state and the thresholds need to be immediately increased. Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Use the set command to adjust Ceph full thresholds. The set command supports the subcommands full , backfillfull , and nearfull . See the following examples for how to use each subcommand. full This subcommand allows updating the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, set the Ceph OSD full ratio to 0.9 and then add capacity: For instructions to add capacity for your specific use case, see the Scaling storage guide . If OSDs continue to be stuck, pending, or do not come up at all: Stop all IOs. Increase the full ratio to 0.92 : Wait for the cluster rebalance to happen. Once the cluster rebalance is complete, change the full ratio back to its original value of 0.85: backfillfull This subcommand allows updating the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull This subcommand allows updating the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 : 19.2. Setting Ceph OSD full thresholds by updating the StorageCluster CR You can set Ceph OSD full thresholds by updating the StorageCluster CR. Use this procedure if you want to override the default settings. Procedure You can update the StorageCluster CR to change the settings for full , backfillfull , and nearfull . full Use the following command to update the Ceph OSD full ratio in case Ceph prevents the IO operation on OSDs that reached the specified capacity. The default is 0.85 . Note If the value is set too close to 1.0 , the cluster becomes unrecoverable if the OSDs are full and there is nowhere to grow. For example, to set the Ceph OSD full ratio to 0.9 : backfillfull Use the following command to set the Ceph OSD backfillfull ratio in case Ceph denies backfilling to the OSD that reached the capacity specified. The default value is 0.80 . Note If the value is set too close to 1.0 , the OSDs become full and the cluster is not able to backfill. For example, to set backfillfull to 0.85 : nearfull Use the following command to set the Ceph OSD nearfull ratio in case Ceph returns the nearfull OSDs message when the cluster reaches the capacity specified. The default value is 0.75 . For example, to set nearfull to 0.8 :
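The odf set and oc patch commands that correspond to these examples are listed in the command reference that follows. As an optional check after changing any of the thresholds, you can confirm the ratios that Ceph is actually applying. The following sketch assumes the Rook-Ceph toolbox pod is deployed in the openshift-storage namespace; it only reads state and makes no changes.

# Find the toolbox pod and open a shell in it (same pattern used elsewhere in this guide)
TOOLS_POD=$(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[0].metadata.name}')
oc rsh -n openshift-storage $TOOLS_POD

# Inside the toolbox, print the currently applied full thresholds
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'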
[ "cat <<EOF | oc create -f - apiVersion: v1 kind: ServiceAccount metadata: name: ceph-csi-vault-sa EOF", "apiVersion: v1 kind: ServiceAccount metadata: name: rbd-csi-vault-token-review --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review rules: - apiGroups: [\"authentication.k8s.io\"] resources: [\"tokenreviews\"] verbs: [\"create\", \"get\", \"list\"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-csi-vault-token-review subjects: - kind: ServiceAccount name: rbd-csi-vault-token-review namespace: openshift-storage roleRef: kind: ClusterRole name: rbd-csi-vault-token-review apiGroup: rbac.authorization.k8s.io", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rbd-csi-vault-token-review-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: \"rbd-csi-vault-token-review\" type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret rbd-csi-vault-token-review-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "vault auth enable kubernetes vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault write \"auth/kubernetes/role/csi-kubernetes\" bound_service_account_names=\"ceph-csi-vault-sa\" bound_service_account_namespaces=<tenant_namespace> policies=<policy_name_in_vault>", "apiVersion: v1 data: vault-tenant-sa: |- { \"encryptionKMSType\": \"vaulttenantsa\", \"vaultAddress\": \"<https://hostname_or_ip_of_vault_server:port>\", \"vaultTLSServerName\": \"<vault TLS server name>\", \"vaultAuthPath\": \"/v1/auth/kubernetes/login\", \"vaultAuthNamespace\": \"<vault auth namespace name>\" \"vaultNamespace\": \"<vault namespace name>\", \"vaultBackendPath\": \"<vault backend path name>\", \"vaultCAFromSecret\": \"<secret containing CA cert>\", \"vaultClientCertFromSecret\": \"<secret containing client cert>\", \"vaultClientCertKeyFromSecret\": \"<secret containing client private key>\", \"tenantSAName\": \"<service account name in the tenant namespace>\" } metadata: name: csi-kms-connection-details", "encryptionKMSID: 1-vault", "kind: ConfigMap apiVersion: v1 metadata: name: csi-kms-connection-details [...] data: 1-vault: |- { \"encryptionKMSType\": \"vaulttokens\", \"kmsServiceName\": \"1-vault\", [...] \"vaultBackend\": \"kv-v2\" } 2-vault: |- { \"encryptionKMSType\": \"vaulttenantsa\", [...] 
\"vaultBackend\": \"kv\" }", "--- apiVersion: v1 kind: ConfigMap metadata: name: ceph-csi-kms-config data: vaultAddress: \"<vault_address:port>\" vaultBackendPath: \"<backend_path>\" vaultTLSServerName: \"<vault_tls_server_name>\" vaultNamespace: \"<vault_namespace>\"", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephNonResilientPools/enable\", \"value\": true }]'", "oc get storagecluster", "NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-storagecluster 10m Ready 2024-02-05T13:56:15Z 4.17.0", "oc get cephblockpools", "NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-east-1a Ready ocs-storagecluster-cephblockpool-us-east-1b Ready ocs-storagecluster-cephblockpool-us-east-1c Ready", "oc get storageclass", "NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 104m gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m gp3-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 104m ocs-storagecluster-ceph-non-resilient-rbd openshift-storage.rbd.csi.ceph.com Delete WaitForFirstConsumer true 46m ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate true 52m ocs-storagecluster-cephfs openshift-storage.cephfs.csi.ceph.com Delete Immediate true 52m openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 50m", "oc get pods | grep osd", "rook-ceph-osd-0-6dc76777bc-snhnm 2/2 Running 0 9m50s rook-ceph-osd-1-768bdfdc4-h5n7k 2/2 Running 0 9m48s rook-ceph-osd-2-69878645c4-bkdlq 2/2 Running 0 9m37s rook-ceph-osd-3-64c44d7d76-zfxq9 2/2 Running 0 5m23s rook-ceph-osd-4-654445b78f-nsgjb 2/2 Running 0 5m23s rook-ceph-osd-5-5775949f57-vz6jp 2/2 Running 0 5m22s rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0x6t87-59swf 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klwr7-bk45t 0/1 Completed 0 10m rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-0mk2cz-jx7zv 0/1 Completed 0 10m", "oc get cephblockpools", "NAME PHASE ocs-storagecluster-cephblockpool Ready ocs-storagecluster-cephblockpool-us-south-1 Ready ocs-storagecluster-cephblockpool-us-south-2 Ready ocs-storagecluster-cephblockpool-us-south-3 Ready", "oc get pods -n openshift-storage -l app=rook-ceph-osd | grep 'CrashLoopBackOff\\|Error'", "failed_osd_id=0 #replace with the ID of the failed OSD", "failure_domain_label=USD(oc get storageclass ocs-storagecluster-ceph-non-resilient-rbd -o yaml | grep domainLabel |head -1 |awk -F':' '{print USD2}')", "failure_domain_value=USD\"(oc get pods USDfailed_osd_id -oyaml |grep topology-location-zone |awk '{print USD2}')\"", "replica1-pool-name= \"ocs-storagecluster-cephblockpool-USDfailure_domain_value\"", "toolbox=USD(oc get pod -l app=rook-ceph-tools -n openshift-storage -o jsonpath='{.items[*].metadata.name}') rsh USDtoolbox -n openshift-storage", "ceph osd pool rm <replica1-pool-name> <replica1-pool-name> --yes-i-really-really-mean-it", "oc delete pod -l rook-ceph-operator -n openshift-storage", "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=<MCG Accesskey> --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=<MCG Secretkey> --namespace openshift-image-registry", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\": {\"managementState\": \"Managed\"}}'", 
"oc describe noobaa", "oc edit configs.imageregistry.operator.openshift.io -n openshift-image-registry apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: [..] name: cluster spec: [..] storage: s3: bucket: <Unique-bucket-name> region: us-east-1 (Use this region as default) regionEndpoint: https://<Endpoint-name>:<port> virtualHostedStyle: false", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry", "oc get pods -n openshift-image-registry NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-56d78bc5fb-bxcgv 2/2 Running 0 44d image-pruner-1605830400-29r7k 0/1 Completed 0 10h image-registry-b6c8f4596-ln88h 1/1 Running 0 17d node-ca-2nxvz 1/1 Running 0 44d node-ca-dtwjd 1/1 Running 0 44d node-ca-h92rj 1/1 Running 0 44d node-ca-k9bkd 1/1 Running 0 44d node-ca-stkzc 1/1 Running 0 44d node-ca-xn8h4 1/1 Running 0 44d", "oc describe pod <image-registry-name>", "oc describe pod image-registry-b6c8f4596-ln88h Environment: REGISTRY_STORAGE_S3_REGIONENDPOINT: http://s3.openshift-storage.svc REGISTRY_STORAGE: s3 REGISTRY_STORAGE_S3_BUCKET: bucket-registry-mcg REGISTRY_STORAGE_S3_REGION: us-east-1 REGISTRY_STORAGE_S3_ENCRYPT: true REGISTRY_STORAGE_S3_VIRTUALHOSTEDSTYLE: false REGISTRY_STORAGE_S3_USEDUALSTACK: true REGISTRY_STORAGE_S3_ACCESSKEY: <set to the key 'REGISTRY_STORAGE_S3_ACCESSKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_STORAGE_S3_SECRETKEY: <set to the key 'REGISTRY_STORAGE_S3_SECRETKEY' in secret 'image-registry-private-configuration'> Optional: false REGISTRY_HTTP_ADDR: :5000 REGISTRY_HTTP_NET: tcp REGISTRY_HTTP_SECRET: 57b943f691c878e342bac34e657b702bd6ca5488d51f839fecafa918a79a5fc6ed70184cab047601403c1f383e54d458744062dcaaa483816d82408bb56e686f REGISTRY_LOG_LEVEL: info REGISTRY_OPENSHIFT_QUOTA_ENABLED: true REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR: inmemory REGISTRY_STORAGE_DELETE_ENABLED: true REGISTRY_OPENSHIFT_METRICS_ENABLED: true REGISTRY_OPENSHIFT_SERVER_ADDR: image-registry.openshift-image-registry.svc:5000 REGISTRY_HTTP_TLS_CERTIFICATE: /etc/secrets/tls.crt REGISTRY_HTTP_TLS_KEY: /etc/secrets/tls.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>", "oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>", "apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] 
overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label> [...]", "oc get clusterresourcequota -A oc describe clusterresourcequota -A", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.200.0/24\", \"routes\": [ {\"dst\": \"NODE_IP_CIDR\"} ] } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'", "get csv USD(oc get csv -n openshift-storage | grep rook-ceph-operator | awk '{print USD1}') -n openshift-storage -o jsonpath='{.metadata.annotations.externalClusterScript}' | base64 --decode >ceph-external-cluster-details-exporter.py", "python3 ceph-external-cluster-details-exporter.py --upgrade --run-as-user= ocs-client-name --rgw-pool-prefix rgw-pool-prefix", "python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name rbd-block-pool-name --monitoring-endpoint ceph-mgr-prometheus-exporter-endpoint --monitoring-endpoint-port ceph-mgr-prometheus-exporter-port --run-as-user ocs-client-name --rgw-endpoint rgw-endpoint --rgw-pool-prefix rgw-pool-prefix", "caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool=default.rgw.meta, allow r pool=.rgw.root, allow rw pool=default.rgw.control, allow rx pool=default.rgw.log, allow x pool=default.rgw.buckets.index", "[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": 
{\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}} ]", "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: mypd persistentVolumeClaim: claimName: myclaim", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: mypd persistentVolumeClaim: claimName: myclaim", "oc get pvc data-pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO ocs-storagecluster-ceph-rbd 20h", "oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@monthly\"", "persistentvolumeclaim/data-pvc annotated", "oc get reclaimspacecronjobs.csiaddons.openshift.io", "NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @monthly 3s", "oc annotate pvc data-pvc \"reclaimspace.csiaddons.openshift.io/schedule=@weekly\" --overwrite=true", "persistentvolumeclaim/data-pvc annotated", "oc get reclaimspacecronjobs.csiaddons.openshift.io", "NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 @weekly 3s", "apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceJob metadata: name: sample-1 spec: target: 
persistentVolumeClaim: pvc-1 timeout: 360", "apiVersion: csiaddons.openshift.io/v1alpha1 kind: ReclaimSpaceCronJob metadata: name: reclaimspacecronjob-sample spec: jobTemplate: spec: target: persistentVolumeClaim: data-pvc timeout: 360 schedule: '@weekly' concurrencyPolicy: Forbid", "Status: Completion Time: 2023-03-08T18:56:18Z Conditions: Last Transition Time: 2023-03-08T18:56:18Z Message: Failed to make controller request: context deadline exceeded Observed Generation: 1 Reason: failed Status: True Type: Failed Message: Maximum retry limit reached Result: Failed Retries: 6 Start Time: 2023-03-08T18:33:55Z", "apiVersion: v1 kind: ConfigMap metadata: name: csi-addons-config namespace: openshift-storage data: \"reclaim-space-timeout\": \"6m\"", "delete po -n openshift-storage -l \"app.kubernetes.io/name=csi-addons\"", "odf subvolume ls --stale", "Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale", "odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>", "odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi", "Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted", "oc edit configmap rook-ceph-operator-config -n openshift-storage", "oc get configmap rook-ceph-operator-config -n openshift-storage -o yaml", "apiVersion: v1 data: [...] CSI_PLUGIN_TOLERATIONS: | - key: nodetype operator: Equal value: infra effect: NoSchedule - key: node.ocs.openshift.io/storage operator: Equal value: \"true\" effect: NoSchedule [...] 
kind: ConfigMap metadata: [...]", "oc delete -n openshift-storage pod <name of the rook_ceph_operator pod>", "oc delete -n openshift-storage pod rook-ceph-operator-5446f9b95b-jrn2j pod \"rook-ceph-operator-5446f9b95b-jrn2j\" deleted", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephFilesystems/dataPoolSpec/replicated/size\", \"value\": 2 }]' storagecluster.ocs.openshift.io/ocs-storagecluster patched", "oc get cephfilesystem ocs-storagecluster-cephfilesystem -o=jsonpath='{.spec.dataPools}' | jq [ { \"application\": \"\", \"deviceClass\": \"ssd\", \"erasureCoded\": { \"codingChunks\": 0, \"dataChunks\": 0 }, \"failureDomain\": \"zone\", \"mirroring\": {}, \"quotas\": {}, \"replicated\": { \"replicasPerFailureDomain\": 1, \"size\": 2, \"targetSizeRatio\": 0.49 }, \"statusCheck\": { \"mirror\": {} } } ]", "ceph osd pool ls | grep filesystem ocs-storagecluster-cephfilesystem-metadata ocs-storagecluster-cephfilesystem-data0", "oc --namespace openshift-storage patch storageclusters.ocs.openshift.io ocs-storagecluster --type merge --patch '{\"spec\": {\"nfs\":{\"enable\": true}}}'", "-n openshift-storage describe cephnfs ocs-storagecluster-cephnfs", "-n openshift-storage get pod | grep csi-nfsplugin", "csi-nfsplugin-47qwq 2/2 Running 0 10s csi-nfsplugin-77947 2/2 Running 0 10s csi-nfsplugin-ct2pm 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-2rm2w 2/2 Running 0 10s csi-nfsplugin-provisioner-f85b75fbb-8nj5h 2/2 Running 0 10s", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: <desired_name> spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: ocs-storagecluster-ceph-nfs", "apiVersion: v1 kind: Pod metadata: name: nfs-export-example spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: <pvc_name> readOnly: false", "apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: <volume_name> mountPath: /var/lib/www/html", "apiVersion: v1 kind: Pod metadata: name: nfs-export-example namespace: openshift-storage spec: containers: - name: web-server image: nginx volumeMounts: - name: nfs-export-pvc mountPath: /var/lib/www/html", "volumes: - name: <volume_name> persistentVolumeClaim: claimName: <pvc_name>", "volumes: - name: nfs-export-pvc persistentVolumeClaim: claimName: my-nfs-export", "oc get pods -n openshift-storage | grep rook-ceph-nfs", "oc describe pod <name of the rook-ceph-nfs pod> | grep ceph_nfs", "oc describe pod rook-ceph-nfs-ocs-storagecluster-cephnfs-a-7bb484b4bf-bbdhs | grep ceph_nfs ceph_nfs=my-nfs", "apiVersion: v1 kind: Service metadata: name: rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer namespace: openshift-storage spec: ports: - name: nfs port: 2049 type: LoadBalancer externalTrafficPolicy: Local selector: app: rook-ceph-nfs ceph_nfs: <my-nfs> instance: a", "oc get pvc <pvc_name> --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d", "get pvc pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.volumeName}' pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d", "oc get pv pvc-39c5c467-d9d3-4898-84f7-936ea52fd99d --output jsonpath='{.spec.csi.volumeAttributes.share}' /0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215", "oc -n 
openshift-storage get service rook-ceph-nfs-ocs-storagecluster-cephnfs-load-balancer --output jsonpath='{.status.loadBalancer.ingress}' [{\"hostname\":\"ingress-id.somedomain.com\"}]", "mount -t nfs4 -o proto=tcp ingress-id.somedomain.com:/0001-0011-openshift-storage-0000000000000001-ba9426ab-d61b-11ec-9ffd-0a580a800215 /export/mount/path", "odf get recovery-profile", "odf set recovery-profile <option>", "odf get recovery-profile", "odf set full 0.9", "odf set full 0.92", "odf set full 0.85", "odf set backfillfull 0.85", "odf set nearfull 0.8", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/fullRatio\", \"value\": 0.90 }]'", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/backfillFullRatio\", \"value\": 0.85 }]'", "oc patch storagecluster ocs-storagecluster -n openshift-storage --type json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/managedResources/cephCluster/nearFullRatio\", \"value\": 0.8 }]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/managing_and_allocating_storage_resources/managing-persistent-volume-claims_rhodf
Chapter 44. l2gw
Chapter 44. l2gw This chapter describes the commands under the l2gw command. 44.1. l2gw connection create Create l2gateway-connection Usage: Table 44.1. Positional arguments Value Summary <GATEWAY-NAME/UUID> Descriptive name for logical gateway. <NETWORK-NAME/UUID> Network name or uuid. Table 44.2. Command arguments Value Summary -h, --help Show this help message and exit --default-segmentation-id SEG_ID Default segmentation-id that will be applied to the interfaces for which segmentation id was not specified in l2-gateway-create command. Table 44.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 44.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.2. l2gw connection delete Delete a given l2gateway-connection Usage: Table 44.7. Positional arguments Value Summary <L2_GATEWAY_CONNECTIONS> Id(s) of l2_gateway_connections(s) to delete. Table 44.8. Command arguments Value Summary -h, --help Show this help message and exit 44.3. l2gw connection list List l2gateway-connections Usage: Table 44.9. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 44.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 44.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.4. l2gw connection show Show information of a given l2gateway-connection Usage: Table 44.14. Positional arguments Value Summary <L2_GATEWAY_CONNECTION> Id of l2_gateway_connection to look up. Table 44.15. Command arguments Value Summary -h, --help Show this help message and exit Table 44.16. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 44.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.5. l2gw create Create l2gateway resource Usage: Table 44.20. Positional arguments Value Summary <GATEWAY-NAME> Descriptive name for logical gateway. Table 44.21. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --device name=name,interface_names=INTERFACE-DETAILS Device name and interface-names of l2gateway. INTERFACE-DETAILS is of form "<interface_name1>;[<inte rface_name2>][|<seg_id1>[#<seg_id2>]]" (--device option can be repeated) Table 44.22. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 44.23. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.24. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.25. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.6. l2gw delete Delete a given l2gateway Usage: Table 44.26. Positional arguments Value Summary <L2_GATEWAY> Id(s) or name(s) of l2_gateway to delete. Table 44.27. Command arguments Value Summary -h, --help Show this help message and exit 44.7. l2gw list List l2gateway that belongs to a given tenant Usage: Table 44.28. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 44.29. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 44.30. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 44.31. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.32. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.8. l2gw show Show information of a given l2gateway Usage: Table 44.33. Positional arguments Value Summary <L2_GATEWAY> Id or name of l2_gateway to look up. Table 44.34. Command arguments Value Summary -h, --help Show this help message and exit Table 44.35. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 44.36. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.37. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.38. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 44.9. l2gw update Update a given l2gateway Usage: Table 44.39. Positional arguments Value Summary <L2_GATEWAY> Id or name of l2_gateway to update. Table 44.40. Command arguments Value Summary -h, --help Show this help message and exit --name name Descriptive name for logical gateway. --device name=name,interface_names=INTERFACE-DETAILS Device name and interface-names of l2gateway. INTERFACE-DETAILS is of form "<interface_name1>;[<inte rface_name2>][|<seg_id1>[#<seg_id2>]]" (--device option can be repeated) Table 44.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 44.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 44.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 44.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
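As an illustrative end-to-end sketch of how these commands fit together (the gateway, device, interface, segmentation ID, and network names below are placeholders, not values taken from this reference):
# Create a logical gateway backed by one device whose two interfaces use segmentation ID 100
openstack l2gw create --device name=example-switch,interface_names="eth0;eth1|100" example-gateway
# Connect the new gateway to an existing network
openstack l2gw connection create example-gateway example-network
The resulting connection can later be inspected with l2gw connection show and removed with l2gw connection delete, as described in the sections above.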
[ "openstack l2gw connection create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--default-segmentation-id SEG_ID] <GATEWAY-NAME/UUID> <NETWORK-NAME/UUID>", "openstack l2gw connection delete [-h] <L2_GATEWAY_CONNECTIONS> [<L2_GATEWAY_CONNECTIONS> ...]", "openstack l2gw connection list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--project <project>] [--project-domain <project-domain>]", "openstack l2gw connection show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <L2_GATEWAY_CONNECTION>", "openstack l2gw create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--project-domain <project-domain>] [--device name=name,interface_names=INTERFACE-DETAILS] <GATEWAY-NAME>", "openstack l2gw delete [-h] <L2_GATEWAY> [<L2_GATEWAY> ...]", "openstack l2gw list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--project <project>] [--project-domain <project-domain>]", "openstack l2gw show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <L2_GATEWAY>", "openstack l2gw update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name name] [--device name=name,interface_names=INTERFACE-DETAILS] <L2_GATEWAY>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/l2gw
9.4. Durations
9.4. Durations Durations are used to calculate a value for end when one is not supplied to in_range operations. They contain the same fields as date_spec objects, but without the limitations; for example, a duration of 19 months is valid. Like date_specs, any field that is not supplied is ignored.
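As a minimal sketch of how a duration supplies the missing end value (assuming the pcs rule syntax described in the constraints chapter of this guide; the resource name Webserver and the start date are placeholders):
# The range ends 19 months after the given start date
pcs constraint location Webserver rule score=INFINITY date in_range 2024-01-01 to duration months=19
Here Pacemaker computes the end of the range as the start date plus the 19-month duration.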
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/_durations
Release Notes
Release Notes Red Hat Insights 1-latest Release Notes for Red Hat Insights Red Hat Insights Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_insights_overview/1-latest/html/release_notes/index
Chapter 8. Storage and File Systems
Chapter 8. Storage and File Systems This chapter outlines supported file systems and configuration options that affect application performance for both I/O and file systems in Red Hat Enterprise Linux 7. Section 8.1, "Considerations" discusses the I/O and file system related factors that affect performance. Section 8.2, "Monitoring and Diagnosing Performance Problems" teaches you how to use Red Hat Enterprise Linux 7 tools to diagnose performance problems related to I/O or file system configuration details. Section 8.4, "Configuration Tools" discusses the tools and strategies you can use to solve I/O and file system related performance problems in Red Hat Enterprise Linux 7. 8.1. Considerations The appropriate settings for storage and file system performance are highly dependent on the purpose of the storage. I/O and file system performance can be affected by any of the following factors: Data write or read patterns Data alignment with underlying geometry Block size File system size Journal size and location Recording access times Ensuring data reliability Pre-fetching data Pre-allocating disk space File fragmentation Resource contention Read this chapter to gain an understanding of the formatting and mount options that affect file system throughput, scalability, responsiveness, resource usage, and availability. 8.1.1. I/O Schedulers The I/O scheduler determines when and for how long I/O operations run on a storage device. It is also known as the I/O elevator. Red Hat Enterprise Linux 7 provides three I/O schedulers. deadline The default I/O scheduler for all block devices, except for SATA disks. Deadline attempts to provide a guaranteed latency for requests from the point at which requests reach the I/O scheduler. This scheduler is suitable for most use cases, but particularly those in which read operations occur more often than write operations. Queued I/O requests are sorted into a read or write batch and then scheduled for execution in increasing LBA order. Read batches take precedence over write batches by default, as applications are more likely to block on read I/O. After a batch is processed, deadline checks how long write operations have been starved of processor time and schedules the read or write batch as appropriate. The number of requests to handle per batch, the number of read batches to issue per write batch, and the amount of time before requests expire are all configurable; see Section 8.4.4, "Tuning the Deadline Scheduler" for details. cfq The default scheduler only for devices identified as SATA disks. The Completely Fair Queueing scheduler, cfq , divides processes into three separate classes: real time, best effort, and idle. Processes in the real time class are always performed before processes in the best effort class, which are always performed before processes in the idle class. This means that processes in the real time class can starve both best effort and idle processes of processor time. Processes are assigned to the best effort class by default. cfq uses historical data to anticipate whether an application will issue more I/O requests in the near future. If more I/O is expected, cfq idles to wait for the new I/O, even if I/O from other processes is waiting to be processed. Because of this tendency to idle, the cfq scheduler should not be used in conjunction with hardware that does not incur a large seek penalty unless it is tuned for this purpose. 
It should also not be used in conjunction with other non-work-conserving schedulers, such as a host-based hardware RAID controller, as stacking these schedulers tends to cause a large amount of latency. cfq behavior is highly configurable; see Section 8.4.5, "Tuning the CFQ Scheduler" for details. noop The noop I/O scheduler implements a simple FIFO (first-in first-out) scheduling algorithm. Requests are merged at the generic block layer through a simple last-hit cache. This can be the best scheduler for CPU-bound systems using fast storage. For details on setting a different default I/O scheduler, or specifying a different scheduler for a particular device, see Section 8.4, "Configuration Tools" . 8.1.2. File Systems Read this section for details about supported file systems in Red Hat Enterprise Linux 7, their recommended use cases, and the format and mount options available to file systems in general. Detailed tuning recommendations for these file systems are available in Section 8.4.7, "Configuring File Systems for Performance" . 8.1.2.1. XFS XFS is a robust and highly scalable 64-bit file system. It is the default file system in Red Hat Enterprise Linux 7. XFS uses extent-based allocation, and features a number of allocation schemes, including pre-allocation and delayed allocation, both of which reduce fragmentation and aid performance. It also supports metadata journaling, which can facilitate crash recovery. XFS can be defragmented and enlarged while mounted and active, and Red Hat Enterprise Linux 7 supports several XFS-specific backup and restore utilities. As of Red Hat Enterprise Linux 7.0 GA, XFS is supported to a maximum file system size of 500 TB, and a maximum file offset of 8 EB (sparse files). For details about administering XFS, see the Red Hat Enterprise Linux 7 Storage Administration Guide . For assistance tuning XFS for a specific purpose, see Section 8.4.7.1, "Tuning XFS" . 8.1.2.2. Ext4 Ext4 is a scalable extension of the ext3 file system. Its default behavior is optimal for most work loads. However, it is supported only to a maximum file system size of 50 TB, and a maximum file size of 16 TB. For details about administering ext4, see the Red Hat Enterprise Linux 7 Storage Administration Guide . For assistance tuning ext4 for a specific purpose, see Section 8.4.7.2, "Tuning ext4" . 8.1.2.3. Btrfs (Technology Preview) The default file system for Red Hat Enterprise Linux 7 is XFS. Btrfs (B-tree file system), a relatively new copy-on-write (COW) file system, is shipped as a Technology Preview . Some of the unique Btrfs features include: The ability to take snapshots of specific files, volumes or sub-volumes rather than the whole file system; supporting several versions of redundant array of inexpensive disks (RAID); back referencing map I/O errors to file system objects; transparent compression (all files on the partition are automatically compressed); checksums on data and meta-data. Although Btrfs is considered a stable file system, it is under constant development, so some functionality, such as the repair tools, are basic compared to more mature file systems. Currently, selecting Btrfs is suitable when advanced features (such as snapshots, compression, and file data checksums) are required, but performance is relatively unimportant. If advanced features are not required, the risk of failure and comparably weak performance over time make other file systems preferable. 
Another drawback, compared to other file systems, is the maximum supported file system size of 50 TB. For more information, see Section 8.4.7.3, "Tuning Btrfs" , and the chapter on Btrfs in the Red Hat Enterprise Linux 7 Storage Administration Guide . 8.1.2.4. GFS2 Global File System 2 (GFS2) is part of the High Availability Add-On that provides clustered file system support to Red Hat Enterprise Linux 7. GFS2 provides a consistent file system image across all servers in a cluster, which allows servers to read from and write to a single shared file system. GFS2 is supported to a maximum file system size of 100 TB. For details on administering GFS2, see the Global File System 2 guide or the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on tuning GFS2 for a specific purpose, see Section 8.4.7.4, "Tuning GFS2" . 8.1.3. Generic Tuning Considerations for File Systems This section covers tuning considerations common to all file systems. For tuning recommendations specific to your file system, see Section 8.4.7, "Configuring File Systems for Performance" . 8.1.3.1. Considerations at Format Time Some file system configuration decisions cannot be changed after the device is formatted. This section covers the options available to you for decisions that must be made before you format your storage device. Size Create an appropriately-sized file system for your workload. Smaller file systems have proportionally shorter backup times and require less time and memory for file system checks. However, if your file system is too small, its performance will suffer from high fragmentation. Block size The block is the unit of work for the file system. The block size determines how much data can be stored in a single block, and therefore the smallest amount of data that is written or read at one time. The default block size is appropriate for most use cases. However, your file system will perform better and store data more efficiently if the block size (or the size of multiple blocks) is the same as or slightly larger than amount of data that is typically read or written at one time. A small file will still use an entire block. Files can be spread across multiple blocks, but this can create additional runtime overhead. Additionally, some file systems are limited to a certain number of blocks, which in turn limits the maximum size of the file system. Block size is specified as part of the file system options when formatting a device with the mkfs command. The parameter that specifies the block size varies with the file system; see the mkfs man page for your file system for details. For example, to see the options available when formatting an XFS file system, execute the following command. Geometry File system geometry is concerned with the distribution of data across a file system. If your system uses striped storage, like RAID, you can improve performance by aligning data and metadata with the underlying storage geometry when you format the device. Many devices export recommended geometry, which is then set automatically when the devices are formatted with a particular file system. If your device does not export these recommendations, or you want to change the recommended settings, you must specify geometry manually when you format the device with mkfs . The parameters that specify file system geometry vary with the file system; see the mkfs man page for your file system for details. 
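As an illustrative sketch of manual alignment for XFS (the device name and RAID parameters are assumptions for illustration: a stripe unit of 64 KB across 4 data disks):
# Align XFS allocation with the underlying RAID stripe geometry
mkfs.xfs -d su=64k,sw=4 /dev/example_device
Equivalent alignment for ext4 is expressed with the -E stride and stripe-width options; consult the mkfs man page for your file system before choosing values.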
For example, to see the options available when formatting an ext4 file system, execute the following command. External journals Journaling file systems document the changes that will be made during a write operation in a journal file prior to the operation being executed. This reduces the likelihood that a storage device will become corrupted in the event of a system crash or power failure, and speeds up the recovery process. Metadata-intensive workloads involve very frequent updates to the journal. A larger journal uses more memory, but reduces the frequency of write operations. Additionally, you can improve the seek time of a device with a metadata-intensive workload by placing its journal on dedicated storage that is as fast as, or faster than, the primary storage. Warning Ensure that external journals are reliable. Losing an external journal device will cause file system corruption. External journals must be created at format time, with journal devices being specified at mount time. For details, see the mkfs and mount man pages. 8.1.3.2. Considerations at Mount Time This section covers tuning decisions that apply to most file systems and can be specified as the device is mounted. Barriers File system barriers ensure that file system metadata is correctly written and ordered on persistent storage, and that data transmitted with fsync persists across a power outage. On versions of Red Hat Enterprise Linux, enabling file system barriers could significantly slow applications that relied heavily on fsync , or created and deleted many small files. In Red Hat Enterprise Linux 7, file system barrier performance has been improved such that the performance effects of disabling file system barriers are negligible (less than 3%). For further information, see the Red Hat Enterprise Linux 7 Storage Administration Guide . Access Time Every time a file is read, its metadata is updated with the time at which access occurred ( atime ). This involves additional write I/O. In most cases, this overhead is minimal, as by default Red Hat Enterprise Linux 7 updates the atime field only when the access time was older than the times of last modification ( mtime ) or status change ( ctime ). However, if updating this metadata is time consuming, and if accurate access time data is not required, you can mount the file system with the noatime mount option. This disables updates to metadata when a file is read. It also enables nodiratime behavior, which disables updates to metadata when a directory is read. Read-ahead Read-ahead behavior speeds up file access by pre-fetching data that is likely to be needed soon and loading it into the page cache, where it can be retrieved more quickly than if it were on disk. The higher the read-ahead value, the further ahead the system pre-fetches data. Red Hat Enterprise Linux attempts to set an appropriate read-ahead value based on what it detects about your file system. However, accurate detection is not always possible. For example, if a storage array presents itself to the system as a single LUN, the system detects the single LUN, and does not set the appropriate read-ahead value for an array. Workloads that involve heavy streaming of sequential I/O often benefit from high read-ahead values. The storage-related tuned profiles provided with Red Hat Enterprise Linux 7 raise the read-ahead value, as does using LVM striping, but these adjustments are not always sufficient for all workloads. 
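A quick way to inspect and adjust read-ahead for a block device is the blockdev utility (the device name is a placeholder; values are expressed in 512-byte sectors):
# Report the current read-ahead value
blockdev --getra /dev/example_device
# Raise it for a workload dominated by sequential I/O; the change does not persist across reboots unless reapplied
blockdev --setra 4096 /dev/example_device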
The parameters that define read-ahead behavior vary with the file system; see the mount man page for details. 8.1.3.3. Maintenance Regularly discarding blocks that are not in use by the file system is a recommended practice for both solid-state disks and thinly-provisioned storage. There are two methods of discarding unused blocks: batch discard and online discard. Batch discard This type of discard is part of the fstrim command. It discards all unused blocks in a file system that match criteria specified by the administrator. Red Hat Enterprise Linux 7 supports batch discard on XFS and ext4 formatted devices that support physical discard operations (that is, on HDD devices where the value of /sys/block/ devname /queue/discard_max_bytes is not zero, and SSD devices where the value of /sys/block/ devname /queue/discard_granularity is not 0 ). Online discard This type of discard operation is configured at mount time with the discard option, and runs in real time without user intervention. However, online discard only discards blocks that are transitioning from used to free. Red Hat Enterprise Linux 7 supports online discard on XFS and ext4 formatted devices. Red Hat recommends batch discard except where online discard is required to maintain performance, or where batch discard is not feasible for the system's workload. Pre-allocation Pre-allocation marks disk space as being allocated to a file without writing any data into that space. This can be useful in limiting data fragmentation and poor read performance. Red Hat Enterprise Linux 7 supports pre-allocating space on XFS, ext4, and GFS2 devices at mount time; see the mount man page for the appropriate parameter for your file system. Applications can also benefit from pre-allocating space by using the fallocate(2) glibc call.
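As a brief illustration of both maintenance operations (the mount point and file name are placeholders):
# Batch discard: trim all unused blocks on a mounted file system and report how much was discarded
fstrim -v /mnt/example
# Pre-allocation from the command line: reserve 1 GB for a file without writing data into it
fallocate -l 1G /mnt/example/datafile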
[ "man mkfs.xfs", "man mkfs.ext4", "man mkfs", "man mount", "man mount" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/chap-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Storage_and_File_Systems
Machine management
Machine management OpenShift Container Platform 4.13 Adding and maintaining cluster machines Red Hat OpenShift Documentation Team
[ "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags", "spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp", "spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 
spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role>-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m 
agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "providerSpec: value: spotMarketOptions: {}", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-52-50.us-east-2.compute.internal Ready worker 3d17h v1.26.0 ip-10-0-58-24.us-east-2.compute.internal Ready control-plane,master 3d17h v1.26.0 ip-10-0-68-148.us-east-2.compute.internal Ready worker 3d17h v1.26.0 ip-10-0-68-68.us-east-2.compute.internal Ready control-plane,master 3d17h v1.26.0 ip-10-0-72-170.us-east-2.compute.internal Ready control-plane,master 3d17h v1.26.0 ip-10-0-74-50.us-east-2.compute.internal Ready worker 3d17h v1.26.0", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE preserve-dsoc12r4-ktjfc-worker-us-east-2a 1 1 1 1 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b 2 2 2 2 3d11h", "oc get machines -n openshift-machine-api | grep worker", "preserve-dsoc12r4-ktjfc-worker-us-east-2a-dts8r Running m5.xlarge us-east-2 us-east-2a 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-dkv7w Running m5.xlarge us-east-2 us-east-2b 3d11h preserve-dsoc12r4-ktjfc-worker-us-east-2b-k58cw Running m5.xlarge us-east-2 us-east-2b 3d11h", "oc get machineset preserve-dsoc12r4-ktjfc-worker-us-east-2a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.instanceType preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json \"g4dn.xlarge\"", "oc -n openshift-machine-api get preserve-dsoc12r4-ktjfc-worker-us-east-2a -o json | diff preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json -", "10c10 < \"name\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\", --- > \"name\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\", 21c21 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 31c31 < \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"preserve-dsoc12r4-ktjfc-worker-us-east-2a\" 60c60 < \"instanceType\": \"g4dn.xlarge\", --- > \"instanceType\": \"m5.xlarge\",", "oc create -f preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a.json", "machineset.machine.openshift.io/preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a created", "oc -n openshift-machine-api get machinesets | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a 1 1 1 1 4m21s", "oc -n openshift-machine-api get machines | grep gpu", "preserve-dsoc12r4-ktjfc-worker-gpu-us-east-2a running g4dn.xlarge us-east-2 us-east-2a 4m36s", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node 
ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 
machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: spotVMOptions: {}", "oc edit machineset <machine-set-name>", "providerSpec: value: osDisk: diskSettings: 1 ephemeralStorageLocation: Local 2 cachingType: ReadOnly 3 managedDisk: storageAccountType: Standard_LRS 4", "oc create -f <machine-set-config>.yaml", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 
\"enabled\": true, \"name\": \"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc edit machineset <machine-set-name>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc create -f <machine-set-name>.yaml", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 6h9m myclustername-worker-centralus2 1 1 1 1 6h9m myclustername-worker-centralus3 1 1 1 1 6h9m", "oc get machineset -n openshift-machine-api myclustername-worker-centralus1 -o yaml > machineset-azure.yaml", "cat machineset-azure.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"0\" machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T14:08:19Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"23601\" uid: acd56e0c-7612-473a-ae37-8704f34b80de spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: 
/resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "cp machineset-azure.yaml machineset-azure-gpu.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/GPU: \"1\" machine.openshift.io/memoryMb: \"28672\" machine.openshift.io/vCPU: \"4\" creationTimestamp: \"2023-02-06T20:27:12Z\" generation: 1 labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: myclustername-nc4ast4-gpu-worker-centralus1 namespace: openshift-machine-api resourceVersion: \"166285\" uid: 4eedce7f-6a57-4abe-b529-031140f02ffa spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 template: metadata: labels: machine.openshift.io/cluster-api-cluster: myclustername machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api diagnostics: {} image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/myclustername-rg/providers/Microsoft.Compute/galleries/gallery_myclustername_n6n4r/images/myclustername-gen2/versions/latest sku: \"\" version: \"\" kind: AzureMachineProviderSpec location: centralus managedIdentity: myclustername-identity metadata: creationTimestamp: null networkResourceGroup: myclustername-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: myclustername resourceGroup: myclustername-rg spotVMOptions: {} subnet: myclustername-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_NC4as_T4_v3 vnet: myclustername-vnet zone: \"1\" status: availableReplicas: 1 fullyLabeledReplicas: 1 observedGeneration: 1 readyReplicas: 1 replicas: 1", "diff machineset-azure.yaml machineset-azure-gpu.yaml", "14c14 < name: myclustername-worker-centralus1 --- > name: myclustername-nc4ast4-gpu-worker-centralus1 23c23 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 30c30 < machine.openshift.io/cluster-api-machineset: myclustername-worker-centralus1 --- > machine.openshift.io/cluster-api-machineset: myclustername-nc4ast4-gpu-worker-centralus1 67c67 < vmSize: Standard_D4s_v3 --- > vmSize: Standard_NC4as_T4_v3", "oc create -f machineset-azure-gpu.yaml", 
"machineset.machine.openshift.io/myclustername-nc4ast4-gpu-worker-centralus1 created", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE clustername-n6n4r-nc4ast4-gpu-worker-centralus1 1 1 1 1 122m clustername-n6n4r-worker-centralus1 1 1 1 1 8h clustername-n6n4r-worker-centralus2 1 1 1 1 8h clustername-n6n4r-worker-centralus3 1 1 1 1 8h", "oc get machines -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE myclustername-master-0 Running Standard_D8s_v3 centralus 2 6h40m myclustername-master-1 Running Standard_D8s_v3 centralus 1 6h40m myclustername-master-2 Running Standard_D8s_v3 centralus 3 6h40m myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running centralus 1 21m myclustername-worker-centralus1-rbh6b Running Standard_D4s_v3 centralus 1 6h38m myclustername-worker-centralus2-dbz7w Running Standard_D4s_v3 centralus 2 6h38m myclustername-worker-centralus3-p9b8c Running Standard_D4s_v3 centralus 3 6h38m", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-master-0 Ready control-plane,master 6h39m v1.26.0 myclustername-master-1 Ready control-plane,master 6h41m v1.26.0 myclustername-master-2 Ready control-plane,master 6h39m v1.26.0 myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Ready worker 14m v1.26.0 myclustername-worker-centralus1-rbh6b Ready worker 6h29m v1.26.0 myclustername-worker-centralus2-dbz7w Ready worker 6h29m v1.26.0 myclustername-worker-centralus3-p9b8c Ready worker 6h31m v1.26.0", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc create -f machineset-azure-gpu.yaml", "get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m myclustername-worker-centralus1 1 1 1 1 8h myclustername-worker-centralus2 1 1 1 1 8h myclustername-worker-centralus3 1 1 1 1 8h", "oc get machineset -n openshift-machine-api | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1 1 1 1 1 121m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-nc4ast4-gpu-worker-centralus1-w9bqn Running Standard_NC4as_T4_v3 centralus 1 21m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 11 providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 12 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "providerSpec: diagnostics: boot: 
storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE 
agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: type: <pd-disk-type> 1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "providerSpec: value: preemptible: true", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "providerSpec: value: machineType: a2-highgpu-1g 1 onHostMaintenance: Terminate 2 restartPolicy: Always 3", "providerSpec: value: gpus: - count: 1 1 type: nvidia-tesla-p100 2 machineType: n1-standard-1 3 onHostMaintenance: Terminate 4 restartPolicy: Always 5", "machineType: a2-highgpu-1g onHostMaintenance: Terminate", "{ \"apiVersion\": \"machine.openshift.io/v1beta1\", \"kind\": \"MachineSet\", \"metadata\": { \"annotations\": { \"machine.openshift.io/GPU\": \"0\", \"machine.openshift.io/memoryMb\": \"16384\", \"machine.openshift.io/vCPU\": \"4\" }, \"creationTimestamp\": \"2023-01-13T17:11:02Z\", \"generation\": 1, \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\" }, \"name\": \"myclustername-2pt9p-worker-gpu-a\", \"namespace\": \"openshift-machine-api\", \"resourceVersion\": \"20185\", \"uid\": \"2daf4712-733e-4399-b4b4-d43cb1ed32bd\" }, \"spec\": { \"replicas\": 1, \"selector\": { \"matchLabels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"template\": { \"metadata\": { \"labels\": { \"machine.openshift.io/cluster-api-cluster\": \"myclustername-2pt9p\", \"machine.openshift.io/cluster-api-machine-role\": \"worker\", \"machine.openshift.io/cluster-api-machine-type\": \"worker\", \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" } }, \"spec\": { \"lifecycleHooks\": {}, \"metadata\": {}, \"providerSpec\": { \"value\": { \"apiVersion\": \"machine.openshift.io/v1beta1\", \"canIPForward\": false, \"credentialsSecret\": { \"name\": \"gcp-cloud-credentials\" }, \"deletionProtection\": false, \"disks\": [ { \"autoDelete\": true, \"boot\": true, \"image\": \"projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64\", \"labels\": 
null, \"sizeGb\": 128, \"type\": \"pd-ssd\" } ], \"kind\": \"GCPMachineProviderSpec\", \"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\", \"metadata\": { \"creationTimestamp\": null }, \"networkInterfaces\": [ { \"network\": \"myclustername-2pt9p-network\", \"subnetwork\": \"myclustername-2pt9p-worker-subnet\" } ], \"preemptible\": true, \"projectID\": \"myteam\", \"region\": \"us-central1\", \"serviceAccounts\": [ { \"email\": \"[email protected]\", \"scopes\": [ \"https://www.googleapis.com/auth/cloud-platform\" ] } ], \"tags\": [ \"myclustername-2pt9p-worker\" ], \"userDataSecret\": { \"name\": \"worker-user-data\" }, \"zone\": \"us-central1-a\" } } } } }, \"status\": { \"availableReplicas\": 1, \"fullyLabeledReplicas\": 1, \"observedGeneration\": 1, \"readyReplicas\": 1, \"replicas\": 1 } }", "oc get nodes", "NAME STATUS ROLES AGE VERSION myclustername-2pt9p-master-0.c.openshift-qe.internal Ready control-plane,master 8h v1.26.0 myclustername-2pt9p-master-1.c.openshift-qe.internal Ready control-plane,master 8h v1.26.0 myclustername-2pt9p-master-2.c.openshift-qe.internal Ready control-plane,master 8h v1.26.0 myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal Ready worker 8h v1.26.0 myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal Ready worker 8h v1.26.0 myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal Ready worker 8h v1.26.0 myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal Ready worker 4h35m v1.26.0", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE myclustername-2pt9p-worker-a 1 1 1 1 8h myclustername-2pt9p-worker-b 1 1 1 1 8h myclustername-2pt9p-worker-c 1 1 8h myclustername-2pt9p-worker-f 0 0 8h", "oc get machines -n openshift-machine-api | grep worker", "myclustername-2pt9p-worker-a-mxtnz Running n2-standard-4 us-central1 us-central1-a 8h myclustername-2pt9p-worker-b-9pzzn Running n2-standard-4 us-central1 us-central1-b 8h myclustername-2pt9p-worker-c-6pbg6 Running n2-standard-4 us-central1 us-central1-c 8h", "oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>", "jq .spec.template.spec.providerSpec.value.machineType ocp_4.13_machineset-a2-highgpu-1g.json \"a2-highgpu-1g\"", "\"machineType\": \"a2-highgpu-1g\", \"onHostMaintenance\": \"Terminate\",", "oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.13_machineset-a2-highgpu-1g.json -", "15c15 < \"name\": \"myclustername-2pt9p-worker-gpu-a\", --- > \"name\": \"myclustername-2pt9p-worker-a\", 25c25 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 34c34 < \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-gpu-a\" --- > \"machine.openshift.io/cluster-api-machineset\": \"myclustername-2pt9p-worker-a\" 59,60c59 < \"machineType\": \"a2-highgpu-1g\", < \"onHostMaintenance\": \"Terminate\", --- > \"machineType\": \"n2-standard-4\",", "oc create -f ocp_4.13_machineset-a2-highgpu-1g.json", "machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created", "oc -n openshift-machine-api get machinesets | grep gpu", "myclustername-2pt9p-worker-gpu-a 1 1 1 1 5h24m", "oc -n openshift-machine-api get machines | grep gpu", "myclustername-2pt9p-worker-gpu-a-wxcr6 Running a2-highgpu-1g us-central1 us-central1-a 5h25m", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE 
nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 1d", "oc get pods -n openshift-nfd", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-8646fcbb65-x5qgk 2/2 Running 7 (8h ago) 12d nfd-master-769656c4cb-w9vrv 1/1 Running 0 12d nfd-worker-qjxb2 1/1 Running 3 (3d14h ago) 12d nfd-worker-xtz9b 1/1 Running 5 (3d14h ago) 12d", "oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'", "Roles: worker feature.node.kubernetes.io/pci-1013.present=true feature.node.kubernetes.io/pci-10de.present=true feature.node.kubernetes.io/pci-1d0f.present=true", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m 
agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: powervs-credentials image: name: rhcos-<infrastructure_id> 11 type: Name keyPairName: <infrastructure_id>-key kind: PowerVSMachineProviderConfig memoryGiB: 32 network: regex: ^DHCPSERVER[0-9a-z]{32}_Private$ type: RegEx processorType: Shared processors: \"0.5\" serviceInstance: id: <ibm_power_vs_service_instance_id> type: ID 12 systemType: s922 userDataSecret: name: <role>-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o
jsonpath='{.status.platform}'", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> name: <infrastructure_id>-<role>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 
machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <role> 8 machine.openshift.io/cluster-api-machine-type: <role> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 10 spec: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 11 kind: OpenstackProviderSpec networks: 12 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 13 primarySubnet: <rhosp_subnet_UUID> 14 securityGroups: - filter: {} name: <infrastructure_id>-worker 15 serverMetadata: Name: <infrastructure_id>-worker 16 openshiftClusterID: <infrastructure_id> 17 tags: - openshiftClusterID=<infrastructure_id> 18 trunk: true userDataSecret: name: worker-user-data 19 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> kind: OpenstackProviderSpec networks: - subnets: - UUID: <machines_subnet_UUID> ports: - networkID: <radio_network_UUID> 1 nameSuffix: radio fixedIPs: - subnetID: <radio_subnet_UUID> 2 tags: - sriov - radio vnicType: direct 3 portSecurity: false 4 - networkID: <uplink_network_UUID> 5 nameSuffix: uplink fixedIPs: - subnetID: <uplink_subnet_UUID> 6 tags: - sriov - uplink vnicType: direct 7 portSecurity: false 8 primarySubnet: <machines_subnet_UUID> securityGroups: - filter: {} name: <infrastructure_id>-<node_role> serverMetadata: Name: <infrastructure_id>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: true userDataSecret: name: <node_role>-user-data availabilityZone: <optional_openstack_availability_zone>", "oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: 
machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> name: <infrastructure_id>-<node_role> namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <node_role> machine.openshift.io/cluster-api-machine-type: <node_role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<node_role> spec: metadata: {} providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> kind: OpenstackProviderSpec ports: - allowedAddressPairs: 1 - ipAddress: <API_VIP_port_IP> - ipAddress: <ingress_VIP_port_IP> fixedIPs: - subnetID: <machines_subnet_UUID> 2 nameSuffix: nodes networkID: <machines_network_UUID> 3 securityGroups: - <compute_security_group_UUID> 4 - networkID: <SRIOV_network_UUID> nameSuffix: sriov fixedIPs: - subnetID: <SRIOV_subnet_UUID> tags: - sriov vnicType: direct portSecurity: False primarySubnet: <machines_subnet_UUID> serverMetadata: Name: <infrastructure_ID>-<node_role> openshiftClusterID: <infrastructure_id> tags: - openshiftClusterID=<infrastructure_id> trunk: false userDataSecret: name: worker-user-data", "networks: - subnets: - uuid: <machines_subnet_UUID> portSecurityEnabled: false portSecurityEnabled: false securityGroups: []", "openstack port set --enable-port-security --security-group <infrastructure_id>-<node_role> <main_port_ID>", "oc label node <NODE_NAME> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 
kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: 
machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 10 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 11 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 12 datastore: <vcenter_datastore_name> 13 folder: <vcenter_vm_folder_path> 14 resourcepool: <vsphere_resource_pool> 15 server: <vcenter_server_ip> 16", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "oc get secret -n openshift-machine-api vsphere-cloud-credentials -o go-template='{{range $k,$v := .data}}{{printf \"%s: \" $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "<vcenter-server>.password=<openshift-user-password> <vcenter-server>.username=<openshift-user>", "oc create secret generic vsphere-cloud-credentials -n openshift-machine-api --from-literal=<vcenter-server>.username=<openshift-user> --from-literal=<vcenter-server>.password=<openshift-user-password>", "oc get secret -n openshift-machine-api worker-user-data -o go-template='{{range $k,$v := .data}}{{printf \"%s: \" $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{\"\\n\"}}{{end}}'", "disableTemplating: false userData: 1 { \"ignition\": { }, }", "oc create secret generic worker-user-data -n openshift-machine-api --from-file=<installation_directory>/worker.ign", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec:
3", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 16384 network: devices: - networkName: \"<vm_network_name>\" numCPUs: 4 numCoresPerSocket: 4 snapshot: \"\" template: <vm_template_name> 2 userDataSecret: name: worker-user-data 3 workspace: datacenter: <vcenter_datacenter_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_address> 4", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <role> 6 machine.openshift.io/cluster-api-machine-type: <role> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/<role>: \"\" 9 providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 hostSelector: {} image: checksum: http:/172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2.<md5sum> 10 url: http://172.22.0.3:6181/images/rhcos-<version>.<architecture>.qcow2 11 kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: worker-user-data", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines", "spec: deletePolicy: <delete_policy> replicas: <desired_replica_count>", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m", "oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h", "oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s", "oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s", "NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running 
m6i.xlarge us-west-1 us-west-1a 6m30s", "oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A", "oc get machineset -o yaml", "oc delete machineset <machineset-name>", "oc get nodes", "oc get machine -n openshift-machine-api", "NAME PHASE TYPE REGION ZONE AGE mycluster-5kbsp-master-0 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-master-1 Running m6i.xlarge us-west-1 us-west-1b 4h55m mycluster-5kbsp-master-2 Running m6i.xlarge us-west-1 us-west-1a 4h55m mycluster-5kbsp-worker-us-west-1a-fmx8t Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1a-m889l Running m6i.xlarge us-west-1 us-west-1a 4h51m mycluster-5kbsp-worker-us-west-1b-c8qzm Running m6i.xlarge us-west-1 us-west-1b 4h51m", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: name: mycluster-5kbsp-worker-us-west-1a-fmx8t status: phase: Running 1", "oc get machine -n openshift-machine-api", "oc delete machine <machine> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: <hook_name> 1 owner: <hook_owner> 2", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preTerminate: - name: <hook_name> 1 owner: <hook_owner> 2", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: 1 - name: MigrateImportantApp owner: my-app-migration-controller preTerminate: 2 - name: BackupFileSystem owner: my-backup-controller - name: CloudProviderSpecialCase owner: my-custom-storage-detach-controller 3 - name: WaitForStorageDetach owner: my-custom-storage-detach-controller", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2", "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: <gpu_type> 7 min: 0 8 max: 16 9 logVerbosity: 4 10 scaleDown: 11 enabled: true 12 delayAfterAdd: 10m 13 delayAfterDelete: 5m 14 delayAfterFailure: 30s 15 unneededTime: 5m 16 utilizationThreshold: \"0.4\" 17", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: machine-set-name spec: template: spec: metadata: labels: cluster-api/accelerator: nvidia-t4 1", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "oc get MachineAutoscaler -n openshift-machine-api", "NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m", "oc get MachineAutoscaler/<machine_autoscaler_name> \\ 1 -n openshift-machine-api -o yaml> <machine_autoscaler_name_backup>.yaml 2", "oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api", "machineautoscaler.autoscaling.openshift.io \"compute-us-east-1a\" deleted", "oc get MachineAutoscaler -n openshift-machine-api", "oc get ClusterAutoscaler", "NAME AGE default 42m", "oc get ClusterAutoscaler/default \\ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2", "oc delete 
ClusterAutoscaler/default", "clusterautoscaler.autoscaling.openshift.io \"default\" deleted", "oc get ClusterAutoscaler", "No resources found", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<zone> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 credentialsSecret: name: alibabacloud-credentials imageId: <image_id> 11 instanceType: <instance_type> 12 kind: AlibabaCloudMachineProviderConfig ramRoleName: <infrastructure_id>-role-worker 13 regionId: <region> 14 resourceGroup: 15 id: <resource_group_id> type: ID securityGroups: - tags: 16 - Key: Name Value: <infrastructure_id>-sg-<role> type: Tags systemDisk: 17 category: cloud_essd size: <disk_size> tag: 18 - Key: kubernetes.io/cluster/<infrastructure_id> Value: owned userDataSecret: name: <user_data_secret> 19 vSwitch: tags: 20 - Key: Name Value: <infrastructure_id>-vswitch-<zone> type: Tags vpcId: \"\" zoneId: <zone> 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "spec: template: spec: providerSpec: value: securityGroups: - tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 2 Value: ocp - Key: Name Value: <infrastructure_id>-sg-<role> 3 type: Tags", "spec: template: spec: providerSpec: value: tag: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp", "spec: template: spec: providerSpec: value: vSwitch: tags: - Key: kubernetes.io/cluster/<infrastructure_id> 1 Value: owned - Key: GISV 2 Value: ocp - Key: sigs.k8s.io/cloud-provider-alibaba/origin 3 Value: ocp - Key: Name Value: <infrastructure_id>-vswitch-<zone> 4 type: Tags", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: infra 6 machine.openshift.io/cluster-api-machine-type: infra 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 providerSpec: value: ami: id: ami-046fe691f52a953f9 10 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - 
ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 11 instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 12 region: <region> 13 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 14 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 15 tags: - name: kubernetes.io/cluster/<infrastructure_id> 16 value: owned - name: <custom_tag_name> 17 value: <custom_tag_value> 18 userDataSecret: name: worker-user-data taints: 19 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: infra 2 machine.openshift.io/cluster-api-machine-type: infra name: <infrastructure_id>-infra-<region> 3 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: infra machine.openshift.io/cluster-api-machine-type: infra machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> spec: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-machineset: <machineset_name> node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 4 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/galleries/gallery_<infrastructure_id>/images/<infrastructure_id>-gen2/versions/latest 5 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 6 managedIdentity: <infrastructure_id>-identity metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg sshPrivateKey: \"\" sshPublicKey: \"\" tags: - name: <custom_tag_name> 7 value: <custom_tag_value> subnet: <infrastructure_id>-<role>-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4s_v3 vnet: <infrastructure_id>-vnet zone: \"1\" 8 taints: 9 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 
machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 taints: 12 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: machine.openshift.io/v1beta1 availabilitySet: <availability_set> 13 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 14 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 15 managedIdentity: <infrastructure_id>-identity 16 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 17 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 18 19 userDataSecret: name: worker-user-data 20 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 21 zone: \"1\" 22", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-<infra>-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<region> 10 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: ibmcloudproviderconfig.openshift.io/v1beta1 credentialsSecret: name: ibmcloud-credentials image: <infrastructure_id>-rhcos 11 kind: IBMCloudMachineProviderSpec primaryNetworkInterface: securityGroups: - <infrastructure_id>-sg-cluster-wide - <infrastructure_id>-sg-openshift-net subnet: <infrastructure_id>-subnet-compute-<zone> 12 profile: <instance_profile> 13 region: <region> 14 resourceGroup: <resource_group> 15 userDataSecret: name: <role>-user-data 16 vpc: <vpc_name> 17 zone: <zone> 18 taints: 19 - key: node-role.kubernetes.io/infra 
effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: 6 - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 7 - key: node-role.kubernetes.io/infra effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> name: <infrastructure_id>-<infra>-<zone> 3 namespace: openshift-machine-api annotations: 4 machine.openshift.io/memoryMb: \"16384\" machine.openshift.io/vCPU: \"4\" spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<infra>-<zone> spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: machine.openshift.io/v1 bootType: \"\" 5 categories: 6 - key: <category_name> value: <category_value> cluster: 7 type: uuid uuid: <cluster_uuid> credentialsSecret: name: nutanix-credentials image: name: <infrastructure_id>-rhcos 8 type: name kind: NutanixMachineProviderConfig memorySize: 16Gi 9 project: 10 type: name name: <project_name> subnets: - type: uuid uuid: <subnet_uuid> systemDiskSize: 120Gi 11 userDataSecret: name: <user_data_secret> 12 vcpuSockets: 4 13 vcpusPerSocket: 1 14 taints: 15 - key: 
node-role.kubernetes.io/infra effect: NoSchedule", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 Selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 sparse: <boolean_value> 16 format: <raw_or_cow> 17 cpu: 18 sockets: <number_of_sockets> 19 cores: <number_of_cores> 20 threads: <number_of_threads> 21 memory_mb: <memory_size> 22 guaranteed_memory_mb: <memory_size> 23 os_disk: 24 size_gb: <disk_size> 25 storage_domain_id: <storage_domain_UUID> 26 network_interfaces: 27 vnic_profile_id: <vnic_profile_id> 28 credentialsSecret: name: ovirt-credentials 29 kind: OvirtMachineProviderSpec type: <workload_type> 30 auto_pinning_policy: <auto_pinning_policy> 31 hugepages: <hugepages> 32 affinityGroupsNames: - compute 33 userDataSecret: name: worker-user-data", "oc get -o 
jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "oc label node <node_name> <label>", "oc label node 
ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=", "cat infra.mcp.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2", "oc create -f infra.mcp.yaml", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d", "cat infra.mc.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra", "oc create -f infra.mc.yaml", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m", "oc describe nodes <node_name>", "describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoSchedule", "kind: Node 
apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoSchedule value: reserved", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved", "tolerations: - effect: NoSchedule 1 key: node-role.kubernetes.io/infra 2 value: reserved 3 - effect: NoExecute 4 key: node-role.kubernetes.io/infra 5 operator: Equal 6 value: reserved 7", "spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get ingresscontroller default -n openshift-ingress-operator -o yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default", "oc edit ingresscontroller default -n openshift-ingress-operator", "spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pod -n openshift-ingress -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>", "oc get node <node_name> 1", "NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.26.0", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:", "oc edit configs.imageregistry.operator.openshift.io/cluster", "spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: 
NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc get pods -o wide -n openshift-image-registry", "oc describe node <node_name>", "oc edit configmap cluster-monitoring-config -n openshift-monitoring", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute", "watch 'oc get pod -n openshift-monitoring -o wide'", "oc delete pod -n openshift-monitoring <pod>", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" 
--enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-8.4*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "------------------------------------------------------------------------------------------------------------ | DescribeImages | +---------------------------+-----------------------------------------------------+------------------------+ | 2021-03-18T14:23:11.000Z | RHEL-8.4.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | | 2021-03-18T14:38:28.000Z | RHEL-8.4.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | | 2021-05-18T19:06:34.000Z | RHEL-8.4.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | | 2021-05-18T20:09:47.000Z | RHEL-8.4.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | +---------------------------+-----------------------------------------------------+------------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", 
"subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.13-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com [new_workers] mycluster-rhel8-2.example.com mycluster-rhel8-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "aws cloudformation describe-stacks --stack-name <name>", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m 
v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m 
system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "oc get machine -n openshift-machine-api -l machine.openshift.io/cluster-api-machine-role=master", "NAME PHASE TYPE REGION ZONE AGE <infrastructure_id>-master-0 Running m6i.xlarge us-west-1 us-west-1a 5h19m <infrastructure_id>-master-1 Running m6i.xlarge us-west-1 us-west-1b 5h19m <infrastructure_id>-master-2 Running m6i.xlarge us-west-1 us-west-1a 5h19m", "No resources found in openshift-machine-api namespace.", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: replicas: 3 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 1 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 2 strategy: type: RollingUpdate 3 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 4 <platform_failure_domains> 5 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> 6 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 7", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc create -f <control_plane_machine_set>.yaml", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster 1 namespace: openshift-machine-api spec: replicas: 3 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <cluster_id> 3 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master state: Active 4 strategy: type: RollingUpdate 5 template: machineType: machines_v1beta1_machine_openshift_io machines_v1beta1_machine_openshift_io: failureDomains: platform: <platform> 6 <platform_failure_domains> 7 metadata: labels: machine.openshift.io/cluster-api-cluster: <cluster_id> machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master spec: providerSpec: value: <platform_provider_spec> 8", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "providerSpec: value: ami: id: ami-<ami_id_string> 1 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: 2 encrypted: true iops: 0 kmsKey: arn: \"\" volumeSize: 
120 volumeType: gp3 credentialsSecret: name: aws-cloud-credentials 3 deviceIndex: 0 iamInstanceProfile: id: <cluster_id>-master-profile 4 instanceType: m6i.xlarge 5 kind: AWSMachineProviderConfig 6 loadBalancers: 7 - name: <cluster_id>-int type: network - name: <cluster_id>-ext type: network metadata: creationTimestamp: null metadataServiceOptions: {} placement: 8 region: <region> 9 securityGroups: - filters: - name: tag:Name values: - <cluster_id>-master-sg 10 subnet: {} 11 userDataSecret: name: master-user-data 12", "failureDomains: aws: - placement: availabilityZone: <aws_zone_a> 1 subnet: 2 filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_a> 3 type: Filters 4 - placement: availabilityZone: <aws_zone_b> 5 subnet: filters: - name: tag:Name values: - <cluster_id>-private-<aws_zone_b> 6 type: Filters platform: AWS 7", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.machines_v1beta1_machine_openshift_io.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get ControlPlaneMachineSet/cluster", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet metadata: name: cluster namespace: openshift-machine-api spec: template: spec: providerSpec: value: apiVersion: machine.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials 1 deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 2 labels: null sizeGb: 200 type: pd-ssd kind: GCPMachineProviderSpec 3 machineType: e2-standard-4 metadata: creationTimestamp: null metadataServiceOptions: {} networkInterfaces: - network: <cluster_id>-network subnetwork: <cluster_id>-master-subnet projectID: <project_name> 4 region: <region> 5 serviceAccounts: 6 - email: <cluster_id>-m@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform shieldedInstanceConfig: {} tags: - <cluster_id>-master targetPools: - <cluster_id>-api userDataSecret: name: master-user-data 7 zone: \"\" 8", "failureDomains: gcp: - zone: <gcp_zone_a> 1 - zone: <gcp_zone_b> 2 - zone: <gcp_zone_c> - zone: <gcp_zone_d> platform: GCP 3", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials 1 namespace: openshift-machine-api diagnostics: {} image: 2 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<cluster_id>-rg/providers/Microsoft.Compute/galleries/gallery_<cluster_id>/images/<cluster_id>-gen2/versions/412.86.20220930 3 sku: \"\" version: \"\" internalLoadBalancer: <cluster_id>-internal 4 kind: AzureMachineProviderSpec 5 location: <region> 6 managedIdentity: <cluster_id>-identity metadata: creationTimestamp: null name: <cluster_id> networkResourceGroup: <cluster_id>-rg osDisk: 7 diskSettings: {} diskSizeGB: 1024 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <cluster_id> 8 resourceGroup: <cluster_id>-rg subnet: <cluster_id>-master-subnet 9 userDataSecret: name: master-user-data 10 vmSize: Standard_D8s_v3 vnet: <cluster_id>-vnet zone: \"\" 11", "failureDomains: azure: 1 - zone: \"1\" - zone: \"2\" - zone: \"3\" platform: Azure 2", "providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials 1 diskGiB: 120 2 kind: VSphereMachineProviderSpec 3 memoryMiB: 16384 4 metadata: creationTimestamp: null network: 5 
devices: - networkName: <vm_network_name> numCPUs: 4 6 numCoresPerSocket: 4 7 snapshot: \"\" template: <vm_template_name> 8 userDataSecret: name: master-user-data 9 workspace: datacenter: <vcenter_datacenter_name> 10 datastore: <vcenter_datastore_name> 11 folder: <path_to_vcenter_vm_folder> 12 resourcePool: <vsphere_resource_pool> 13 server: <vcenter_server_ip> 14", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc delete machine -n openshift-machine-api <control_plane_machine_name> 1", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "providerSpec: value: instanceType: <compatible_aws_instance_type> 1", "providerSpec: value: metadataServiceOptions: authentication: Required 1", "providerSpec: placement: tenancy: dedicated", "providerSpec: value: loadBalancers: - name: lk4pj-ext 1 type: network 2 - name: lk4pj-int type: network", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "providerSpec: value: image: offer: rh-ocp-worker publisher: redhat resourceID: \"\" sku: rh-ocp-worker type: MarketplaceWithPlan version: 4.8.2021122100", "providerSpec: diagnostics: boot: storageAccountType: AzureManaged 1", "providerSpec: diagnostics: boot: storageAccountType: CustomerManaged 1 customerManaged: storageAccountURI: https://<storage-account>.blob.core.windows.net 2", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.userData | base64decode}}' | jq > userData.txt 2", "\"storage\": { \"disks\": [ 1 { \"device\": \"/dev/disk/azure/scsi1/lun0\", 2 \"partitions\": [ 3 { \"label\": \"lun0p1\", 4 \"sizeMiB\": 1024, 5 \"startMiB\": 0 } ] } ], \"filesystems\": [ 6 { \"device\": \"/dev/disk/by-partlabel/lun0p1\", \"format\": \"xfs\", \"path\": \"/var/lib/lun0p1\" } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var/lib/lun0p1\\nWhat=/dev/disk/by-partlabel/lun0p1\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", 8 \"enabled\": true, \"name\": 
\"var-lib-lun0p1.mount\" } ] }", "oc -n openshift-machine-api get secret <role>-user-data \\ 1 --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc -n openshift-machine-api create secret generic <role>-user-data-x5 \\ 1 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "oc --namespace openshift-machine-api edit controlplanemachineset.machine.openshift.io cluster", "apiVersion: machine.openshift.io/v1beta1 kind: ControlPlaneMachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 dataDisks: 3 - nameSuffix: ultrassd lun: 0 diskSizeGB: 4 deletionPolicy: Delete cachingType: None managedDisk: storageAccountType: UltraSSD_LRS userDataSecret: name: <role>-user-data-x5 4", "oc get machines", "oc debug node/<node-name> -- chroot /host lsblk", "StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.", "failed to create vm <machine_name>: failure sending request for machine <machine_name>: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code=\"BadRequest\" Message=\"Storage Account type 'UltraSSD_LRS' is not supported <more_information_about_why>.\"", "providerSpec: value: osDisk: diskSizeGB: 128 managedDisk: diskEncryptionSet: id: /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/diskEncryptionSets/<disk_encryption_set_name> storageAccountType: Premium_LRS", "providerSpec: value: acceleratedNetworking: true 1 vmSize: <azure-vm-size> 2", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: type: pd-ssd 1", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: confidentialCompute: Enabled 1 onHostMaintenance: Terminate 2 machineType: n2d-standard-8 3", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: shieldedInstanceConfig: 1 integrityMonitoring: Enabled 2 secureBoot: Disabled 3 virtualizedTrustedPlatformModule: Enabled 4", "gcloud kms keys add-iam-policy-binding <key_name> --keyring <key_ring_name> --location <key_ring_location> --member \"serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com\" --role roles/cloudkms.cryptoKeyEncrypterDecrypter", "apiVersion: machine.openshift.io/v1 kind: ControlPlaneMachineSet spec: template: spec: providerSpec: value: disks: - type: encryptionKey: kmsKey: name: machine-encryption-key 1 keyRing: openshift-encrpytion-ring 2 location: global 3 projectID: openshift-gcp-project 4 kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com 5", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: spec: lifecycleHooks: preDrain: - name: EtcdQuorumOperator 1 owner: clusteroperator/etcd 2", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api", "oc edit machine <control_plane_machine_name>", "oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api", "oc get machines -l machine.openshift.io/cluster-api-machine-role==master -n openshift-machine-api -o wide", "oc edit machine <control_plane_machine_name>", "oc delete controlplanemachineset.machine.openshift.io 
cluster -n openshift-machine-api", "oc get controlplanemachineset.machine.openshift.io cluster --namespace openshift-machine-api", "oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}'", "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api", "oc create -f <cluster_resource_file>.yaml", "oc get cluster", "NAME PHASE AGE VERSION <cluster_name> Provisioning 4h6m", "apiVersion: infrastructure.cluster.x-k8s.io/<version> 1 kind: <infrastructure_kind> 2 metadata: name: <cluster_name> 3 namespace: openshift-cluster-api spec: 4", "oc create -f <infrastructure_resource_file>.yaml", "oc get <infrastructure_kind>", "NAME CLUSTER READY <cluster_name> <cluster_name> true", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <machine_template_kind> 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3", "oc create -f <machine_template_resource_file>.yaml", "oc get <machine_template_kind>", "NAME AGE <template_name> 77m", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: 3", "oc create -f <machine_set_resource_file>.yaml", "oc get machineset -n openshift-cluster-api 1", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <machine_set_name> <cluster_name> 1 1 1 17m", "oc get machine -n openshift-cluster-api 1", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_set_name>-<string_id> <cluster_name> <ip_address>.<region>.compute.internal <provider_id> Running 8m23s", "oc get node", "NAME STATUS ROLES AGE VERSION <ip_address_1>.<region>.compute.internal Ready worker 5h14m v1.28.5 <ip_address_2>.<region>.compute.internal Ready master 5h19m v1.28.5 <ip_address_3>.<region>.compute.internal Ready worker 7m v1.28.5", "oc get <machine_template_kind> 1", "NAME AGE <template_name> 77m", "oc get <machine_template_kind> <template_name> -o yaml > <template_name>.yaml", "oc apply -f <modified_template_name>.yaml 1", "oc get machinesets.cluster.x-k8s.io -n openshift-cluster-api", "NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION <compute_machine_set_name_1> <cluster_name> 1 1 1 26m <compute_machine_set_name_2> <cluster_name> 1 1 1 26m", "oc edit machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-cluster-api spec: replicas: 2 1", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h", "oc annotate machines.cluster.x-k8s.io/<machine_name_original_1> -n openshift-cluster-api cluster.x-k8s.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api -l cluster.x-k8s.io/set-name=<machine_set_name>", 
"NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 4h <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioned 55s <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Provisioning 55s", "oc scale --replicas=2 \\ 1 machinesets.cluster.x-k8s.io <machine_set_name> -n openshift-cluster-api", "oc describe machines.cluster.x-k8s.io <machine_name_updated_1> -n openshift-cluster-api", "oc get machines.cluster.x-k8s.io -n openshift-cluster-api cluster.x-k8s.io/set-name=<machine_set_name>", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_original_1> <cluster_name> <original_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_original_2> <cluster_name> <original_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m", "NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION <machine_name_updated_1> <cluster_name> <updated_1_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m <machine_name_updated_2> <cluster_name> <updated_2_ip>.<region>.compute.internal aws:///us-east-2a/i-04e7b2cbd61fd2075 Running 18m", "apiVersion: cluster.x-k8s.io/v1beta1 kind: Cluster metadata: name: <cluster_name> 1 namespace: openshift-cluster-api spec: infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: <infrastructure_kind> 2 name: <cluster_name> namespace: openshift-cluster-api", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSCluster 1 metadata: name: <cluster_name> 2 namespace: openshift-cluster-api spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 region: <region> 4", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 1 metadata: name: <template_name> 2 namespace: openshift-cluster-api spec: template: spec: 3 uncompressedUserData: true iamInstanceProfile: # instanceType: m5.large cloudInit: insecureSkipSecretsManager: true ami: id: # subnet: filters: - name: tag:Name values: - # additionalSecurityGroups: - filters: - name: tag:Name values: - #", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: AWSMachineTemplate 4 name: <template_name> 5", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPCluster 1 metadata: name: <cluster_name> 2 spec: controlPlaneEndpoint: 3 host: <control_plane_endpoint_address> port: 6443 network: name: <cluster_name>-network project: <project> 4 region: <region> 5", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 1 metadata: name: <template_name> 2 
namespace: openshift-cluster-api spec: template: spec: 3 rootDeviceType: pd-ssd rootDeviceSize: 128 instanceType: n1-standard-4 image: projects/rhcos-cloud/global/images/rhcos-411-85-202203181601-0-gcp-x86-64 subnet: <cluster_name>-worker-subnet serviceAccounts: email: <service_account_email_address> scopes: - https://www.googleapis.com/auth/cloud-platform additionalLabels: kubernetes-io-cluster-<cluster_name>: owned additionalNetworkTags: - <cluster_name>-worker ipForwarding: Disabled", "apiVersion: cluster.x-k8s.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> 1 namespace: openshift-cluster-api spec: clusterName: <cluster_name> 2 replicas: 1 selector: matchLabels: test: example template: metadata: labels: test: example spec: bootstrap: dataSecretName: worker-user-data 3 clusterName: <cluster_name> infrastructureRef: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: GCPMachineTemplate 4 name: <template_name> 5 failureDomain: <failure_domain> 6", "oc delete machine.machine.openshift.io <machine_name>", "oc delete machine.cluster.x-k8s.io <machine_name>", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc apply -f healthcheck.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api annotations: machine.openshift.io/remediation-strategy: external-baremetal 2 spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 3 machine.openshift.io/cluster-api-machine-type: <role> 4 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 5 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 6 status: \"False\" - type: \"Ready\" timeout: \"300s\" 7 status: \"Unknown\" maxUnhealthy: \"40%\" 8 nodeStartupTimeout: \"10m\" 9", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> remediationTemplate: apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate name: metal3-remediation-template namespace: openshift-machine-api unhealthyConditions: - type: \"Ready\" timeout: \"300s\"", "apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 kind: Metal3RemediationTemplate metadata: name: metal3-remediation-template namespace: openshift-machine-api spec: template: spec: strategy: type: Reboot retryLimit: 1 timeout: 5m0s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/machine_management/index
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 6.40-43 Tues May 29 2018 David Le Sage Updates for 6.4.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/appe-revision_history
Chapter 1. Overview of performance monitoring options
Chapter 1. Overview of performance monitoring options The following are some of the performance monitoring and configuration tools available in Red Hat Enterprise Linux 8: Performance Co-Pilot ( pcp ) is used for monitoring, visualizing, storing, and analyzing system-level performance measurements. It allows the monitoring and management of real-time data, and logging and retrieval of historical data. Red Hat Enterprise Linux 8 provides several tools that can be used from the command line to monitor a system outside run level 5 . The following are the built-in command line tools: top is provided by the procps-ng package. It gives a dynamic view of the processes in a running system. It displays a variety of information, including a system summary and a list of tasks currently being managed by the Linux kernel. ps is provided by the procps-ng package. It captures a snapshot of a select group of active processes. By default, the examined group is limited to processes that are owned by the current user and associated with the terminal where the ps command is executed. Virtual memory statistics ( vmstat ) is provided by the procps-ng package. It provides instant reports of your system's processes, memory, paging, block input/output, interrupts, and CPU activity. System activity reporter ( sar ) is provided by the sysstat package. It collects and reports information about system activity that has occurred so far on the current day. perf uses hardware performance counters and kernel trace-points to track the impact of other commands and applications on a system. bcc-tools is used for BPF Compiler Collection (BCC). It provides over 100 eBPF scripts that monitor kernel activities. For more information about each of these tools, see the man page describing how to use it and what functions it performs. turbostat is provided by the kernel-tools package. It reports on processor topology, frequency, idle power-state statistics, temperature, and power usage on the Intel 64 processors. iostat is provided by the sysstat package. It monitors and reports on system IO device loading to help administrators make decisions about how to balance IO load between physical disks. irqbalance distributes hardware interrupts across processors to improve system performance. ss prints statistical information about sockets, allowing administrators to assess device performance over time. Red Hat recommends using ss over netstat in Red Hat Enterprise Linux 8. numastat is provided by the numactl package. By default, numastat displays per-node NUMA hit and miss system statistics from the kernel memory allocator. Optimal performance is indicated by high numa_hit values and low numa_miss values. numad is an automatic NUMA affinity management daemon. It monitors NUMA topology and resource usage within a system and dynamically improves NUMA resource allocation, management, and therefore system performance. SystemTap monitors and analyzes operating system activities, especially the kernel activities. valgrind analyzes applications by running them on a synthetic CPU and instrumenting existing application code as it is executed. It then prints commentary that clearly identifies each process involved in application execution to a user-specified file, file descriptor, or network socket. It is also useful for finding memory leaks. pqos is provided by the intel-cmt-cat package. It monitors and controls CPU cache and memory bandwidth on recent Intel processors.
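As a quick, minimal illustration (not taken from the chapter above), the following shell commands show how a few of these tools are typically invoked from the command line; the interval and count arguments are arbitrary example values chosen for the sketch:
# Report processes, memory, paging, block I/O, interrupts, and CPU activity five times at one-second intervals
vmstat 1 5
# Report CPU utilization three times at one-second intervals (requires the sysstat package)
sar -u 1 3
# Report extended per-device I/O statistics three times at two-second intervals
iostat -x 2 3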
Additional resources pcp , top , ps , vmstat , sar , perf , iostat , irqbalance , ss , numastat , numad , valgrind , and pqos man pages on your system /usr/share/doc/ directory What exactly is the meaning of value "await" reported by iostat? Red Hat Knowledgebase article Monitoring performance with Performance Co-Pilot
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/overview-of-performance-monitoring-options_monitoring-and-managing-system-status-and-performance
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/getting_started_with_red_hat_build_of_openjdk_11/making-open-source-more-inclusive
Chapter 4. Configuring addresses and queues
Chapter 4. Configuring addresses and queues 4.1. Addresses, queues, and routing types In AMQ Broker, the addressing model comprises three main concepts; addresses , queues , and routing types . An address represents a messaging endpoint. Within the configuration, a typical address is given a unique name, one or more queues, and a routing type. A queue is associated with an address. There can be multiple queues per address. Once an incoming message is matched to an address, the message is sent on to one or more of its queues, depending on the routing type configured. Queues can be configured to be automatically created and deleted. You can also configure an address (and hence its associated queues) as durable . Messages in a durable queue can survive a crash or restart of the broker, as long as the messages in the queue are also persistent. By contrast, messages in a non-durable queue do not survive a crash or restart of the broker, even if the messages themselves are persistent. A routing type determines how messages are sent to the queues associated with an address. In AMQ Broker, you can configure an address with two different routing types, as shown in the table. If you want your messages routed to... Use this routing type... A single queue within the matching address, in a point-to-point manner anycast Every queue within the matching address, in a publish-subscribe manner multicast Note An address must have at least one defined routing type. It is possible to define more than one routing type per address, but this is not recommended. If an address does have both routing types defined, and the client does not show a preference for either one, the broker defaults to the multicast routing type. Additional resources For more information about configuring: Point-to-point messaging using the anycast routing type, see Section 4.3, "Configuring addresses for point-to-point messaging" Publish-subscribe messaging using the multicast routing type, see Section 4.4, "Configuring addresses for publish-subscribe messaging" 4.1.1. Address and queue naming requirements Be aware of the following requirements when you configure addresses and queues: To ensure that a client can connect to a queue, regardless of which wire protocol the client uses, your address and queue names should not include any of the following characters: & :: , ? > The number sign ( # ) and asterisk ( * ) characters are reserved for wildcard expressions and should not be used in address and queue names. For more information, see Section 4.2.1, "AMQ Broker wildcard syntax" . Address and queue names should not include spaces. To separate words in an address or queue name, use the configured delimiter character. The default delimiter character is a period ( . ). For more information, see Section 4.2.1, "AMQ Broker wildcard syntax" . 4.2. Applying address settings to sets of addresses In AMQ Broker, you can apply the configuration specified in an address-setting element to a set of addresses by using a wildcard expression to represent the matching address name. The following sections describe how to use wildcard expressions. 4.2.1. AMQ Broker wildcard syntax AMQ Broker uses a specific syntax for representing wildcards in address settings. Wildcards can also be used in security settings, and when creating consumers. A wildcard expression contains words delimited by a period ( . ). 
The number sign ( # ) and asterisk ( * ) characters also have special meaning and can take the place of a word, as follows: The number sign character means "match any sequence of zero or more words". Use this at the end of your expression. The asterisk character means "match a single word". Use this anywhere within your expression. Matching is not done character by character, but at each delimiter boundary. For example, an address-setting element that is configured to match queues with my in their name would not match with a queue named myqueue . When more than one address-setting element matches an address, the broker overlays configurations, using the configuration of the least specific match as the baseline. Literal expressions are more specific than wildcards, and an asterisk ( * ) is more specific than a number sign ( # ). For example, both my.destination and my.* match the address my.destination . In this case, the broker first applies the configuration found under my.* , since a wildcard expression is less specific than a literal. , the broker overlays the configuration of the my.destination address setting element, which overwrites any configuration shared with my.* . For example, given the following configuration, a queue associated with my.destination has max-delivery-attempts set to 3 and last-value-queue set to false . <address-setting match="my.*"> <max-delivery-attempts>3</max-delivery-attempts> <last-value-queue>true</last-value-queue> </address-setting> <address-setting match="my.destination"> <last-value-queue>false</last-value-queue> </address-setting> The examples in the following table illustrate how wildcards are used to match a set of addresses. Example Description # The default address-setting used in broker.xml . Matches every address. You can continue to apply this catch-all, or you can add a new address-setting for each address or group of addresses as the need arises. news.europe.# Matches news.europe , news.europe.sport , news.europe.politics.fr , but not news.usa or europe . news.* Matches news.europe and news.usa , but not news.europe.sport . news.*.sport Matches news.europe.sport and news.usa.sport , but not news.europe.fr.sport . 4.2.2. Configuring the broker wildcard syntax The following procedure show how to customize the syntax used for wildcard addresses. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <wildcard-addresses> section to the configuration, as in the example below. <configuration> <core> ... <wildcard-addresses> // <enabled>true</enabled> // <delimiter>,</delimiter> // <any-words>@</any-words> // <single-word>USD</single-word> </wildcard-addresses> ... </core> </configuration> enabled When set to true , instruct the broker to use your custom settings. delimiter Provide a custom character to use as the delimiter instead of the default, which is . . any-words The character provided as the value for any-words is used to mean 'match any sequence of zero or more words' and will replace the default # . Use this character at the end of your expression. single-word The character provided as the value for single-word is used to mean 'match a single word' and will replaced the default * . Use this character anywhere within your expression. 4.3. Configuring addresses for point-to-point messaging Point-to-point messaging is a common scenario in which a message sent by a producer has only one consumer. AMQP and JMS message producers and consumers can make use of point-to-point messaging queues, for example. 
To ensure that the queues associated with an address receive messages in a point-to-point manner, you define an anycast routing type for the given address element in your broker configuration. When a message is received on an address using anycast , the broker locates the queue associated with the address and routes the message to it. A consumer might then request to consume messages from that queue. If multiple consumers connect to the same queue, messages are distributed between the consumers equally, provided that the consumers are equally able to handle them. The following figure shows an example of point-to-point messaging. 4.3.1. Configuring basic point-to-point messaging The following procedure shows how to configure an address with a single queue for point-to-point messaging. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Wrap an anycast configuration element around the chosen queue element of an address . Ensure that the values of the name attribute for both the address and queue elements are the same. For example: <configuration ...> <core ...> ... <address name="my.anycast.destination"> <anycast> <queue name="my.anycast.destination"/> </anycast> </address> </core> </configuration> 4.3.2. Configuring point-to-point messaging for multiple queues You can define more than one queue on an address that uses an anycast routing type. The broker distributes messages sent to an anycast address evenly across all associated queues. By specifying a Fully Qualified Queue Name (FQQN), you can connect a client to a specific queue. If more than one consumer connects to the same queue, the broker distributes messages evenly between the consumers. The following figure shows an example of point-to-point messaging using two queues. The following procedure shows how to configure point-to-point messaging for an address that has multiple queues. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Wrap an anycast configuration element around the queue elements in the address element. For example: <configuration ...> <core ...> ... <address name="my.anycast.destination"> <anycast> <queue name="q1"/> <queue name="q2"/> </anycast> </address> </core> </configuration> If you have a configuration such as that shown above mirrored across multiple brokers in a cluster, the cluster can load-balance point-to-point messaging in a way that is opaque to producers and consumers. The exact behavior depends on how the message load balancing policy is configured for the cluster. Additional resources For more information about: Specifying Fully Qualified Queue Names, see Section 4.9, "Specifying a fully qualified queue name" . How to configure message load balancing for a broker cluster, see Section 14.1.1, "How broker clusters balance message load" . 4.4. Configuring addresses for publish-subscribe messaging In a publish-subscribe scenario, messages are sent to every consumer subscribed to an address. JMS topics and MQTT subscriptions are two examples of publish-subscribe messaging. To ensure that the queues associated with an address receive messages in a publish-subscribe manner, you define a multicast routing type for the given address element in your broker configuration. When a message is received on an address with a multicast routing type, the broker routes a copy of the message to each queue associated with the address. To reduce the overhead of copying, each queue is sent only a reference to the message, and not a full copy. 
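In JMS terms, this publish-subscribe pattern corresponds to sending to a topic: every active subscriber has its own subscription queue under the multicast address and receives its own copy of each message. The sketch below is a minimal illustration, assuming a broker at the default port 61616, an address named my.multicast.destination configured with the multicast routing type as described in the procedure that follows, and the AMQ Core Protocol JMS client, which provides the ActiveMQConnectionFactory class.

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class PublishSubscribeExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("my.multicast.destination");

            // Create the subscribers before publishing. Each consumer is backed by its
            // own subscription queue, so both receive a copy of every message.
            MessageConsumer subscriberA = session.createConsumer(topic);
            MessageConsumer subscriberB = session.createConsumer(topic);
            connection.start();

            MessageProducer publisher = session.createProducer(topic);
            publisher.send(session.createTextMessage("price-update"));

            System.out.println("A received: " + ((TextMessage) subscriberA.receive(5000)).getText());
            System.out.println("B received: " + ((TextMessage) subscriberB.receive(5000)).getText());
        }
    }
}

Because these are non-durable subscriptions, only messages published while the subscribers are attached are delivered; durable subscription queues, covered later in this chapter, allow messages to be retained for subscribers that are temporarily offline.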
The following figure shows an example of publish-subscribe messaging. The following procedure shows how to configure an address for publish-subscribe messaging. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add an empty multicast configuration element to the address. <configuration ...> <core ...> ... <address name="my.multicast.destination"> <multicast/> </address> </core> </configuration> (Optional) Add one or more queue elements to the address and wrap the multicast element around them. This step is typically not needed since the broker automatically creates a queue for each subscription requested by a client. <configuration ...> <core ...> ... <address name="my.multicast.destination"> <multicast> <queue name="client123.my.multicast.destination"/> <queue name="client456.my.multicast.destination"/> </multicast> </address> </core> </configuration> 4.5. Configuring an address for both point-to-point and publish-subscribe messaging You can also configure an address with both point-to-point and publish-subscribe semantics. Configuring an address that uses both point-to-point and publish-subscribe semantics is not typically recommended. However, it can be useful when you want, for example, a JMS queue named orders and a JMS topic also named orders . The different routing types make the addresses appear to be distinct for client connections. In this situation, messages sent by a JMS queue producer use the anycast routing type. Messages sent by a JMS topic producer use the multicast routing type. When a JMS topic consumer connects to the broker, it is attached to its own subscription queue. A JMS queue consumer, however, is attached to the anycast queue. The following figure shows an example of point-to-point and publish-subscribe messaging used together. The following procedure shows how to configure an address for both point-to-point and publish-subscribe messaging. Note The behavior in this scenario is dependent on the protocol being used. For JMS, there is a clear distinction between topic and queue producers and consumers, which makes the logic straightforward. Other protocols like AMQP do not make this distinction. A message being sent via AMQP is routed by both anycast and multicast and consumers default to anycast . For more information, see Chapter 3, Configuring messaging protocols in network connections . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Wrap an anycast configuration element around the queue elements in the address element. For example: <configuration ...> <core ...> ... <address name="orders"> <anycast> <queue name="orders"/> </anycast> </address> </core> </configuration> Add an empty multicast configuration element to the address. <configuration ...> <core ...> ... <address name="orders"> <anycast> <queue name="orders"/> </anycast> <multicast/> </address> </core> </configuration> Note Typically, the broker creates subscription queues on demand, so there is no need to list specific queue elements inside the multicast element. 4.6. Adding a routing type to an acceptor configuration Normally, if a message is received by an address that uses both anycast and multicast , one of the anycast queues receives the message and all of the multicast queues. However, clients can specify a special prefix when connecting to an address to specify whether to connect using anycast or multicast . 
The prefixes are custom values that are designated using the anycastPrefix and multicastPrefix parameters within the URL of an acceptor in the broker configuration. The following procedure shows how to configure prefixes for a given acceptor. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given acceptor, to configure an anycast prefix, add anycastPrefix to the configured URL. Set a custom value. For example: <configuration ...> <core ...> ... <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=AMQP;anycastPrefix=anycast://</acceptor> </acceptors> ... </core> </configuration> Based on the preceding configuration, the acceptor is configured to use anycast:// for the anycast prefix. Client code can specify anycast://<my.destination>/ if the client needs to send a message to only one of the anycast queues. For a given acceptor, to configure a multicast prefix, add multicastPrefix to the configured URL. Set a custom value. For example: <configuration ...> <core ...> ... <acceptors> <!-- Acceptor for every supported protocol --> <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=AMQP;multicastPrefix=multicast://</acceptor> </acceptors> ... </core> </configuration> Based on the preceding configuration, the acceptor is configured to use multicast:// for the multicast prefix. Client code can specify multicast://<my.destination>/ if the client needs the message sent to only the multicast queues. 4.7. Configuring subscription queues In most cases, it is not necessary to manually create subscription queues because protocol managers create subscription queues automatically when clients first request to subscribe to an address. See Section 4.8.3, "Protocol managers and addresses" for more information. For durable subscriptions, the generated queue name is usually a concatenation of the client ID and the address. The following sections show how to manually create subscription queues, when required. 4.7.1. Configuring a durable subscription queue When a queue is configured as a durable subscription, the broker saves messages for any inactive subscribers and delivers them to the subscribers when they reconnect. Therefore, a client is guaranteed to receive each message delivered to the queue after subscribing to it. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the durable configuration element to a chosen queue. Set a value of true . <configuration ...> <core ...> ... <address name="my.durable.address"> <multicast> <queue name="q1"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration> Note Because queues are durable by default, including the durable element and setting the value to true is not strictly necessary to create a durable queue. However, explicitly including the element enables you to later change the behavior of the queue to non-durable, if necessary. 4.7.2. Configuring a non-shared durable subscription queue The broker can be configured to prevent more than one consumer from connecting to a queue at any one time. Therefore, subscriptions to queues configured this way are regarded as "non-shared". Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the durable configuration element to each chosen queue. Set a value of true . <configuration ...> <core ...> ... 
<address name="my.non.shared.durable.address"> <multicast> <queue name="orders1"> <durable>true</durable> </queue> <queue name="orders2"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration> Note Because queues are durable by default, including the durable element and setting the value to true is not strictly necessary to create a durable queue. However, explicitly including the element enables you to later change the behavior of the queue to non-durable, if necessary. Add the max-consumers attribute to each chosen queue. Set a value of 1 . <configuration ...> <core ...> ... <address name="my.non.shared.durable.address"> <multicast> <queue name="orders1" max-consumers="1"> <durable>true</durable> </queue> <queue name="orders2" max-consumers="1"> <durable>true</durable> </queue> </multicast> </address> </core> </configuration> 4.7.3. Configuring a non-durable subscription queue Non-durable subscriptions are usually managed by the relevant protocol manager, which creates and deletes temporary queues. However, if you want to manually create a queue that behaves like a non-durable subscription queue, you can use the purge-on-no-consumers attribute on the queue. When purge-on-no-consumers is set to true , the queue does not start receiving messages until a consumer is connected. In addition, when the last consumer is disconnected from the queue, the queue is purged (that is, its messages are removed). The queue does not receive any further messages until a new consumer is connected to the queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add the purge-on-no-consumers attribute to each chosen queue. Set a value of true . <configuration ...> <core ...> ... <address name="my.non.durable.address"> <multicast> <queue name="orders1" purge-on-no-consumers="true"/> </multicast> </address> </core> </configuration> 4.8. Creating and deleting addresses and queues automatically You can configure the broker to automatically create addresses and queues, and to delete them after they are no longer in use. This saves you from having to pre-configure each address before a client can connect to it. 4.8.1. Configuration options for automatic queue creation and deletion The following table lists the configuration elements available when configuring an address-setting element to automatically create and delete queues and addresses. If you want the address-setting to... Add this configuration... Create addresses when a client sends a message to or attempts to consume a message from a queue mapped to an address that does not exist. auto-create-addresses Create a queue when a client sends a message to or attempts to consume a message from a queue. auto-create-queues Delete an automatically created address when it no longer has any queues. auto-delete-addresses Delete an automatically created queue when the queue has 0 consumers and 0 messages. auto-delete-queues Use a specific routing type if the client does not specify one. default-address-routing-type 4.8.2. Configuring automatic creation and deletion of addresses and queues The following procedure shows how to configure automatic creation and deletion of addresses and queues. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Configure an address-setting for automatic creation and deletion. The following example uses all of the configuration elements mentioned in the table. <configuration ...> <core ...> ... 
<address-settings> <address-setting match="activemq.#"> <auto-create-addresses>true</auto-create-addresses> <auto-delete-addresses>true</auto-delete-addresses> <auto-create-queues>true</auto-create-queues> <auto-delete-queues>true</auto-delete-queues> <default-address-routing-type>ANYCAST</default-address-routing-type> </address-setting> </address-settings> ... </core> </configuration> address-setting The configuration of the address-setting element is applied to any address or queue that matches the wildcard address activemq.# . auto-create-addresses When a client requests to connect to an address that does not yet exist, the broker creates the address. auto-delete-addresses An automatically created address is deleted when it no longer has any queues associated with it. auto-create-queues When a client requests to connect to a queue that does not yet exist, the broker creates the queue. auto-delete-queues An automatically created queue is deleted when it no longer has any consumers or messages. default-address-routing-type If the client does not specify a routing type when connecting, the broker uses ANYCAST when delivering messages to an address. The default value is MULTICAST . Additional resources For more information about: The wildcard syntax that you can use when configuring addresses, see Section 4.2, "Applying address settings to sets of addresses" . Routing types, see Section 4.1, "Addresses, queues, and routing types" . 4.8.3. Protocol managers and addresses A component called a protocol manager maps protocol-specific concepts to concepts used in the AMQ Broker address model: queues and routing types. In certain situations, a protocol manager might automatically create queues on the broker. For example, when a client sends an MQTT subscription packet with the addresses /house/room1/lights and /house/room2/lights , the MQTT protocol manager understands that the two addresses require multicast semantics. Therefore, the protocol manager first looks to ensure that multicast is enabled for both addresses. If not, it attempts to dynamically create them. If successful, the protocol manager then creates special subscription queues for each subscription requested by the client. Each protocol behaves slightly differently. The table below describes what typically happens when subscribe frames to various types of queue are requested. If the queue is of this type... The typical action for a protocol manager is to... Durable subscription queue Look for the appropriate address and ensure that multicast semantics is enabled. It then creates a special subscription queue with the client ID and the address as its name and multicast as its routing type. The special name allows the protocol manager to quickly identify the required client subscription queues should the client disconnect and reconnect at a later date. When the client unsubscribes, the queue is deleted. Temporary subscription queue Look for the appropriate address and ensure that multicast semantics is enabled. It then creates a queue with a random (UUID) name under this address with multicast routing type. When the client disconnects, the queue is deleted. Point-to-point queue Look for the appropriate address and ensure that anycast routing type is enabled. If it is, it aims to locate a queue with the same name as the address. If it does not exist, it looks for the first queue available. If this does not exist, then it automatically creates the queue (providing auto create is enabled). The queue consumer is bound to this queue.
If the queue is auto created, it is automatically deleted once there are no consumers and no messages in it. 4.9. Specifying a fully qualified queue name Internally, the broker maps a client's request for an address to specific queues. The broker decides on behalf of the client to which queues to send messages, or from which queue to receive messages. However, more advanced use cases might require that the client specifies a queue name directly. In these situations, the client can use a fully qualified queue name (FQQN). An FQQN includes both the address name and the queue name, separated by a :: . The following procedure shows how to specify an FQQN when connecting to an address with multiple queues. Prerequisites You have an address configured with two or more queues, as shown in the example below. <configuration ...> <core ...> ... <addresses> <address name="my.address"> <anycast> <queue name="q1" /> <queue name="q2" /> </anycast> </address> </addresses> </core> </configuration> Procedure In the client code, use both the address name and the queue name when requesting a connection from the broker. Use two colons, :: , to separate the names. For example: String FQQN = "my.address::q1"; Queue q1 = session.createQueue(FQQN); MessageConsumer consumer = session.createConsumer(q1); 4.10. Configuring sharded queues A common pattern for processing messages across a queue, where only partial ordering is required, is to use queue sharding . This means that you define an anycast address that acts as a single logical queue, but which is backed by many underlying physical queues. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add an address element and set the name attribute. For example: <configuration ...> <core ...> ... <addresses> <address name="my.sharded.address"></address> </addresses> </core> </configuration> Add the anycast routing type and include the desired number of sharded queues. In the example below, the queues q1 , q2 , and q3 are added as anycast destinations. <configuration ...> <core ...> ... <addresses> <address name="my.sharded.address"> <anycast> <queue name="q1" /> <queue name="q2" /> <queue name="q3" /> </anycast> </address> </addresses> </core> </configuration> Based on the preceding configuration, messages sent to my.sharded.address are distributed equally across q1 , q2 , and q3 . Clients are able to connect directly to a specific physical queue when using a Fully Qualified Queue Name (FQQN), and receive messages sent to that specific queue only. To tie particular messages to a particular queue, clients can specify a message group for each message. The broker routes grouped messages to the same queue, and one consumer processes them all. Additional resources For more information about: Fully Qualified Queue Names, see Section 4.9, "Specifying a fully qualified queue name" Message grouping, see Using message groups in the AMQ Core Protocol JMS documentation. 4.11. Configuring last value queues A last value queue is a type of queue that discards messages in the queue when a newer message with the same last value key value is placed in the queue. Through this behavior, last value queues retain only the last values for messages of the same key. A simple use case for a last value queue is for monitoring stock prices, where only the latest value for a particular stock is of interest. Note If a message without a configured last value key is sent to a last value queue, the broker handles this message as a "normal" message.
Such messages are not purged from the queue when a new message with a configured last value key arrives. You can configure last value queues individually, or for all of the queues associated with a set of addresses. The following procedures show how to configure last value queues in these ways. 4.11.1. Configuring last value queues individually The following procedure shows to configure last value queues individually. Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the last-value-key key and specify a custom value. For example: <address name="my.address"> <multicast> <queue name="prices1" last-value-key="stock_ticker"/> </multicast> </address> Alternatively, you can configure a last value queue that uses the default last value key name of _AMQ_LVQ_NAME . To do this, add the last-value key to a given queue. Set the value to true . For example: <address name="my.address"> <multicast> <queue name="prices1" last-value="true"/> </multicast> </address> 4.11.2. Configuring last value queues for addresses The following procedure shows to configure last value queues for an address or set of addresses. Open the <broker_instance_dir> /etc/broker.xml configuration file. In the address-setting element, for a matching address, add default-last-value-key . Specify a custom value. For example: <address-setting match="lastValue"> <default-last-value-key>stock_ticker</default-last-value-key> </address-setting> Based on the preceding configuration, all queues associated with the lastValue address use a last value key of stock_ticker . By default, the value of default-last-value-key is not set. To configure last value queues for a set of addresses, you can specify an address wildcard. For example: <address-setting match="lastValue.*"> <default-last-value-key>stock_ticker</default-last-value-key> </address-setting> Alternatively, you can configure all queues associated with an address or set of addresses to use the default last value key name of _AMQ_LVQ_NAME . To do this, add default-last-value-queue instead of default-last-value-key . Set the value to true . For example: <address-setting match="lastValue"> <default-last-value-queue>true</default-last-value-queue> </address-setting> Additional resources For more information about the wildcard syntax that you can use when configuring addresses, see Section 4.2, "Applying address settings to sets of addresses" . 4.11.3. Example of last value queue behavior This example shows the behavior of a last value queue. In your broker.xml configuration file, suppose that you have added configuration that looks like the following: <address name="my.address"> <multicast> <queue name="prices1" last-value-key="stock_ticker"/> </multicast> </address> The preceding configuration creates a queue called prices1 , with a last value key of stock_ticker . Now, suppose that a client sends two messages. Each message has the same value of ATN for the property stock_ticker . Each message has a different value for a property called stock_price . Each message is sent to the same queue, prices1 . 
TextMessage message = session.createTextMessage("First message with last value property set"); message.setStringProperty("stock_ticker", "ATN"); message.setStringProperty("stock_price", "36.83"); producer.send(message); TextMessage message = session.createTextMessage("Second message with last value property set"); message.setStringProperty("stock_ticker", "ATN"); message.setStringProperty("stock_price", "37.02"); producer.send(message); When two messages with the same value for the stock_ticker last value key (in this case, ATN ) arrive to the prices1 queue , only the latest message remains in the queue, with the first message being purged. At the command line, you can enter the following lines to validate this behavior: TextMessage messageReceived = (TextMessage)messageConsumer.receive(5000); System.out.format("Received message: %s\n", messageReceived.getText()); In this example, the output you see is the second message, since both messages use the same value for the last value key and the second message was received in the queue after the first. 4.11.4. Enforcing non-destructive consumption for last value queues When a consumer connects to a queue, the normal behavior is that messages sent to that consumer are acquired exclusively by the consumer. When the consumer acknowledges receipt of the messages, the broker removes the messages from the queue. As an alternative to the normal consumption behaviour, you can configure a queue to enforce non-destructive consumption. In this case, when a queue sends a message to a consumer, the message can still be received by other consumers. In addition, the message remains in the queue even when a consumer has consumed it. When you enforce this non-destructive consumption behavior, the consumers are known as queue browsers . Enforcing non-destructive consumption is a useful configuration for last value queues, because it ensures that the queue always holds the latest value for a particular last value key. The following procedure shows how to enforce non-destructive consumption for a last value queue. Prerequisites You have already configured last-value queues individually, or for all queues associated with an address or set of addresses. For more information, see: Section 4.11.1, "Configuring last value queues individually" Section 4.11.2, "Configuring last value queues for addresses" Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. If you previously configured a queue individually as a last value queue, add the non-destructive key. Set the value to true . For example: <address name="my.address"> <multicast> <queue name="orders1" last-value-key="stock_ticker" non-destructive="true" /> </multicast> </address> If you previously configured an address or set of addresses for last value queues, add the default-non-destructive key. Set the value to true . For example: <address-setting match="lastValue"> <default-last-value-key>stock_ticker </default-last-value-key> <default-non-destructive>true</default-non-destructive> </address-setting> Note By default, the value of default-non-destructive is false . 4.12. Moving expired messages to an expiry address For a queue other than a last value queue, if you have only non-destructive consumers, the broker never deletes messages from the queue, causing the queue size to increase over time. To prevent this unconstrained growth in queue size, you can configure when messages expire and specify an address to which the broker moves expired messages. 4.12.1. 
Configuring message expiry The following procedure shows how to configure message expiry. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In the core element, set the message-expiry-scan-period to specify how frequently the broker scans for expired messages. <configuration ...> <core ...> ... <message-expiry-scan-period>1000</message-expiry-scan-period> ... Based on the preceding configuration, the broker scans queues for expired messages every 1000 milliseconds. In the address-setting element for a matching address or set of addresses, specify an expiry address. Also, set a message expiration time. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> <expiry-delay>10</expiry-delay> ... </address-setting> ... <address-settings> <configuration ...> expiry-address Expiry address for the matching address or addresses. In the preceding example, the broker sends expired messages for the stocks address to an expiry address called ExpiryAddress . expiry-delay Expiration time, in milliseconds, that the broker applies to messages that are using the default expiration time. By default, messages have an expiration time of 0 , meaning that they don't expire. For messages with an expiration time greater than the default, expiry-delay has no effect. For example, suppose you set expiry-delay on an address to 10 , as shown in the preceding example. If a message with the default expiration time of 0 arrives to a queue at this address, then the broker changes the expiration time of the message from 0 to 10 . However, if another message that is using an expiration time of 20 arrives, then its expiration time is unchanged. If you set expiry-delay to -1 , this feature is disabled. By default, expiry-delay is set to -1 . Alternatively, instead of specifying a value for expiry-delay , you can specify minimum and maximum expiry delay values. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> <min-expiry-delay>10</min-expiry-delay> <max-expiry-delay>100</max-expiry-delay> ... </address-setting> ... <address-settings> <configuration ...> min-expiry-delay Minimum expiration time, in milliseconds, that the broker applies to messages. max-expiry-delay Maximum expiration time, in milliseconds, that the broker applies to messages. The broker applies the values of min-expiry-delay and max-expiry-delay as follows: For a message with the default expiration time of 0 , the broker sets the expiration time to the specified value of max-expiry-delay . If you have not specified a value for max-expiry-delay , the broker sets the expiration time to the specified value of min-expiry-delay . If you have not specified a value for min-expiry-delay , the broker does not change the expiration time of the message. For a message with an expiration time above the value of max-expiry-delay , the broker sets the expiration time to the specified value of max-expiry-delay . For a message with an expiration time below the value of min-expiry-delay , the broker sets the expiration time to the specified value of min-expiry-delay . For a message with an expiration between the values of min-expiry-delay and max-expiry-delay , the broker does not change the expiration time of the message. 
If you specify a value for expiry-delay (that is, other than the default value of -1 ), this overrides any values that you specify for min-expiry-delay and max-expiry-delay . The default value for both min-expiry-delay and max-expiry-delay is -1 (that is, disabled). In the addresses element of your configuration file, configure the address previously specified for expiry-address . Define a queue at this address. For example: <addresses> ... <address name="ExpiryAddress"> <anycast> <queue name="ExpiryQueue"/> </anycast> </address> ... </addresses> The preceding example configuration associates an expiry queue, ExpiryQueue , with the expiry address, ExpiryAddress . 4.12.2. Creating expiry resources automatically A common use case is to segregate expired messages according to their original addresses. For example, you might choose to route expired messages from an address called stocks to an expiry queue called EXP.stocks . Likewise, you might route expired messages from an address called orders to an expiry queue called EXP.orders . This type of routing pattern makes it easy to track, inspect, and administer expired messages. However, a pattern such as this is difficult to implement in an environment that uses mainly automatically-created addresses and queues. In this type of environment, an administrator does not want the extra effort required to manually create addresses and queues to hold expired messages. As a solution, you can configure the broker to automatically create resources (that is, addresses and queues) to handle expired messages for a given address or set of addresses. The following procedure shows an example. Prerequisites You have already configured an expiry address for a given address or set of addresses. For more information, see Section 4.12.1, "Configuring message expiry" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Locate the <address-setting> element that you previously added to the configuration file to define an expiry address for a matching address or set of addresses. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> ... </address-setting> ... </address-settings> </core> </configuration> In the <address-setting> element, add configuration items that instruct the broker to automatically create expiry resources (that is, addresses and queues) and how to name these resources. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="stocks"> ... <expiry-address>ExpiryAddress</expiry-address> <auto-create-expiry-resources>true</auto-create-expiry-resources> <expiry-queue-prefix>EXP.</expiry-queue-prefix> <expiry-queue-suffix></expiry-queue-suffix> ... </address-setting> ... </address-settings> </core> </configuration> auto-create-expiry-resources Specifies whether the broker automatically creates an expiry address and queue to receive expired messages. The default value is false . If the parameter value is set to true , the broker automatically creates an <address> element that defines an expiry address and an associated expiry queue. The name value of the automatically-created <address> element matches the name value specified for <expiry-address> . The automatically-created expiry queue has the multicast routing type. By default, the broker names the expiry queue to match the address to which expired messages were originally sent, for example, stocks .
The broker also defines a filter for the expiry queue that uses the _AMQ_ORIG_ADDRESS property. This filter ensures that the expiry queue receives only messages sent to the corresponding original address. expiry-queue-prefix Prefix that the broker applies to the name of the automatically-created expiry queue. The default value is EXP. When you define a prefix value or keep the default value, the name of the expiry queue is a concatenation of the prefix and the original address, for example, EXP.stocks . expiry-queue-suffix Suffix that the broker applies to the name of an automatically-created expiry queue. The default value is not defined (that is, the broker applies no suffix). You can directly access the expiry queue using either the queue name by itself (for example, when using the AMQ Broker Core Protocol JMS client) or using the fully qualified queue name (for example, when using another JMS client). Note Because the expiry address and queue are automatically created, any address settings related to deletion of automatically-created addresses and queues also apply to these expiry resources. Additional resources For more information about address settings used to configure automatic deletion of automatically-created addresses and queues, see Section 4.8.2, "Configuring automatic creation and deletion of addresses and queues" . 4.13. Moving undelivered messages to a dead letter address If delivery of a message to a client is unsuccessful, you might not want the broker to make ongoing attempts to deliver the message. To prevent infinite delivery attempts, you can define a dead letter address and one or more associated dead letter queues . After a specified number of delivery attempts, the broker removes an undelivered message from its original queue and sends the message to the configured dead letter address. A system administrator can later consume undelivered messages from a dead letter queue to inspect the messages. If you do not configure a dead letter address for a given queue, the broker permanently removes undelivered messages from the queue after the specified number of delivery attempts. Undelivered messages that are consumed from a dead letter queue have the following properties: _AMQ_ORIG_ADDRESS String property that specifies the original address of the message _AMQ_ORIG_QUEUE String property that specifies the original queue of the message 4.13.1. Configuring a dead letter address The following procedure shows how to configure a dead letter address and an associated dead letter queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In an <address-setting> element that matches your queue name(s), set values for the dead letter address name and the maximum number of delivery attempts. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="exampleQueue"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> </address-setting> ... </address-settings> </core> </configuration> match Address to which the broker applies the configuration in this address-setting section. You can specify a wildcard expression for the match attribute of the <address-setting> element. Using a wildcard expression is useful if you want to associate the dead letter settings configured in the <address-setting> element with a matching set of addresses. dead-letter-address Name of the dead letter address.
In this example, the broker moves undelivered messages from the queue exampleQueue to the dead letter address, DLA . max-delivery-attempts Maximum number of delivery attempts made by the broker before it moves an undelivered message to the configured dead letter address. In this example, the broker moves undelivered messages to the dead letter address after three unsuccessful delivery attempts. The default value is 10 . If you want the broker to make an infinite number of redelivery attempts, specify a value of -1 . In the addresses section, add an address element for the dead letter address, DLA . To associate a dead letter queue with the dead letter address, specify a name value for queue . For example: <configuration ...> <core ...> ... <addresses> <address name="DLA"> <anycast> <queue name="DLQ" /> </anycast> </address> ... </addresses> </core> </configuration> In the preceding configuration, you associate a dead letter queue named DLQ with the dead letter address, DLA . Additional resources For more information about using wildcards in address settings, see Section 4.2, "Applying address settings to sets of addresses" . 4.13.2. Creating dead letter queues automatically A common use case is to segregate undelivered messages according to their original addresses. For example, you might choose to route undelivered messages from an address called stocks to a dead letter address called DLA.stocks that has an associated dead letter queue called DLQ.stocks . Likewise, you might route undelivered messages from an address called orders to a dead letter address called DLA.orders . This type of routing pattern makes it easy to track, inspect, and administer undelivered messages. However, a pattern such as this is difficult to implement in an environment that uses mainly automatically-created addresses and queues. It is likely that a system administrator for this type of environment does not want the additional effort required to manually create addresses and queues to hold undelivered messages. As a solution, you can configure the broker to automatically create addresses and queues to handle undelivered messages, as shown in the procedure that follows. Prerequisites You have already configured a dead letter address for a queue or set of queues. For more information, see Section 4.13.1, "Configuring a dead letter address" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Locate the <address-setting> element that you previously added to define a dead letter address for a matching queue or set of queues. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="exampleQueue"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> </address-setting> ... </address-settings> </core> </configuration> In the <address-setting> element, add configuration items that instruct the broker to automatically create dead letter resources (that is, addresses and queues) and how to name these resources. For example: <configuration ...> <core ...> ... <address-settings> ... <address-setting match="exampleQueue"> <dead-letter-address>DLA</dead-letter-address> <max-delivery-attempts>3</max-delivery-attempts> <auto-create-dead-letter-resources>true</auto-create-dead-letter-resources> <dead-letter-queue-prefix>DLQ.</dead-letter-queue-prefix> <dead-letter-queue-suffix></dead-letter-queue-suffix> </address-setting> ...
</address-settings> </core> </configuration> auto-create-dead-letter-resources Specifies whether the broker automatically creates a dead letter address and queue to receive undelivered messages. The default value is false . If auto-create-dead-letter-resources is set to true , the broker automatically creates an <address> element that defines a dead letter address and an associated dead letter queue. The name of the automatically-created <address> element matches the name value that you specify for <dead-letter-address> . The dead letter queue that the broker defines in the automatically-created <address> element has the multicast routing type . By default, the broker names the dead letter queue to match the original address of the undelivered message, for example, stocks . The broker also defines a filter for the dead letter queue that uses the _AMQ_ORIG_ADDRESS property. This filter ensures that the dead letter queue receives only messages sent to the corresponding original address. dead-letter-queue-prefix Prefix that the broker applies to the name of an automatically-created dead letter queue. The default value is DLQ. When you define a prefix value or keep the default value, the name of the dead letter queue is a concatenation of the prefix and the original address, for example, DLQ.stocks . dead-letter-queue-suffix Suffix that the broker applies to an automatically-created dead letter queue. The default value is not defined (that is, the broker applies no suffix). 4.14. Annotations and properties on expired or undelivered AMQP messages Before the broker moves an expired or undelivered AMQP message to an expiry or dead letter queue that you have configured, the broker applies annotations and properties to the message. A client can create a filter based on these properties or annotations, to select particular messages to consume from the expiry or dead letter queue. Note The properties that the broker applies are internal properties. These properties are not exposed to clients for regular use, but can be specified by a client in a filter. The following table shows the annotations and internal properties that the broker applies to expired or undelivered AMQP messages. Annotation name Internal property name Description x-opt-ORIG-MESSAGE-ID _AMQ_ORIG_MESSAGE_ID Original message ID, before the message was moved to an expiry or dead letter queue. x-opt-ACTUAL-EXPIRY _AMQ_ACTUAL_EXPIRY Message expiry time, specified as the number of milliseconds since the last epoch started. x-opt-ORIG-QUEUE _AMQ_ORIG_QUEUE Original queue name of the expired or undelivered message. x-opt-ORIG-ADDRESS _AMQ_ORIG_ADDRESS Original address name of the expired or undelivered message. Additional resources For an example of configuring an AMQP client to filter AMQP messages based on annotations, see Section 13.3, "Filtering AMQP Messages Based on Properties on Annotations" . 4.15. Disabling queues If you manually define a queue in your broker configuration, the queue is enabled by default. However, there might be a case where you want to define a queue so that clients can subscribe to it, but you are not ready to use the queue for message routing. Alternatively, there might be a situation where you want to stop message flow to a queue, but still keep clients bound to the queue. In these cases, you can disable the queue. The following example shows how to disable a queue that you have defined in your broker configuration.
Prerequisites You should be familiar with how to define an address and associated queue in your broker configuration. For more information, see Chapter 4, Configuring addresses and queues . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a queue that you previously defined, add the enabled attribute. To disable the queue, set the value of this attribute to false . For example: <addresses> <address name="orders"> <multicast> <queue name="orders" enabled="false"/> </multicast> </address> </addresses> The default value of the enabled property is true . When you set the value to false , message routing to the queue is disabled. Note If you disable all queues on an address, any messages sent to that address are silently dropped. 4.16. Limiting the number of consumers connected to a queue Limit the number of consumers connected to a particular queue by using the max-consumers attribute. Create an exclusive consumer by setting max-consumers flag to 1 . The default value is -1 , which sets an unlimited number of consumers. The following procedure shows how to set a limit on the number of consumers that can connect to a queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the max-consumers key and set a value. <configuration ...> <core ...> ... <addresses> <address name="foo"> <anycast> <queue name="q3" max-consumers="20"/> </anycast> </address> </addresses> </core> </configuration> Based on the preceding configuration, only 20 consumers can connect to queue q3 at the same time. To create an exclusive consumer, set max-consumers to 1 . <configuration ...> <core ...> ... <address name="foo"> <anycast> <queue name="q3" max-consumers="1"/> </anycast> </address> </core> </configuration> To allow an unlimited number of consumers, set max-consumers to -1 . <configuration ...> <core ...> ... <address name="foo"> <anycast> <queue name="q3" max-consumers="-1"/> </anycast> </address> </core> </configuration> 4.17. Configuring exclusive queues Exclusive queues are special queues that route all messages to only one consumer at a time. This configuration is useful when you want all messages to be processed serially by the same consumer. If there are multiple consumers for a queue, only one consumer will receive messages. If that consumer disconnects from the queue, another consumer is chosen. 4.17.1. Configuring exclusive queues individually The following procedure shows to how to individually configure a given queue as exclusive. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For a given queue, add the exclusive key. Set the value to true . <configuration ...> <core ...> ... <address name="my.address"> <multicast> <queue name="orders1" exclusive="true"/> </multicast> </address> </core> </configuration> 4.17.2. Configuring exclusive queues for addresses The following procedure shows how to configure an address or set of addresses so that all associated queues are exclusive. Open the <broker_instance_dir> /etc/broker.xml configuration file. In the address-setting element, for a matching address, add the default-exclusive-queue key. Set the value to true . <address-setting match="myAddress"> <default-exclusive-queue>true</default-exclusive-queue> </address-setting> Based on the preceding configuration, all queues associated with the myAddress address are exclusive. By default, the value of default-exclusive-queue is false . 
To configure exclusive queues for a set of addresses, you can specify an address wildcard. For example: <address-setting match="myAddress.*"> <default-exclusive-queue>true</default-exclusive-queue> </address-setting> Additional resources For more information about the wildcard syntax that you can use when configuring addresses, see Section 4.2, "Applying address settings to sets of addresses" . 4.18. Applying specific address settings to temporary queues When using JMS, for example, the broker creates temporary queues by assigning a universally unique identifier (UUID) as both the address name and the queue name. The default <address-setting match="#"> applies the configured address settings to all queues, including temporary ones. If you want to apply specific address settings to temporary queues only, you can optionally specify a temporary-queue-namespace as described below. You can then specify address settings that match the namespace and the broker applies those settings to all temporary queues. When a temporary queue is created and a temporary queue namespace exists, the broker prepends the temporary-queue-namespace value and the configured delimiter (default . ) to the address name. It uses that to reference the matching address settings. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a temporary-queue-namespace value. For example: <temporary-queue-namespace>temp-example</temporary-queue-namespace> Add an address-setting element with a match value that corresponds to the temporary queues namespace. For example: <address-settings> <address-setting match="temp-example.#"> <enable-metrics>false</enable-metrics> </address-setting> </address-settings> This example disables metrics in all temporary queues created by the broker. Note Specifying a temporary queue namespace does not affect temporary queues. For example, the namespace does not change the names of temporary queues. The namespace is used to reference the temporary queues. Additional resources For more information about using wildcards in address settings, see Section 4.2, "Applying address settings to sets of addresses" . 4.19. Configuring ring queues Generally, queues in AMQ Broker use first-in, first-out (FIFO) semantics. This means that the broker adds messages to the tail of the queue and removes them from the head. A ring queue is a special type of queue that holds a specified, fixed number of messages. The broker maintains the fixed queue size by removing the message at the head of the queue when a new message arrives but the queue already holds the specified number of messages. For example, consider a ring queue configured with a size of 3 and a producer that sequentially sends messages A , B , C , and D . Once message C arrives to the queue, the number of messages in the queue has reached the configured ring size. At this point, message A is at the head of the queue, while message C is at the tail. When message D arrives to the queue, the broker adds the message to the tail of the queue. To maintain the fixed queue size, the broker removes the message at the head of the queue (that is, message A ). Message B is now at the head of the queue. 4.19.1. Configuring ring queues The following procedure shows how to configure a ring queue. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. To define a default ring size for all queues on matching addresses that don't have an explicit ring size set, specify a value for default-ring-size in the address-setting element. 
For example: <address-settings> <address-setting match="ring.#"> <default-ring-size>3</default-ring-size> </address-setting> </address-settings> The default-ring-size parameter is especially useful for defining the default size of auto-created queues. The default value of default-ring-size is -1 (that is, no size limit). To define a ring size on a specific queue, add the ring-size key to the queue element. Specify a value. For example: <addresses> <address name="myRing"> <anycast> <queue name="myRing" ring-size="5" /> </anycast> </address> </addresses> Note You can update the value of ring-size while the broker is running. The broker dynamically applies the update. If the new ring-size value is lower than the value, the broker does not immediately delete messages from the head of the queue to enforce the new size. New messages sent to the queue still force the deletion of older messages, but the queue does not reach its new, reduced size until it does so naturally, through the normal consumption of messages by clients. 4.19.2. Troubleshooting ring queues This section describes situations in which the behavior of a ring queue appears to differ from its configuration. In-delivery messages and rollbacks When a message is in delivery to a consumer, the message is in an "in-between" state, where the message is technically no longer on the queue, but is also not yet acknowledged. A message remains in an in-delivery state until acknowledged by the consumer. Messages that remain in an in-delivery state cannot be removed from the ring queue. Because the broker cannot remove in-delivery messages, a client can send more messages to a ring queue than the ring size configuration seems to allow. For example, consider this scenario: A producer sends three messages to a ring queue configured with ring-size="3" . All messages are immediately dispatched to a consumer. At this point, messageCount = 3 and deliveringCount = 3 . The producer sends another message to the queue. The message is then dispatched to the consumer. Now, messageCount = 4 and deliveringCount = 4 . The message count of 4 is greater than the configured ring size of 3 . However, the broker is obliged to allow this situation because it cannot remove the in-delivery messages from the queue. Now, suppose that the consumer is closed without acknowledging any of the messages. In this case, the four in-delivery, unacknowledged messages are canceled back to the broker and added to the head of the queue in the reverse order from which they were consumed. This action puts the queue over its configured ring size. Because a ring queue prefers messages at the tail of the queue over messages at the head, the queue discards the first message sent by the producer, because this was the last message added back to the head of the queue. Transaction or core session rollbacks are treated in the same way. If you are using the core client directly, or using an AMQ Core Protocol JMS client, you can minimize the number of messages in delivery by reducing the value of the consumerWindowSize parameter (1024 * 1024 bytes by default). Scheduled messages When a scheduled message is sent to a queue, the message is not immediately added to the tail of the queue like a normal message. Instead, the broker holds the scheduled message in an intermediate buffer and schedules the message for delivery onto the head of the queue, according to the details of the message. However, scheduled messages are still reflected in the message count of the queue. 
As with in-delivery messages, this behavior can make it appear that the broker is not enforcing the ring queue size. For example, consider this scenario: At 12:00, a producer sends a message, A , to a ring queue configured with ring-size="3" . The message is scheduled for 12:05. At this point, messageCount = 1 and scheduledCount = 1 . At 12:01, producer sends message B to the same ring queue. Now, messageCount = 2 and scheduledCount = 1 . At 12:02, producer sends message C to the same ring queue. Now, messageCount = 3 and scheduledCount = 1 . At 12:03, producer sends message D to the same ring queue. Now, messageCount = 4 and scheduledCount = 1 . The message count for the queue is now 4 , one greater than the configured ring size of 3 . However, the scheduled message is not technically on the queue yet (that is, it is on the broker and scheduled to be put on the queue). At the scheduled delivery time of 12:05, the broker puts the message on the head of the queue. However, since the ring queue has already reached its configured size, the scheduled message A is immediately removed. Paged messages Similar to scheduled messages and messages in delivery, paged messages do not count towards the ring queue size enforced by the broker, because messages are actually paged at the address level, not the queue level. A paged message is not technically on a queue, although it is reflected in a queue's messageCount value. It is recommended that you do not use paging for addresses with ring queues. Instead, ensure that the entire address can fit into memory. Or, configure the address-full-policy parameter to a value of DROP , BLOCK or FAIL . Additional resources The broker creates internal instances of ring queues when you configure retroactive addresses. To learn more, see Section 4.20, "Configuring retroactive addresses" . 4.20. Configuring retroactive addresses Configuring an address as retroactive enables you to preserve messages sent to that address, including when there are no queues yet bound to the address. When queues are later created and bound to the address, the broker retroactively distributes messages to those queues. If an address is not configured as retroactive and does not yet have a queue bound to it, the broker discards messages sent to that address. When you configure a retroactive address, the broker creates an internal instance of a type of queue known as a ring queue . A ring queue is a special type of queue that holds a specified, fixed number of messages. Once the queue has reached the specified size, the message that arrives to the queue forces the oldest message out of the queue. When you configure a retroactive address, you indirectly specify the size of the internal ring queue. By default, the internal queue uses the multicast routing type. The internal ring queue used by a retroactive address is exposed via the management API. You can inspect metrics and perform other common management operations, such as emptying the queue. The ring queue also contributes to the overall memory usage of the address, which affects behavior such as message paging. The following procedure shows how to configure an address as retroactive. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Specify a value for the retroactive-message-count parameter in the address-setting element. The value you specify defines the number of messages you want the broker to preserve. For example: <configuration> <core> ... 
<address-settings> <address-setting match="orders"> <retroactive-message-count>100</retroactive-message-count> </address-setting> </address-settings> ... </core> </configuration> Note You can update the value of retroactive-message-count while the broker is running, in either the broker.xml configuration file or the management API. However, if you reduce the value of this parameter, an additional step is required, because retroactive addresses are implemented via ring queues. A ring queue whose ring-size parameter is reduced does not automatically delete messages from the queue to achieve the new ring-size value. This behavior is a safeguard against unintended message loss. In this case, you need to use the management API to manually reduce the number of messages in the ring queue. Additional resources For more information about ring queues, see Section 4.19, "Configuring ring queues" . 4.21. Disabling advisory messages for internally-managed addresses and queues By default, AMQ Broker creates advisory messages about addresses and queues when an OpenWire client is connected to the broker. Advisory messages are sent to internally-managed addresses created by the broker. These addresses appear on the AMQ Management Console within the same display as user-deployed addresses and queues. Although they provide useful information, advisory messages can cause unwanted consequences when the broker manages a large number of destinations. For example, the messages might increase memory usage or strain connection resources. Also, the AMQ Management Console might become cluttered when attempting to display all of the addresses created to send advisory messages. To avoid these situations, you can use the following parameters to configure the behavior of advisory messages on the broker. supportAdvisory Set this option to true to enable creation of advisory messages or false to disable them. The default value is true . suppressInternalManagementObjects Set this option to true to expose the advisory messages to management services such as JMX registry and AMQ Management Console, or false to not expose them. The default value is true . The following procedure shows how to disable advisory messages on the broker. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. For an OpenWire connector, add the supportAdvisory and suppressInternalManagementObjects parameters to the configured URL. Set the values as described earlier in this section. For example: <acceptor name="artemis">tcp://127.0.0.1:61616?protocols=CORE,AMQP,OPENWIRE;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor> 4.22. Federating addresses and queues Federation enables transmission of messages between brokers, without requiring the brokers to be in a common cluster. Brokers can be standalone, or in separate clusters. In addition, the source and target brokers can be in different administrative domains, meaning that the brokers might have different configurations, users, and security setups. The brokers might even be using different versions of AMQ Broker. For example, federation is suitable for reliably sending messages from one cluster to another. This transmission might be across a Wide Area Network (WAN), Regions of a cloud infrastructure, or over the Internet. If connection from a source broker to a target broker is lost (for example, due to network failure), the source broker tries to reestablish the connection until the target broker comes back online. 
When the target broker comes back online, message transmission resumes. Administrators can use address and queue policies to manage federation. Policy configurations can be matched to specific addresses or queues, or the policies can include wildcard expressions that match configurations to sets of addresses or queues. Therefore, federation can be dynamically applied as queues or addresses are added to- or removed from matching sets. Policies can include multiple expressions that include and/or exclude particular addresses and queues. In addition, multiple policies can be applied to brokers or broker clusters. In AMQ Broker, the two primary federation options are address federation and queue federation . These options are described in the sections that follow. Note A broker can include configuration for federated and local-only components. That is, if you configure federation on a broker, you don't need to federate everything on that broker. 4.22.1. About address federation Address federation is like a full multicast distribution pattern between connected brokers. For example, every message sent to an address on BrokerA is delivered to every queue on that broker. In addition, each of the messages is delivered to BrokerB and all attached queues there. Address federation dynamically links a broker to addresses in remote brokers. For example, if a local broker wants to fetch messages from an address on a remote broker, a queue is automatically created on the remote address. Messages on the remote broker are then consumed to this queue. Finally, messages are copied to the corresponding address on the local broker, as though they were originally published directly to the local address. The remote broker does not need to be reconfigured to allow federation to create an address on it. However, the local broker does need to be granted permissions to the remote address. 4.22.2. Common topologies for address federation Some common topologies for the use of address federation are described below. Symmetric topology In a symmetric topology, a producer and consumer are connected to each broker. Queues and their consumers can receive messages published by either producer. An example of a symmetric topology is shown below. Figure 4.1. Address federation in a symmetric topology When configuring address federation for a symmetric topology, it is important to set the value of the max-hops property of the address policy to 1 . This ensures that messages are copied only once , avoiding cyclic replication. If this property is set to a larger value, consumers will receive multiple copies of the same message. Full mesh topology A full mesh topology is similar to a symmetric setup. Three or more brokers symmetrically federate to each other, creating a full mesh. In this setup, a producer and consumer are connected to each broker. Queues and their consumers can receive messages published by any producer. An example of this topology is shown below. Figure 4.2. Address federation in a full mesh topology As with a symmetric setup, when configuring address federation for a full mesh topology, it is important to set the value of the max-hops property of the address policy to 1 . This ensures that messages are copied only once , avoiding cyclic replication. Ring topology In a ring of brokers, each federated address is upstream to just one other in the ring. An example of this topology is shown below. Figure 4.3. 
Address federation in a ring topology When you configure federation for a ring topology, to avoid cyclic replication, it is important to set the max-hops property of the address policy to a value of n-1 , where n is the number of nodes in the ring. For example, in the ring topology shown above, the value of max-hops is set to 5 . This ensures that every address in the ring sees the message exactly once . An advantage of a ring topology is that it is cheap to set up, in terms of the number of physical connections that you need to make. However, a drawback of this type of topology is that if a single broker fails, the whole ring fails. Fan-out topology In a fan-out topology, a single master address is linked-to by a tree of federated addresses. Any message published to the master address can be received by any consumer connected to any broker in the tree. The tree can be configured to any depth. The tree can also be extended without the need to re-configure existing brokers in the tree. An example of this topology is shown below. Figure 4.4. Address federation in a fan-out topology When you configure federation for a fan-out topology, ensure that you set the max-hops property of the address policy to a value of n-1 , where n is the number of levels in the tree. For example, in the fan-out topology shown above, the value of max-hops is set to 2 . This ensures that every address in the tree sees the message exactly once . 4.22.3. Support for divert bindings in address federation configuration When configuring address federation, you can add support for divert bindings in the address policy configuration. Adding this support enables the federation to respond to divert bindings to create a federated consumer for a given address on a remote broker. For example, suppose that an address called test.federation.source is included in the address policy, and another address called test.federation.target is not included. Normally, when a queue is created on test.federation.target , this would not cause a federated consumer to be created, because the address is not part of the address policy. However, if you create a divert binding such that test.federation.source is the source address and test.federation.target is the forwarding address, then a durable consumer is created at the forwarding address. The source address still must use the multicast routing type , but the target address can use multicast or anycast . An example use case is a divert that redirects a JMS topic ( multicast address) to a JMS queue ( anycast address). This enables load balancing of messages on the topic for legacy consumers not supporting JMS 2.0 and shared subscriptions. 4.22.4. Configuring federation for a broker cluster The examples in the sections that follow show how to configure address and queue federation between standalone local and remote brokers. For federation between standalone brokers, the name of the federation configuration, as well as the names of any address and queue policies, must be unique between the local and remote brokers. However, if you are configuring federation for brokers in a cluster , there is an additional requirement. For clustered brokers, the names of the federation configuration, as well as the names of any address and queues policies within that configuration, must be the same for every broker in that cluster. Ensuring that brokers in the same cluster use the same federation configuration and address and queue policy names avoids message duplication. 
For example, if brokers within the same cluster have different federation configuration names, this might lead to a situation where multiple, differently-named forwarding queues are created for the same address, resulting in message duplication for downstream consumers. By contrast, if brokers in the same cluster use the same federation configuration name, this essentially creates replicated, clustered forwarding queues that are load-balanced to the downstream consumers. This avoids message duplication. 4.22.5. Configuring upstream address federation The following example shows how to configure upstream address federation between standalone brokers. In this example, you configure federation from a local (that is, downstream ) broker, to some remote (that is, upstream ) brokers. Prerequisites The following example shows how to configure address federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4, "Configuring federation for a broker cluster" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a new <federations> element that includes a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> name Name of the federation configuration. In this example, the name corresponds to the name of the local broker. user Shared user name for connection to the upstream brokers. password Shared password for connection to the upstream brokers. Note If user and password credentials differ for remote brokers, you can separately specify credentials for those brokers when you add them to the configuration. This is described later in this procedure. Within the federation element, add an <address-policy> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> </address-policy> </federation> </federations> name Name of the address policy. All address policies that are configured on the broker must have unique names. auto-delete During address federation, the local broker dynamically creates a durable queue at the remote address. The value of the auto-delete property specifies whether the remote queue should be deleted once the local broker disconnects and the values of the auto-delete-delay and auto-delete-message-count properties have also been reached. This is a useful option if you want to automate the cleanup of dynamically-created queues. It is also a useful option if you want to prevent a buildup of messages on a remote broker if the local broker is disconnected for a long time. However, you might set this option to false if you want messages to always remain queued for the local broker while it is disconnected, avoiding message loss on the local broker. auto-delete-delay After the local broker has disconnected, the value of this property specifies the amount of time, in milliseconds, before dynamically-created remote queues are eligible to be automatically deleted. 
auto-delete-message-count After the local broker has been disconnected, the value of this property specifies the maximum number of messages that can still be in a dynamically-created remote queue before that queue is eligible to be automatically deleted. enable-divert-bindings Setting this property to true enables divert bindings to be listened-to for demand. If there is a divert binding with an address that matches the included addresses for the address policy, then any queue bindings that match the forwarding address of the divert will create demand. The default value is false . max-hops Maximum number of hops that a message can make during federation. Particular topologies require specific values for this property. To learn more about these requirements, see Section 4.22.2, "Common topologies for address federation" . transformer-ref Name of a transformer configuration. You might add a transformer configuration if you want to transform messages during federated message transmission. Transformer configuration is described later in this procedure. Within the <address-policy> element, add address-matching patterns to include and exclude addresses from the address policy. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> </federation> </federations> include The value of the address-match property of this element specifies addresses to include in the address policy. You can specify an exact address, for example, queue.bbc.new or queue.usatoday . Or, you can use a wildcard expression to specify a matching set of addresses. In the preceding example, the address policy also includes all address names that start with the string queue.news . exclude The value of the address-match property of this element specifies addresses to exclude from the address policy. You can specify an exact address name or use a wildcard expression to specify a matching set of addresses. In the preceding example, the address policy excludes all address names that start with the string queue.news.sport . (Optional) Within the federation element, add a transformer element to reference a custom transformer implementation. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> name Name of the transformer configuration. This name must be unique on the local broker. This is the name that you specify as a value for the transformer-ref property of the address policy. 
class-name Name of a user-defined class that implements the org.apache.activemq.artemis.core.server.transformer.Transformer interface. The transformer's transform() method is invoked with the message before the message is transmitted. This enables you to transform the message header or body before it is federated. property Used to hold key-value pairs for specific transformer configuration. Within the federation element, add one or more upstream elements. Each upstream element defines a connection to a remote broker and the policies to apply to that connection. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <upstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref="news-address-federation"/> </upstream> <upstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref="news-address-federation"/> </upstream> <address-policy name="news-address-federation" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" enable-divert-bindings="false" max-hops="1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> static-connectors Contains a list of connector-ref elements that reference connector elements that are defined elsewhere in the broker.xml configuration file of the local broker. A connector defines what transport (TCP, SSL, HTTP, and so on) and server connection parameters (host, port, and so on) to use for outgoing connections. The step of this procedure shows how to add the connectors that are referenced in the static-connectors element. policy-ref Name of the address policy configured on the downstream broker that is applied to the upstream broker. The additional options that you can specify for an upstream element are described below: name Name of the upstream broker configuration. In this example, the names correspond to upstream brokers called eu-east-1 and eu-west-1 . user User name to use when creating the connection to the upstream broker. If not specified, the shared user name that is specified in the configuration of the federation element is used. password Password to use when creating the connection to the upstream broker. If not specified, the shared password that is specified in the configuration of the federation element is used. call-failover-timeout Similar to call-timeout , but used when a call is made during a failover attempt. The default value is -1 , which means that the timeout is disabled. call-timeout Time, in milliseconds, that a federation connection waits for a reply from a remote broker when it transmits a packet that is a blocking call. If this time elapses, the connection throws an exception. The default value is 30000 . check-period Period, in milliseconds, between consecutive "keep-alive" messages that the local broker sends to a remote broker to check the health of the federation connection. If the federation connection is healthy, the remote broker responds to each keep-alive message. 
If the connection is unhealthy, when the downstream broker fails to receive a response from the upstream broker, a mechanism called a circuit breaker is used to block federated consumers. See the description of the circuit-breaker-timeout parameter for more information. The default value of the check-period parameter is 30000 . circuit-breaker-timeout A single connection between a downstream and upstream broker might be shared by many federated queue and address consumers. In the event that the connection between the brokers is lost, each federated consumer might try to reconnect at the same time. To avoid this, a mechanism called a circuit breaker blocks the consumers. When the specified timeout value elapses, the circuit breaker re-tries the connection. If successful, consumers are unblocked. Otherwise, the circuit breaker is applied again. connection-ttl Time, in milliseconds, that a federation connection stays alive if it stops receiving messages from the remote broker. The default value is 60000 . discovery-group-ref As an alternative to defining static connectors for connections to upstream brokers, this element can be used to specify a discovery group that is already configured elsewhere in the broker.xml configuration file. Specifically, you specify an existing discovery group as a value for the discovery-group-name property of this element. For more information about discovery groups, see Section 14.1.5, "Broker discovery methods" . ha Specifies whether high availability is enabled for the connection to the upstream broker. If the value of this parameter is set to true , the local broker can connect to any available broker in an upstream cluster and automatically fails over to a backup broker if the live upstream broker shuts down. The default value is false . initial-connect-attempts Number of initial attempts that the downstream broker will make to connect to the upstream broker. If this value is reached without a connection being established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. max-retry-interval Maximum time, in milliseconds, between subsequent reconnection attempts when connection to the remote broker fails. The default value is 2000 . reconnect-attempts Number of times that the downstream broker will try to reconnect to the upstream broker if the connection fails. If this value is reached without a connection being re-established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. retry-interval Period, in milliseconds, between subsequent reconnection attempts, if connection to the remote broker has failed. The default value is 500 . retry-interval-multiplier Multiplying factor that is applied to the value of the retry-interval parameter. The default value is 1 . share-connection If there is both a downstream and upstream connection configured for the same broker, then the same connection will be shared, as long as both of the downstream and upstream configurations set the value of this parameter to true . The default value is false . On the local broker, add connectors to the remote brokers. These are the connectors referenced in the static-connectors elements of your federated address configuration. 
For example: <connectors> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> 4.22.6. Configuring downstream address federation The following example shows how to configure downstream address federation for standalone brokers. Downstream address federation enables you to add configuration on the local broker that one or more remote brokers use to connect back to the local broker. The advantage of this approach is that you can keep all federation configuration on a single broker. This might be a useful approach for a hub-and-spoke topology, for example. Note Downstream address federation reverses the direction of the federation connection versus upstream address configuration. Therefore, when you add remote brokers to your configuration, these become considered as the downstream brokers. The downstream brokers use the connection information in the configuration to connect back to the local broker, which is now considered to be upstream. This is illustrated later in this example, when you add configuration for the remote brokers. Prerequisites You should be familiar with the configuration for upstream address federation. See Section 4.22.5, "Configuring upstream address federation" . The following example shows how to configure address federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4, "Configuring federation for a broker cluster" . Procedure On the local broker, open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <federations> element that includes a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> Add an address policy configuration. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" max-hops="1" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> </federation> ... </federations> If you want to transform messages before transmission, add a transformer configuration. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <address-policy name="news-address-federation" max-hops="1" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> Add a downstream element for each remote broker. For example: <federations> ... 
<federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <downstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref="news-address-federation"/> </downstream> <downstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref="news-address-federation"/> </downstream> <address-policy name="news-address-federation" max-hops="1" auto-delete="true" auto-delete-delay="300000" auto-delete-message-count="-1" transformer-ref="news-transformer"> <include address-match="queue.bbc.new" /> <include address-match="queue.usatoday" /> <include address-match="queue.news.#" /> <exclude address-match="queue.news.sport.#" /> </address-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> As shown in the preceding configuration, the remote brokers are now considered to be downstream of the local broker. The downstream brokers use the connection information in the configuration to connect back to the local (that is, upstream ) broker. On the local broker, add connectors and acceptors used by the local and remote brokers to establish the federation connection. For example: <connectors> <connector name="netty-connector">tcp://localhost:61616</connector> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor> </acceptors> connector name="netty-connector" Connector configuration that the local broker sends to the remote broker. The remote broker use this configuration to connect back to the local broker. connector name="eu-west-1-connector" , connector name="eu-east-1-connector" Connectors to remote brokers. The local broker uses these connectors to connect to the remote brokers and share the configuration that the remote brokers need to connect back to the local broker. acceptor name="netty-acceptor" Acceptor on the local broker that corresponds to the connector used by the remote broker to connect back to the local broker. 4.22.7. About queue federation Queue federation provides a way to balance the load of a single queue on a local broker across other, remote brokers. To achieve load balancing, a local broker retrieves messages from remote queues in order to satisfy demand for messages from local consumers. An example is shown below. Figure 4.5. Symmetric queue federation The remote queues do not need to be reconfigured and they do not have to be on the same broker or in the same cluster. All of the configuration needed to establish the remote links and the federated queue is on the local broker. 4.22.7.1. Advantages of queue federation Described below are some reasons you might choose to configure queue federation. Increasing capacity Queue federation can create a "logical" queue that is distributed over many brokers. This logical distributed queue has a much higher capacity than a single queue on a single broker. In this setup, as many messages as possible are consumed from the broker they were originally published to. 
The system moves messages around in the federation only when load balancing is needed. Deploying multi-region setups In a multi-region setup, you might have a message producer in one region or venue and a consumer in another. However, you should ideally keep producer and consumer connections local to a given region. In this case, you can deploy brokers in each region where producers and consumers are, and use queue federation to move messages over a Wide Area Network (WAN), between regions. An example is shown below. Figure 4.6. Multi-region queue federation Communicating between a secure enterprise LAN and a DMZ In networking security, a demilitarized zone (DMZ) is a physical or logical subnetwork that contains and exposes an enterprise's external-facing services to an untrusted, usually larger, network such as the Internet. The remainder of the enterprise's Local Area Network (LAN) remains isolated from this external network, behind a firewall. In a situation where a number of message producers are in the DMZ and a number of consumers in the secure enterprise LAN, it might not be appropriate to allow the producers to connect to a broker in the secure enterprise LAN. In this case, you could deploy a broker in the DMZ that the producers can publish messages to. Then, the broker in the enterprise LAN can connect to the broker in the DMZ and use federated queues to receive messages from the broker in the DMZ. 4.22.8. Configuring upstream queue federation The following example shows how to configure upstream queue federation for standalone brokers. In this example, you configure federation from a local (that is, downstream ) broker, to some remote (that is, upstream ) brokers. Prerequisites The following example shows how to configure queue federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4, "Configuring federation for a broker cluster" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within a new <federations> element, add a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> name Name of the federation configuration. In this example, the name corresponds to the name of the downstream broker. user Shared user name for connection to the upstream brokers. password Shared password for connection to the upstream brokers. Note If user and password credentials differ for upstream brokers, you can separately specify credentials for those brokers when you add them to the configuration. This is described later in this procedure. Within the federation element, add a <queue-policy> element. Specify values for properties of the <queue-policy> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> </queue-policy> </federation> </federations> name Name of the queue policy. All queue policies that are configured on the broker must have unique names. include-federated When the value of this property is set to false , the configuration does not re-federate an already-federated consumer (that is, a consumer on a federated queue). 
This avoids a situation where in a symmetric or closed-loop topology, there are no non-federated consumers, and messages flow endlessly around the system. You might set the value of this property to true if you do not have a closed-loop topology. For example, suppose that you have a chain of three brokers, BrokerA , BrokerB , and BrokerC , with a producer at BrokerA and a consumer at BrokerC . In this case, you would want BrokerB to re-federate the consumer to BrokerA . priority-adjustment When a consumer connects to a queue, its priority is used when the upstream (that is federated ) consumer is created. The priority of the federated consumer is adjusted by the value of the priority-adjustment property. The default value of this property is -1 , which ensures that the local consumer get prioritized over the federated consumer during load balancing. However, you can change the value of the priority adjustment as needed. transformer-ref Name of a transformer configuration. You might add a transformer configuration if you want to transform messages during federated message transmission. Transformer configuration is described later in this procedure. Within the <queue-policy> element, add address-matching patterns to include and exclude addresses from the queue policy. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> </federation> </federations> include The value of the address-match property of this element specifies addresses to include in the queue policy. You can specify an exact address, for example, queue.bbc.new or queue.usatoday . Or, you can use a wildcard expression to specify a matching set of addresses. In the preceding example, the queue policy also includes all address names that start with the string queue.news . In combination with the address-match property, you can use the queue-match property to include specific queues on those addresses in the queue policy. Like the address-match property, you can specify an exact queue name, or you can use a wildcard expression to specify a set of queues. In the preceding example, the number sign ( # ) wildcard character means that all queues on each address or set of addresses are included in the queue policy. exclude The value of the address-match property of this element specifies addresses to exclude from the queue policy. You can specify an exact address or use a wildcard expression to specify a matching set of addresses. In the preceding example, the number sign ( # ) wildcard character means that any queues that match the queue-match property across all addresses are excluded. In this case, any queue that ends with the string .local is excluded. This indicates that certain queues are kept as local queues, and not federated. Within the federation element, add a transformer element to reference a custom transformer implementation. 
For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> name Name of the transformer configuration. This name must be unique on the broker in question. You specify this name as a value for the transformer-ref property of the address policy. class-name Name of a user-defined class that implements the org.apache.activemq.artemis.core.server.transformer.Transformer interface. The transformer's transform() method is invoked with the message before the message is transmitted. This enables you to transform the message header or body before it is federated. property Used to hold key-value pairs for specific transformer configuration. Within the federation element, add one or more upstream elements. Each upstream element defines an upstream broker connection and the policies to apply to that connection. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <upstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref="news-queue-federation"/> </upstream> <upstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref="news-queue-federation"/> </upstream> <queue-policy name="news-queue-federation" include-federated="true" priority-adjustment="-5" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> </federations> static-connectors Contains a list of connector-ref elements that reference connector elements that are defined elsewhere in the broker.xml configuration file of the local broker. A connector defines what transport (TCP, SSL, HTTP, and so on) and server connection parameters (host, port, and so on) to use for outgoing connections. The following step of this procedure shows how to add the connectors referenced by the static-connectors elements of your federated queue configuration. policy-ref Name of the queue policy configured on the downstream broker that is applied to the upstream broker. The additional options that you can specify for an upstream element are described below: name Name of the upstream broker configuration. In this example, the names correspond to upstream brokers called eu-east-1 and eu-west-1 . user User name to use when creating the connection to the upstream broker. If not specified, the shared user name that is specified in the configuration of the federation element is used. 
password Password to use when creating the connection to the upstream broker. If not specified, the shared password that is specified in the configuration of the federation element is used. call-failover-timeout Similar to call-timeout , but used when a call is made during a failover attempt. The default value is -1 , which means that the timeout is disabled. call-timeout Time, in milliseconds, that a federation connection waits for a reply from a remote broker when it transmits a packet that is a blocking call. If this time elapses, the connection throws an exception. The default value is 30000 . check-period Period, in milliseconds, between consecutive "keep-alive" messages that the local broker sends to a remote broker to check the health of the federation connection. If the federation connection is healthy, the remote broker responds to each keep-alive message. If the connection is unhealthy, when the downstream broker fails to receive a response from the upstream broker, a mechanism called a circuit breaker is used to block federated consumers. See the description of the circuit-breaker-timeout parameter for more information. The default value of the check-period parameter is 30000 . circuit-breaker-timeout A single connection between a downstream and upstream broker might be shared by many federated queue and address consumers. In the event that the connection between the brokers is lost, each federated consumer might try to reconnect at the same time. To avoid this, a mechanism called a circuit breaker blocks the consumers. When the specified timeout value elapses, the circuit breaker re-tries the connection. If successful, consumers are unblocked. Otherwise, the circuit breaker is applied again. connection-ttl Time, in milliseconds, that a federation connection stays alive if it stops receiving messages from the remote broker. The default value is 60000 . discovery-group-ref As an alternative to defining static connectors for connections to upstream brokers, this element can be used to specify a discovery group that is already configured elsewhere in the broker.xml configuration file. Specifically, you specify an existing discovery group as a value for the discovery-group-name property of this element. For more information about discovery groups, see Section 14.1.5, "Broker discovery methods" . ha Specifies whether high availability is enabled for the connection to the upstream broker. If the value of this parameter is set to true , the local broker can connect to any available broker in an upstream cluster and automatically fails over to a backup broker if the live upstream broker shuts down. The default value is false . initial-connect-attempts Number of initial attempts that the downstream broker will make to connect to the upstream broker. If this value is reached without a connection being established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. The default value is -1 , which means that there is no limit. max-retry-interval Maximum time, in milliseconds, between subsequent reconnection attempts when connection to the remote broker fails. The default value is 2000 . reconnect-attempts Number of times that the downstream broker will try to reconnect to the upstream broker if the connection fails. If this value is reached without a connection being re-established, the upstream broker is considered permanently offline. The downstream broker no longer routes messages to the upstream broker. 
The default value is -1 , which means that there is no limit. retry-interval Period, in milliseconds, between subsequent reconnection attempts, if connection to the remote broker has failed. The default value is 500 . retry-interval-multiplier Multiplying factor that is applied to the value of the retry-interval parameter. The default value is 1 . share-connection If there is both a downstream and upstream connection configured for the same broker, then the same connection will be shared, as long as both of the downstream and upstream configurations set the value of this parameter to true . The default value is false . On the local broker, add connectors to the remote brokers. These are the connectors referenced in the static-connectors elements of your federated queue configuration. For example: <connectors> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> 4.22.9. Configuring downstream queue federation The following example shows how to configure downstream queue federation. Downstream queue federation enables you to add configuration on the local broker that one or more remote brokers use to connect back to the local broker. The advantage of this approach is that you can keep all federation configuration on a single broker. This might be a useful approach for a hub-and-spoke topology, for example. Note Downstream queue federation reverses the direction of the federation connection versus upstream queue configuration. Therefore, when you add remote brokers to your configuration, these become considered as the downstream brokers. The downstream brokers use the connection information in the configuration to connect back to the local broker, which is now considered to be upstream. This is illustrated later in this example, when you add configuration for the remote brokers. Prerequisites You should be familiar with the configuration for upstream queue federation. See Section 4.22.8, "Configuring upstream queue federation" . The following example shows how to configure queue federation between standalone brokers. However, you should also be familiar with the requirements for configuring federation for a broker cluster . For more information, see Section 4.22.4, "Configuring federation for a broker cluster" . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Add a <federations> element that includes a <federation> element. For example: <federations> <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> </federation> </federations> Add a queue policy configuration. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> </federation> ... </federations> If you want to transform messages before transmission, add a transformer configuration. For example: <federations> ...
<federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> Add a downstream element for each remote broker. For example: <federations> ... <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9"> <downstream name="eu-east-1"> <static-connectors> <connector-ref>eu-east-1-connector</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref="news-queue-federation"/> </downstream> <downstream name="eu-west-1" > <static-connectors> <connector-ref>eu-west-1-connector</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref="news-queue-federation"/> </downstream> <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="news-transformer"> <include queue-match="#" address-match="queue.bbc.new" /> <include queue-match="#" address-match="queue.usatoday" /> <include queue-match="#" address-match="queue.news.#" /> <exclude queue-match="#.local" address-match="#" /> </queue-policy> <transformer name="news-transformer"> <class-name>org.foo.NewsTransformer</class-name> <property key="key1" value="value1"/> <property key="key2" value="value2"/> </transformer> </federation> ... </federations> As shown in the preceding configuration, the remote brokers are now considered to be downstream of the local broker. The downstream brokers use the connection information in the configuration to connect back to the local (that is, upstream ) broker. On the local broker, add connectors and acceptors used by the local and remote brokers to establish the federation connection. For example: <connectors> <connector name="netty-connector">tcp://localhost:61616</connector> <connector name="eu-west-1-connector">tcp://localhost:61616</connector> <connector name="eu-east-1-connector">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor> </acceptors> connector name="netty-connector" Connector configuration that the local broker sends to the remote broker. The remote broker uses this configuration to connect back to the local broker. connector name="eu-west-1-connector" , connector name="eu-east-1-connector" Connectors to remote brokers. The local broker uses these connectors to connect to the remote brokers and share the configuration that the remote brokers need to connect back to the local broker. acceptor name="netty-acceptor" Acceptor on the local broker that corresponds to the connector used by the remote broker to connect back to the local broker.
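The preceding steps show each element in isolation. As a point of reference only, the following condensed sketch illustrates how the federation, connector, and acceptor elements from this example might fit together inside the <core> element of the local broker's <broker_instance_dir> /etc/broker.xml file. All names, hosts, ports, and credentials are the placeholder values used earlier in this section, the include and exclude patterns are abbreviated, and the ellipses stand for the rest of your existing configuration. Element ordering and placement within <core> must follow the schema of your AMQ Broker version, so adapt the sketch to your own broker.xml rather than copying it verbatim. Note that each connector-ref value must exactly match the name of a connector defined in the <connectors> element.

<core>
   ...
   <connectors>
      <!-- Connector that the local broker sends to the remote brokers so that they can connect back -->
      <connector name="netty-connector">tcp://localhost:61616</connector>
      <!-- Connectors that the local broker uses to reach the remote brokers -->
      <connector name="eu-west-1-connector">tcp://localhost:61616</connector>
      <connector name="eu-east-1-connector">tcp://localhost:61617</connector>
   </connectors>
   <acceptors>
      <!-- Acceptor on the local broker that receives the federation connections made back by the remote brokers -->
      <acceptor name="netty-acceptor">tcp://localhost:61616</acceptor>
   </acceptors>
   <federations>
      <federation name="eu-north-1" user="federation_username" password="32a10275cf4ab4e9">
         <queue-policy name="news-queue-federation" priority-adjustment="-5" include-federated="true" transformer-ref="news-transformer">
            <include queue-match="#" address-match="queue.news.#" />
            <exclude queue-match="#.local" address-match="#" />
         </queue-policy>
         <transformer name="news-transformer">
            <class-name>org.foo.NewsTransformer</class-name>
         </transformer>
         <downstream name="eu-east-1">
            <static-connectors>
               <connector-ref>eu-east-1-connector</connector-ref>
            </static-connectors>
            <transport-connector-ref>netty-connector</transport-connector-ref>
            <policy ref="news-queue-federation"/>
         </downstream>
         <downstream name="eu-west-1">
            <static-connectors>
               <connector-ref>eu-west-1-connector</connector-ref>
            </static-connectors>
            <transport-connector-ref>netty-connector</transport-connector-ref>
            <policy ref="news-queue-federation"/>
         </downstream>
      </federation>
   </federations>
   ...
</core>

With configuration along these lines in place on the local broker, the remote eu-east-1 and eu-west-1 brokers connect back through the connector configuration shared with them and create federated consumers for matching queues that have local demand, as described earlier in this section.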
auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <upstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref=\"news-address-federation\"/> </upstream> <upstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref=\"news-address-federation\"/> </upstream> <address-policy name=\"news-address-federation\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" enable-divert-bindings=\"false\" max-hops=\"1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<connectors> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" max-hops=\"1\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <address-policy name=\"news-address-federation\" max-hops=\"1\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> 
<class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <downstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <downstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <address-policy name=\"news-address-federation\" max-hops=\"1\" auto-delete=\"true\" auto-delete-delay=\"300000\" auto-delete-message-count=\"-1\" transformer-ref=\"news-transformer\"> <include address-match=\"queue.bbc.new\" /> <include address-match=\"queue.usatoday\" /> <include address-match=\"queue.news.#\" /> <exclude address-match=\"queue.news.sport.#\" /> </address-policy> <transformer name=\"news-transformer\"> <class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<connectors> <connector name=\"netty-connector\">tcp://localhost:61616</connector> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name=\"netty-acceptor\">tcp://localhost:61616</acceptor> </acceptors>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> </queue-policy> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <upstream name=\"eu-east-1\"> 
<static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <policy ref=\"news-queue-federation\"/> </upstream> <upstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <policy ref=\"news-queue-federation\"/> </upstream> <queue-policy name=\"news-queue-federation\" include-federated=\"true\" priority-adjustment=\"-5\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<connectors> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" priority-adjustment=\"-5\" include-federated=\"true\" transformer-ref=\"new-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <queue-policy name=\"news-queue-federation\" priority-adjustment=\"-5\" include-federated=\"true\" transformer-ref=\"news-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> <class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<federations> <federation name=\"eu-north-1\" user=\"federation_username\" password=\"32a10275cf4ab4e9\"> <downstream name=\"eu-east-1\"> <static-connectors> <connector-ref>eu-east-connector1</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <downstream name=\"eu-west-1\" > <static-connectors> <connector-ref>eu-west-connector1</connector-ref> </static-connectors> <transport-connector-ref>netty-connector</transport-connector-ref> <policy ref=\"news-address-federation\"/> </downstream> <queue-policy name=\"news-queue-federation\" priority-adjustment=\"-5\" include-federated=\"true\" transformer-ref=\"new-transformer\"> <include queue-match=\"#\" address-match=\"queue.bbc.new\" /> <include queue-match=\"#\" address-match=\"queue.usatoday\" /> <include queue-match=\"#\" address-match=\"queue.news.#\" /> <exclude queue-match=\"#.local\" address-match=\"#\" /> </queue-policy> <transformer name=\"news-transformer\"> 
<class-name>org.foo.NewsTransformer</class-name> <property key=\"key1\" value=\"value1\"/> <property key=\"key2\" value=\"value2\"/> </transformer> </federation> </federations>", "<connectors> <connector name=\"netty-connector\">tcp://localhost:61616</connector> <connector name=\"eu-west-1-connector\">tcp://localhost:61616</connector> <connector name=\"eu-east-1-connector\">tcp://localhost:61617</connector> </connectors> <acceptors> <acceptor name=\"netty-acceptor\">tcp://localhost:61616</acceptor> </acceptors>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/assembly-br-configuring-addresses-and-queues_configuring
Chapter 2. Using SystemTap
Chapter 2. Using SystemTap This chapter explains how to install SystemTap and provides an introduction to running SystemTap scripts. 2.1. Installation and Setup To deploy SystemTap, you must install the SystemTap packages along with the corresponding set of -devel , -debuginfo and -debuginfo-common- arch packages for the kernel. If a system has multiple kernels installed and you want to use SystemTap on more than one of them, install the -devel and -debuginfo packages for each of those kernel versions. These procedures are discussed in detail in the following sections. Important Many users confuse -debuginfo with -debug packages. Remember that the deployment of SystemTap requires the installation of the -debuginfo package of the kernel, not the -debug version of the kernel. 2.1.1. Installing SystemTap To deploy SystemTap, install the systemtap and systemtap-runtime packages by running the following command as root : 2.1.2. Installing Required Kernel Information Packages SystemTap needs information about the kernel in order to place instrumentation in it (probe it). This information, which allows SystemTap to generate the code for the instrumentation, is contained in the matching kernel-devel , kernel-debuginfo , and kernel-debuginfo-common- arch packages (where arch is the hardware platform of your system, which you can determine by running the uname -m command). While the kernel-devel package is available from the default Red Hat Enterprise Linux repository, the kernel-debuginfo and kernel-debuginfo-common- arch packages are available from the debug repository. To install the required packages, enable the debug repository for your system: In the above command, replace variant with server , workstation , or client , depending on the variant of the Red Hat Enterprise Linux system you are using. To determine the variant, you can use the following command: The version, variant, and architecture of the kernel-devel , kernel-debuginfo , and kernel-debuginfo-common- arch packages must exactly match the kernel to be probed with SystemTap. To determine what kernel your system is currently running, use: For example, if you wish to use SystemTap on kernel version 3.10.0-327.4.4.el7 on an AMD64 or Intel 64 machine, you need to install the following packages: kernel-debuginfo-3.10.0-327.4.4.el7.x86_64.rpm kernel-debuginfo-common-x86_64-3.10.0-327.4.4.el7.x86_64.rpm kernel-devel-3.10.0-327.4.4.el7.x86_64.rpm To use the yum package manager to install the packages required for running SystemTap on the current kernel, execute the following command as root : 2.1.3. Initial Testing If the kernel to be probed with SystemTap is currently being used, it is possible to immediately test whether the deployment was successful. If a different kernel is to be probed, reboot and load the appropriate kernel. To start the test, run the following command: This command simply instructs SystemTap to print read performed and then exit properly once a virtual file system read is detected. If the SystemTap deployment was successful, you should get output similar to the following: The last three lines of the output (beginning with Pass 5 ) indicate that SystemTap was able to successfully create the instrumentation to probe the kernel, run the instrumentation, detect the event being probed (in this case, a virtual file system read), and execute a valid handler (print text and then exit without errors).
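In addition to one-line probes passed with the -e option, SystemTap scripts are normally saved in files with a .stp extension and passed to the stap command directly. The following short script is an illustrative sketch only (it is not part of the official examples in this guide); the file name top_reads.stp is arbitrary. It counts virtual file system reads per process for ten seconds and then prints the ten busiest processes:
# top_reads.stp - count vfs reads per process for 10 seconds
global reads
probe vfs.read { reads[execname()]++ }
probe timer.s(10) {
  foreach (name in reads- limit 10)
    printf("%-20s %d\n", name, reads[name])
  exit()
}
Run it as root with stap -v top_reads.stp. The same five compilation and run passes shown in the initial test output appear before the script starts producing results.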
[ "~]# yum install -y systemtap systemtap-runtime", "~]# subscription-manager repos --enable=rhel-7- variant -debug-rpms", "~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 7.2 (Maipo)", "uname -r 3.10.0-327.el7.x86_64", "~]# yum install -y kernel-devel-USD(uname -r) kernel-debuginfo-USD(uname -r) kernel-debuginfo-common-USD(uname -m)-USD(uname -r)", "stap -v -e 'probe vfs.read {printf(\"read performed\\n\"); exit()}'", "Pass 1: parsed user script and 45 library script(s) in 340usr/0sys/358real ms. Pass 2: analyzed script: 1 probe(s), 1 function(s), 0 embed(s), 0 global(s) in 290usr/260sys/568real ms. Pass 3: translated to C into \"/tmp/stapiArgLX/stap_e5886fa50499994e6a87aacdc43cd392_399.c\" in 490usr/430sys/938real ms. Pass 4: compiled C into \"stap_e5886fa50499994e6a87aacdc43cd392_399.ko\" in 3310usr/430sys/3714real ms. Pass 5: starting run. read performed Pass 5: run completed in 10usr/40sys/73real ms." ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_beginners_guide/using-systemtap
probe::nfs.aop.readpage
probe::nfs.aop.readpage Name probe::nfs.aop.readpage - NFS client synchronously reading a page Synopsis nfs.aop.readpage Values size number of pages to be read in this execution i_flag file flags file file argument ino inode number i_size file length in bytes dev device identifier rsize read size (in bytes) __page the address of the page sb_flag super block flags page_index offset within mapping, can be used as a page identifier and position identifier in the page frame Description Read the page over again; this probe fires only when an asynchronous read operation failed
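As an illustration only (this is not part of the reference entry), the probe and the variables listed above can be used from a short SystemTap script such as the following sketch, which prints one line each time the probe fires:
probe nfs.aop.readpage {
  printf("nfs readpage: dev=%d ino=%d page_index=%d size=%d\n", dev, ino, page_index, size)
}
Run the script with stap -v <file>.stp as root and stop it with Ctrl+C; each line reports the device identifier, inode number, page index, and number of pages for that read.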
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-aop-readpage
Chapter 52. MongoDB Source
Chapter 52. MongoDB Source Consume documents from MongoDB. If the persistentTailTracking option is enabled, the consumer keeps track of the last consumed message and, on restart, resumes consumption from that message. When persistentTailTracking is enabled, the tailTrackIncreasingField must be provided (it is otherwise optional). If persistentTailTracking is not enabled, the consumer consumes the whole collection and then waits idle for new documents to consume. 52.1. Configuration Options The following table summarizes the configuration options available for the mongodb-source Kamelet: Property Name Description Type Default Example collection * MongoDB Collection Sets the name of the MongoDB collection to bind to this endpoint. string database * MongoDB Database Sets the name of the MongoDB database to target. string hosts * MongoDB Hosts Comma-separated list of MongoDB Host Addresses in host:port format. string password * MongoDB Password User password for accessing MongoDB. string username * MongoDB Username Username for accessing MongoDB. The username must be present in the MongoDB's authentication database (authenticationDatabase). By default, the MongoDB authenticationDatabase is 'admin'. string persistentTailTracking MongoDB Persistent Tail Tracking Enable persistent tail tracking, which is a mechanism to keep track of the last consumed message across system restarts. The next time the system is up, the endpoint recovers the cursor from the point where it last stopped consuming records. boolean false tailTrackIncreasingField MongoDB Tail Track Increasing Field Correlation field in the incoming record which is of increasing nature and is used to position the tailing cursor every time it is generated. string Note Fields marked with an asterisk (*) are mandatory. 52.2. Dependencies At runtime, the mongodb-source Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:mongodb camel:jackson 52.3. Usage This section describes how you can use the mongodb-source . 52.3.1. Knative Source You can use the mongodb-source Kamelet as a Knative source by binding it to a Knative object. mongodb-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" password: "The MongoDB Password" username: "The MongoDB Username" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 52.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 52.3.1.2. Procedure for using the cluster CLI Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f mongodb-source-binding.yaml 52.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 52.3.2.
Kafka Source You can use the mongodb-source Kamelet as a Kafka source by binding it to a Kafka topic. mongodb-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: "The MongoDB Collection" database: "The MongoDB Database" hosts: "The MongoDB Hosts" password: "The MongoDB Password" username: "The MongoDB Username" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 52.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 52.3.2.2. Procedure for using the cluster CLI Save the mongodb-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f mongodb-source-binding.yaml 52.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind mongodb-source -p "source.collection=The MongoDB Collection" -p "source.database=The MongoDB Database" -p "source.hosts=The MongoDB Hosts" -p "source.password=The MongoDB Password" -p "source.username=The MongoDB Username" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 52.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mongodb-source.kamelet.yaml
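Note that the binding examples above set only the mandatory properties. To enable the persistent tail tracking behavior described in the configuration options table, add the two optional properties to the properties block of either binding. The following fragment is a sketch; the field name updatedAt is a hypothetical example of an increasing field in your documents:
properties:
  collection: "The MongoDB Collection"
  database: "The MongoDB Database"
  hosts: "The MongoDB Hosts"
  password: "The MongoDB Password"
  username: "The MongoDB Username"
  persistentTailTracking: true
  tailTrackIncreasingField: "updatedAt"
Remember that when persistentTailTracking is enabled, tailTrackIncreasingField becomes mandatory.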
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\" password: \"The MongoDB Password\" username: \"The MongoDB Username\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f mongodb-source-binding.yaml", "kamel bind mongodb-source -p \"source.collection=The MongoDB Collection\" -p \"source.database=The MongoDB Database\" -p \"source.hosts=The MongoDB Hosts\" -p \"source.password=The MongoDB Password\" -p \"source.username=The MongoDB Username\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mongodb-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mongodb-source properties: collection: \"The MongoDB Collection\" database: \"The MongoDB Database\" hosts: \"The MongoDB Hosts\" password: \"The MongoDB Password\" username: \"The MongoDB Username\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f mongodb-source-binding.yaml", "kamel bind mongodb-source -p \"source.collection=The MongoDB Collection\" -p \"source.database=The MongoDB Database\" -p \"source.hosts=The MongoDB Hosts\" -p \"source.password=The MongoDB Password\" -p \"source.username=The MongoDB Username\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/mongodb-source
Chapter 14. Updating OpenShift Logging
Chapter 14. Updating OpenShift Logging 14.1. Supported Versions For version compatibility and support information, see Red Hat OpenShift Container Platform Life Cycle Policy . To upgrade from cluster logging in OpenShift Container Platform version 4.6 and earlier to OpenShift Logging 5.x, you update the OpenShift Container Platform cluster to version 4.7 or 4.8. Then, you update the following operators: From Elasticsearch Operator 4.x to OpenShift Elasticsearch Operator 5.x From Cluster Logging Operator 4.x to Red Hat OpenShift Logging Operator 5.x To upgrade from a previous version of OpenShift Logging to the current version, you update OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator to their current versions. 14.2. Updating Logging to the current version To update Logging to the current version, you change the subscriptions for the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator. Important You must update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. You must also update both Operators to the same version. If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To work around this problem, you delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. Prerequisites The OpenShift Container Platform version is 4.7 or later. The Logging status is healthy: All pods are ready . The Elasticsearch cluster is healthy. Your Elasticsearch and Kibana data is backed up. Procedure Update the OpenShift Elasticsearch Operator: From the web console, click Operators → Installed Operators . Select the openshift-operators-redhat project. Click the OpenShift Elasticsearch Operator . Click Subscription → Channel . In the Change Subscription Update Channel window, select stable-5.x and click Save . Wait for a few seconds, then click Operators → Installed Operators . Verify that the OpenShift Elasticsearch Operator version is 5.x.x. Wait for the Status field to report Succeeded . Update the Red Hat OpenShift Logging Operator: From the web console, click Operators → Installed Operators . Select the openshift-logging project. Click the Red Hat OpenShift Logging Operator . Click Subscription → Channel . In the Change Subscription Update Channel window, select stable-5.x and click Save . Wait for a few seconds, then click Operators → Installed Operators . Verify that the Red Hat OpenShift Logging Operator version is 5.y.z. Wait for the Status field to report Succeeded .
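Optionally, you can verify the installed Operator versions from the command line instead of the web console by listing the ClusterServiceVersions in the two projects used above; this is an illustrative check rather than part of the formal procedure:
$ oc get csv -n openshift-operators-redhat
$ oc get csv -n openshift-logging
The VERSION column shows the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator versions, and the PHASE column should report Succeeded.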
Check the logging components: Ensure that all Elasticsearch pods are in the Ready status: $ oc get pod -n openshift-logging --selector component=elasticsearch Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m Ensure that the Elasticsearch cluster is healthy: $ oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health { "cluster_name" : "elasticsearch", "status" : "green", } Ensure that the Elasticsearch cron jobs are created: $ oc project openshift-logging $ oc get cronjob NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s Verify that the log store is updated to 5.x and the indices are green : $ oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices Verify that the output includes the app-00000x , infra-00000x , audit-00000x , .security indices. Example 14.1. Sample output with indices in a green status Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0 Verify that the log collector is updated: $ oc get ds collector -o json | grep collector Verify that the output includes a collector container: "containerName": "collector" Verify that the log visualizer is updated to 5.x using the Kibana CRD: $ oc get kibana kibana -o json Verify that the output includes a Kibana pod with the ready status: Example 14.2.
Sample output with a ready Kibana pod [ { "clusterCondition": { "kibana-5fdd766ffd-nb2jj": [ { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" }, { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" } ] }, "deployment": "kibana", "pods": { "failed": [], "notReady": [], "ready": [] }, "replicaSets": [ "kibana-5fdd766ffd" ], "replicas": 1 } ]
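If Kibana does not become available after the update, for example because the Operators were updated in the wrong order, apply the workaround described at the beginning of this chapter: delete the Red Hat OpenShift Logging Operator pod so that it redeploys and creates the Kibana CR. A minimal sketch of the commands follows; the exact pod name is generated and differs in each cluster:
$ oc get pods -n openshift-logging | grep cluster-logging-operator
$ oc delete pod -n openshift-logging <cluster_logging_operator_pod_name>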
[ "oc get pod -n openshift-logging --selector component=elasticsearch", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m", "oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }", "oc project openshift-logging", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s", "oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices", "Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0", "oc get ds collector -o json | grep collector", "\"containerName\": \"collector\"", "oc get kibana kibana -o json", "[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-upgrading
Chapter 4. Server and rack solutions
Chapter 4. Server and rack solutions Hardware vendors have responded to the enthusiasm around Ceph by providing both optimized server-level and rack-level solution SKUs. Validated through joint testing with Red Hat, these solutions offer predictable price-to-performance ratios for Ceph deployments, with a convenient modular approach to expand Ceph storage for specific workloads. Typical rack-level solutions include: Network switching: Redundant network switching interconnects the cluster and provides access to clients. A minimum of one network switch is recommended. For redundancy purposes, two network switches are used, with each switch partitioned to support two network segments. Ceph MON nodes: The Ceph monitor is a datastore for the health of the entire cluster, and contains the cluster log. A minimum of three monitor nodes are strongly recommended for a cluster quorum in production. Ceph OSD hosts: Ceph OSD hosts house the storage capacity for the cluster, with one or more OSDs running per individual storage device if the device is an HDD or SSD. For NVMe devices, Red Hat recommends running two or more OSDs per individual storage device. OSD hosts are selected and configured differently depending on both workload optimization and the data devices installed: HDDs, SSDs, or NVMe SSDs. Red Hat Ceph Storage: Many vendors provide a capacity-based subscription for Red Hat Ceph Storage bundled with both server and rack-level solution SKUs. Note Red Hat recommends reviewing the Red Hat Ceph Storage: Supported Configurations article prior to committing to any server and rack solution. Contact Red Hat support for any additional assistance. IOPS-optimized solutions With the growing use of flash storage, organizations increasingly host IOPS-intensive workloads on Ceph storage clusters to let them emulate high-performance public cloud solutions with private cloud storage. These workloads commonly involve structured data from MySQL-, MariaDB-, or PostgreSQL-based applications. Typical servers include the following elements: CPU: 6 cores per NVMe SSD, assuming a 2 GHz CPU. RAM: 16 GB baseline, plus 5 GB per OSD. Networking: 10 Gigabit Ethernet (GbE) per 2 OSDs. OSD media: High-performance, high-endurance enterprise NVMe SSDs. OSDs: Two per NVMe SSD. Bluestore WAL/DB: High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs. Controller: Native PCIe bus. Note For non-NVMe SSDs, use two CPU cores per SSD OSD. Table 4.1. Solutions SKUs for IOPS-optimized Ceph Workloads, by cluster size. Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] SYS-5038MR-OSD006P N/A N/A [a] See Supermicro(R) Total Solution for Ceph for details. Throughput-optimized Solutions Throughput-optimized Ceph solutions are usually centered around semi-structured or unstructured data. Large-block sequential I/O is typical. Storage media on OSD hosts are commonly HDDs with write journals on SSD-based volumes. Typical server elements include: CPU: 0.5 cores per HDD, assuming a 2 GHz CPU. RAM: 16 GB baseline, plus 5 GB per OSD. Networking: 10 GbE per 12 OSDs, each for client- and cluster-facing networks. OSD media: 7,200 RPM enterprise HDDs. OSDs: One per HDD. Bluestore WAL/DB: High-endurance, high-performance enterprise serial-attached SCSI (SAS) or NVMe SSDs. Host bus adapter (HBA): Just a bunch of disks (JBOD). Several vendors provide pre-configured server and rack-level solutions for throughput-optimized Ceph workloads.
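As a quick worked example of applying these sizing guidelines (an illustration only, not a vendor specification): a throughput-optimized OSD host with 12 × 7,200 RPM HDDs runs 12 OSDs and therefore calls for roughly 12 × 0.5 = 6 CPU cores at 2 GHz, 16 + (12 × 5) = 76 GB of RAM, and one 10 GbE interface each for the client-facing and cluster-facing networks.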
Red Hat has conducted extensive testing and evaluation of servers from Supermicro and Quanta Cloud Technologies (QCT). Table 4.2. Rack-level SKUs for Ceph OSDs, MONs, and top-of-rack (TOR) switches. Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] SRS-42E112-Ceph-03 SRS-42E136-Ceph-03 SRS-42E136-Ceph-03 Table 4.3. Individual OSD Servers Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] SSG-6028R-OSD072P SSG-6048-OSD216P SSG-6048-OSD216P QCT [a] QxStor RCT-200 QxStor RCT-400 QxStor RCT-400 [a] See QCT: QxStor Red Hat Ceph Storage Edition for details. Table 4.4. Additional Servers Configurable for Throughput-optmized Ceph OSD Workloads. Vendor Small (250TB) Medium (1PB) Large (2PB+) Dell PowerEdge R730XD [a] DSS 7000 [b] , twin node DSS 7000, twin node Cisco UCS C240 M4 UCS C3260 [c] UCS C3260 [d] Lenovo System x3650 M5 System x3650 M5 N/A [a] See Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat Ceph Storage - A Dell Red Hat Technical White Paper for details. [b] See Dell EMC DSS 7000 Performance & Sizing Guide for Red Hat Ceph Storage for details. [c] See Red Hat Ceph Storage hardware reference architecture for details. [d] See UCS C3260 for details Cost and capacity-optimized solutions Cost- and capacity-optimized solutions typically focus on higher capacity, or longer archival scenarios. Data can be either semi-structured or unstructured. Workloads include media archives, big data analytics archives, and machine image backups. Large-block sequential I/O is typical. For greater cost effectiveness, OSDs are usually hosted on HDDs with Ceph write journals co-located on the HDDs. Solutions typically include the following elements: CPU: 0.5 cores per HDD, assuming a 2 GHz CPU. RAM: 16 GB baseline, plus 5 GB per OSD. Networking: 10 GbE per 12 OSDs (each for client- and cluster-facing networks). OSD media: 7,200 RPM enterprise HDDs. OSDs: One per HDD. Bluestore WAL/DB: Co-located on the HDD. HBA: JBOD. Supermicro and QCT provide pre-configured server and rack-level solution SKUs for cost- and capacity-focused Ceph workloads. Table 4.5. Pre-configured Rack-level SKUs for Cost- and Capacity-optimized Workloads Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] N/A SRS-42E136-Ceph-03 SRS-42E172-Ceph-03 Table 4.6. Pre-configured Server-level SKUs for Cost- and Capacity-optimized Workloads Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] N/A SSG-6048R-OSD216P [a] SSD-6048R-OSD360P QCT N/A QxStor RCC-400 [a] QxStor RCC-400 [a] [a] See Supermicro's Total Solution for Ceph for details. Table 4.7. Additional Servers Configurable for Cost- and Capacity-optimized Workloads Vendor Small (250TB) Medium (1PB) Large (2PB+) Dell N/A DSS 7000, twin node DSS 7000, twin node Cisco N/A UCS C3260 UCS C3260 Lenovo N/A System x3650 M5 N/A Additional Resources Red Hat Ceph Storage on Samsung NVMe SSDs Deploying MySQL Databases on Red Hat Ceph Storage Intel(R) Data Center Blocks for Cloud - Red Hat OpenStack Platform with Red Hat Ceph Storage Red Hat Ceph Storage on QCT Servers Red Hat Ceph Storage on Servers with Intel Processors and SSDs Red Hat Data Services Solutions
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/hardware_guide/server-and-rack-solutions_hw
Appendix B. Contact information
Appendix B. Contact information Red Hat Decision Manager documentation team: [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/author-group
Chapter 2. Configuring a GCP project
Chapter 2. Configuring a GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 2.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 2.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 2.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 2.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 2.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 2.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 2.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Compute Global 11 1 Forwarding rules Compute Global 2 0 In-use global IP addresses Compute Global 4 1 Health checks Compute Global 3 0 Images Compute Global 1 0 Networks Compute Global 2 0 Static IP addresses Compute Region 4 1 Routers Compute Global 1 0 Routes Compute Global 2 0 Subnetworks Compute Global 2 0 Target pools Compute Global 3 0 CPUs Compute Region 28 4 Persistent disk SSD (GB) Compute Region 896 128 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 2.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. 
See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. Additional resources See Manually creating IAM for more details about using manual credentials mode. 2.5.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer The roles are applied to the service accounts that the control plane and compute machines use: Table 2.4. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 2.5.2. Required GCP permissions for installer-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. For more information, see "Required roles for using passthrough credentials mode" in the "Required GCP roles" section. Example 2.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 2.2. 
Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 2.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list Example 2.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 2.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 2.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 2.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 2.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 2.10. Required IAM permissions for installation iam.roles.get Example 2.11. Optional Images permissions for installation compute.images.list Example 2.12. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 2.13. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 2.14. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 2.15. 
Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 2.16. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.17. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 2.18. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 2.19. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 2.20. Required Images permissions for deletion compute.images.list 2.6. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 2.7. steps Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options.
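Before you install, you can also confirm from the CLI that the region you chose from the preceding list has enough quota headroom; the CPUS, STATIC_ADDRESSES, and SSD_TOTAL_GB metrics are the ones that a default cluster most commonly exhausts. The following is a sketch only, and us-central1 is an example region:
# Print the limit and current usage of the quotas that a default cluster is most likely to exhaust.
$ gcloud compute regions describe us-central1 | grep -B1 -A1 -E "metric: (CPUS|STATIC_ADDRESSES|SSD_TOTAL_GB)$"
If a limit is too low, request an increase from the GCP console as described in "GCP account limits".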
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_gcp/installing-gcp-account
Chapter 4. Post-installation configuration
Chapter 4. Post-installation configuration 4.1. Postinstallation configuration The following procedures are typically performed after OpenShift Virtualization is installed. You can configure the components that are relevant for your environment: Node placement rules for OpenShift Virtualization Operators, workloads, and controllers Network configuration : Enabling the creation of load balancer services by using the Red Hat OpenShift Service on AWS web console Storage configuration : Defining a default storage class for the Container Storage Interface (CSI) Configuring local storage by using the Hostpath Provisioner (HPP) 4.2. Specifying nodes for OpenShift Virtualization components The default scheduling for virtual machines (VMs) on bare metal nodes is appropriate. Optionally, you can specify the nodes where you want to deploy OpenShift Virtualization Operators, workloads, and controllers by configuring node placement rules. Note You can configure node placement rules for some components after installing OpenShift Virtualization, but virtual machines cannot be present if you want to configure node placement rules for workloads. 4.2.1. About node placement rules for OpenShift Virtualization components You can use node placement rules for the following tasks: Deploy virtual machines only on nodes intended for virtualization workloads. Deploy Operators only on infrastructure nodes. Maintain separation between workloads. Depending on the object, you can use one or more of the following rule types: nodeSelector Allows pods to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with pods. Affinity also allows for more nuance in how the rules are applied. For example, you can specify that a rule is a preference, not a requirement. If a rule is a preference, pods are still scheduled when the rule is not satisfied. tolerations Allows pods to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts pods that tolerate the taint. 4.2.2. Applying node placement rules You can apply node placement rules by editing a HyperConverged or HostPathProvisioner object using the command line. Prerequisites The oc CLI tool is installed. You are logged in with cluster administrator permissions. Procedure Edit the object in your default editor by running the following command: $ oc edit <resource_type> <resource_name> -n {CNVNamespace} Save the file to apply the changes. 4.2.3. Node placement rule examples You can specify node placement rules for an OpenShift Virtualization component by editing a HyperConverged or HostPathProvisioner object. 4.2.3.1. HyperConverged object node placement rule example To specify the nodes where OpenShift Virtualization deploys its components, you can edit the nodePlacement object in the HyperConverged custom resource (CR) file that you create during OpenShift Virtualization installation.
Example HyperConverged object with nodeSelector rule apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2 1 Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-infra-value . 2 workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value . Example HyperConverged object with affinity rule apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3 1 Infrastructure resources are placed on nodes labeled example.io/example-infra-key = example-value . 2 workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value . 3 Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled. Example HyperConverged object with tolerations rule apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" 1 Nodes reserved for OpenShift Virtualization components are labeled with the key = virtualization:NoSchedule taint. Only pods with matching tolerations are scheduled on reserved nodes. 4.2.3.2. HostPathProvisioner object node placement rule example You can edit the HostPathProvisioner object directly or by using the web console. Warning You must schedule the hostpath provisioner and the OpenShift Virtualization components on the same nodes. Otherwise, virtualization pods that use the hostpath provisioner cannot run. You cannot run virtual machines. After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) storage class, you can remove the hostpath provisioner pod from the same node by using the node selector. However, you must first revert that change, at least for that specific node, and wait for the pod to run before trying to delete the VM. You can configure node placement rules by specifying nodeSelector , affinity , or tolerations for the spec.workload field of the HostPathProvisioner object that you create when you install the hostpath provisioner. Example HostPathProvisioner object with nodeSelector rule apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: "</path/to/backing/directory>" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1 1 Workloads are placed on nodes labeled example.io/example-workloads-key = example-workloads-value . 4.2.4. 
Additional resources Specifying nodes for virtual machines Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules 4.3. Postinstallation network configuration By default, OpenShift Virtualization is installed with a single, internal pod network. 4.3.1. Installing networking Operators 4.3.2. Configuring a Linux bridge network After you install the Kubernetes NMState Operator, you can configure a Linux bridge network for live migration or external access to virtual machines (VMs). 4.3.2.1. Creating a Linux bridge NNCP You can create a NodeNetworkConfigurationPolicy (NNCP) manifest for a Linux bridge network. Prerequisites You have installed the Kubernetes NMState Operator. Procedure Create the NodeNetworkConfigurationPolicy manifest. This example includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8 1 Name of the policy. 2 Name of the interface. 3 Optional: Human-readable description of the interface. 4 The type of interface. This example creates a bridge. 5 The requested state for the interface after creation. 6 Disables IPv4 in this example. 7 Disables STP in this example. 8 The node NIC to which the bridge is attached. 4.3.2.2. Creating a Linux bridge NAD by using the web console You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the Red Hat OpenShift Service on AWS web console. A Linux bridge network attachment definition is the most efficient method for connecting a virtual machine to a VLAN. Warning Configuring IP address management (IPAM) in a network attachment definition for virtual machines is not supported. Procedure In the web console, click Networking NetworkAttachmentDefinitions . Click Create Network Attachment Definition . Note The network attachment definition must be in the same namespace as the pod or virtual machine. Enter a unique Name and optional Description . Select CNV Linux bridge from the Network Type list. Enter the name of the bridge in the Bridge Name field. Optional: If the resource has VLAN IDs configured, enter the ID numbers in the VLAN Tag Number field. Optional: Select MAC Spoof Check to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. Click Create . 4.3.3. Configuring a network for live migration After you have configured a Linux bridge network, you can configure a dedicated network for live migration. A dedicated network minimizes the effects of network saturation on tenant workloads during live migration. 4.3.3.1. Configuring a dedicated secondary network for live migration To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. Then, you add the name of the NetworkAttachmentDefinition object to the HyperConverged custom resource (CR). Prerequisites You installed the OpenShift CLI ( oc ). You logged in to the cluster as a user with the cluster-admin role. Each node has at least two Network Interface Cards (NICs). The NICs for live migration are connected to the same VLAN. 
Procedure Create a NetworkAttachmentDefinition manifest according to the following example: Example configuration file apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ "cniVersion": "0.3.1", "name": "migration-bridge", "type": "macvlan", "master": "eth1", 2 "mode": "bridge", "ipam": { "type": "whereabouts", 3 "range": "10.200.5.0/24" 4 } }' 1 Specify the name of the NetworkAttachmentDefinition object. 2 Specify the name of the NIC to be used for live migration. 3 Specify the name of the CNI plugin that provides the network for the NAD. 4 Specify an IP address range for the secondary network. This range must not overlap the IP addresses of the main network. Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the name of the NetworkAttachmentDefinition object to the spec.liveMigrationConfig stanza of the HyperConverged CR: Example HyperConverged manifest apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150 # ... 1 Specify the name of the Multus NetworkAttachmentDefinition object to be used for live migrations. Save your changes and exit the editor. The virt-handler pods restart and connect to the secondary network. Verification When the node that the virtual machine runs on is placed into maintenance mode, the VM automatically migrates to another node in the cluster. You can verify that the migration occurred over the secondary network and not the default pod network by checking the target IP address in the virtual machine instance (VMI) metadata. USD oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}' 4.3.3.2. Selecting a dedicated network by using the web console You can select a dedicated network for live migration by using the Red Hat OpenShift Service on AWS web console. Prerequisites You configured a Multus network for live migration. You created a network attachment definition for the network. Procedure Navigate to Virtualization > Overview in the Red Hat OpenShift Service on AWS web console. Click the Settings tab and then click Live migration . Select the network from the Live migration network list. 4.3.4. Enabling load balancer service creation by using the web console You can enable the creation of load balancer services for a virtual machine (VM) by using the Red Hat OpenShift Service on AWS web console. Prerequisites You have configured a load balancer for the cluster. You are logged in as a user with the cluster-admin role. You created a network attachment definition for the network. Procedure Navigate to Virtualization Overview . On the Settings tab, click Cluster . Expand General settings and SSH configuration . Set SSH over LoadBalancer service to on. 4.4. Postinstallation storage configuration The following storage configuration tasks are mandatory: You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class. Optional: You can configure local storage by using the hostpath provisioner (HPP). 
See the storage configuration overview for more options, including configuring the Containerized Data Importer (CDI), data volumes, and automatic boot source updates. 4.4.1. Configuring local storage by using the HPP When you install the OpenShift Virtualization Operator, the Hostpath Provisioner (HPP) Operator is automatically installed. The HPP Operator creates the HPP provisioner. The HPP is a local storage provisioner designed for OpenShift Virtualization. To use the HPP, you must create an HPP custom resource (CR). Important HPP storage pools must not be in the same partition as the operating system. Otherwise, the storage pools might fill the operating system partition. If the operating system partition is full, performance can be affected or the node can become unstable or unusable. 4.4.1.1. Creating a storage class for the CSI driver with the storagePools stanza To use the hostpath provisioner (HPP), you must create an associated storage class for the Container Storage Interface (CSI) driver. When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using a StorageClass object with the volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. Prerequisites Log in as a user with cluster-admin privileges. Procedure Create a storageclass_csi.yaml file to define the storage class: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3 1 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the default value is Delete . 2 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. 3 Specify the name of the storage pool defined in the HPP CR. Save the file and exit. Create the StorageClass object by running the following command: $ oc create -f storageclass_csi.yaml 4.5. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 4.5.1. Configuring certificate rotation You can configure certificate rotation during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: $ oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format .
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 4.5.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed.
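One way to review the values that are currently set is to query the HyperConverged CR directly; for example:
# Print the current certConfig stanza of the HyperConverged CR.
$ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.certConfig}'
The command prints the ca and server duration and renewBefore values; confirm that each renewBefore value is less than or equal to its corresponding duration before you apply further changes.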
[ "oc edit <resource_type> <resource_name> -n {CNVNamespace}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: 
admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/post-installation-configuration
Chapter 6. Deploying a single logical service across many sites for failover
Chapter 6. Deploying a single logical service across many sites for failover A typical scenario for using Service Interconnect is to deploy a server process on two sites with the intention that if one site fails, the other site seamlessly processes any further requests. In this scenario, the primary server responds to all requests while that server is available, and traffic is only directed to the secondary server when the primary server is not available. The procedure describes two servers; however, the same technique works for more than two servers. Prerequisites Two or more unlinked sites. A basic understanding of Service Interconnect and its networking model. Procedure Create sites by using skupper init . Deploy your servers on different sites. Generate a token on the first site: $ skupper token create token.yaml This file contains a key and the location of the site that created it. Note Access to this file provides access to the service network. Protect it appropriately. Use the token on the cluster that you want to connect from to create a link to the first site: $ skupper link create token.yaml --cost 99999 The high cost setting means that traffic is not directed to this site under normal circumstances. However, if there is no other server available, all traffic is directed to this site. Expose the servers on the service network for both sites. Create the service: $ skupper service create <name> <port> where <name> is the name of the service you want to create. <port> is the port the service uses. By default, this service is now visible on both sites, although there is no server available to process requests to this service. Note By default, if you create a service on one site, it is available on all sites. However, if enable-service-sync is set to false , you need to create the service on both sites. Bind the service to the server on both sites: $ skupper service bind <service-name> <target-type> <target-name> where <service-name> is the name of the service on the service network. <target-type> is the object you want to expose: deployment , statefulset , pods , or service . <target-name> is the name of the cluster service. For example: $ skupper service bind hello-world-backend deployment hello-world-backend You can use the console to check the traffic flow or monitor the services using your tooling. Clients can connect to either site, and the server on that site processes the requests until the server is not available. Further requests are processed by the server on the other site. If the server on the original site becomes available, it processes all further requests. However, existing TCP connections to the secondary or backup server persist until those TCP connections are closed.
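Putting the procedure together, the following sketch shows the commands for a hello-world-backend deployment that exists on both sites. The kubectl context names site-a and site-b and the port 8080 are assumptions for illustration only:
# On the first (primary) site:
$ kubectl config use-context site-a
$ skupper init
$ skupper token create token.yaml
$ skupper service create hello-world-backend 8080
$ skupper service bind hello-world-backend deployment hello-world-backend
# On the second (backup) site, use the token generated on the first site to create the link. The high cost means traffic only crosses the link when no closer server is available:
$ kubectl config use-context site-b
$ skupper init
$ skupper link create token.yaml --cost 99999
$ skupper link status
$ skupper service bind hello-world-backend deployment hello-world-backend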
[ "skupper token create token.yaml", "skupper link create token.yaml --cost 99999", "skupper service create <name> <port>", "skupper service bind <service-name> <target-type> <target-name>", "skupper service bind hello-world-backend deployment hello-world-backend" ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/deploying-single-logical-service
A.2. ethtool
A.2. ethtool The ethtool utility allows administrators to view and edit network interface card settings. It is useful for observing the statistics of a device, such as the number of packets that the device has dropped. The ethtool utility, its options, and its usage are comprehensively documented on the man page.
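For example, assuming a device named eth0 (substitute your own interface name), the following commands show common read-only queries:
# Show link settings such as speed, duplex, and auto-negotiation.
$ ethtool eth0
# Show driver and firmware information for the device.
$ ethtool -i eth0
# Show NIC- and driver-specific statistics, including drop counters; the exact counter names vary by driver.
$ ethtool -S eth0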
[ "man ethtool" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-ethtool
Chapter 15. Managing vulnerabilities
Chapter 15. Managing vulnerabilities 15.1. Vulnerability management overview Security vulnerabilities in your environment might be exploited by an attacker to perform unauthorized actions such as carrying out a denial of service attack, executing remote code, or gaining unauthorized access to sensitive data. Therefore, the management of vulnerabilities is a foundational step towards a successful Kubernetes security program. 15.1.1. Vulnerability management process Vulnerability management is a continuous process to identify and remediate vulnerabilities. Red Hat Advanced Cluster Security for Kubernetes helps you to facilitate a vulnerability management process. A successful vulnerability management program often includes the following critical tasks: Performing asset assessment Prioritizing the vulnerabilities Assessing the exposure Taking action Continuously reassessing assets Red Hat Advanced Cluster Security for Kubernetes helps organizations to perform continuous assessments on their OpenShift Container Platform and Kubernetes clusters. It provides organizations with the contextual information they need to prioritize and act on vulnerabilities in their environment more effectively. 15.1.1.1. Performing asset assessment Performing an assessment of an organization's assets involve the following actions: Identifying the assets in your environment Scanning these assets to identify known vulnerabilities Reporting on the vulnerabilities in your environment to impacted stakeholders When you install Red Hat Advanced Cluster Security for Kubernetes on your Kubernetes or OpenShift Container Platform cluster, it first aggregates the assets running inside of your cluster to help you identify those assets. RHACS allows organizations to perform continuous assessments on their OpenShift Container Platform and Kubernetes clusters. RHACS provides organizations with the contextual information to prioritize and act on vulnerabilities in their environment more effectively. Important assets that should be monitored by the organization's vulnerability management process using RHACS include: Components : Components are software packages that may be used as part of an image or run on a node. Components are the lowest level where vulnerabilities are present. Therefore, organizations must upgrade, modify or remove software components in some way to remediate vulnerabilities. Images : A collection of software components and code that create an environment to run an executable portion of code. Images are where you upgrade components to fix vulnerabilities. Nodes : A server used to manage and run applications using OpenShift or Kubernetes and the components that make up the OpenShift Container Platform or Kubernetes service. RHACS groups these assets into the following structures: Deployment : A definition of an application in Kubernetes that may run pods with containers based on one or many images. Namespace : A grouping of resources such as Deployments that support and isolate an application. Cluster : A group of nodes used to run applications using OpenShift or Kubernetes. RHACS scans the assets for known vulnerabilities and uses the Common Vulnerabilities and Exposures (CVE) data to assess the impact of a known vulnerability. 15.1.1.2. Prioritizing the vulnerabilities Answer the following questions to prioritize the vulnerabilities in your environment for action and investigation: How important is an affected asset for your organization? How severe does a vulnerability need to be for investigation? 
Can the vulnerability be fixed by a patch for the affected software component? Does the existence of the vulnerability violate any of your organization's security policies? The answers to these questions help security and development teams decide if they want to gauge the exposure of a vulnerability. Red Hat Advanced Cluster Security for Kubernetes provides you the means to facilitate the prioritization of the vulnerabilities in your applications and components. You can use data reported by RHACS to decide which vulnerabilities are critical to address. For example, when viewing vulnerability findings by CVE, you might want to consider the following data that RHACS provides and that you can use to sort and prioritize vulnerabilities: CVE severity: RHACS reports the number of images that are affected by the CVE and its severity rating (for example, low, moderate, important, or critical) from Red Hat Product Security. Top CVSS: The highest Common Vulnerability Scoring System (CVSS) score, from data gathered from Red Hat and vendor sources, of this CVE across images. Top NVD CVSS: The highest CVSS score, from the National Vulnerability Database, of this CVE across images. You must have Scanner V4 enabled to view this data. EPSS probability: The likelihood that the vulnerability will be exploited according to the Exploit Prediction Scoring System (EPSS) . This EPSS data provides a percentage estimate of the probability that exploitation of this vulnerability will be observed in the 30 days. The EPSS collects data of observed exploitation activity from partners, and exploitation activity does not mean that an attempted exploitation was successful. The EPSS score should be used as a single data point along with other information , such as the age of the CVE, to help you prioritize the vulnerabilities to address. For more information, see RHACS and EPSS . 15.1.1.3. Assessing the exposure To assess your exposure to a vulnerability, answer the following questions: Is your application impacted by a vulnerability? Is the vulnerability mitigated by some other factor? Are there any known threats that could lead to the exploitation of this vulnerability? Are you using the software package which has the vulnerability? Is spending time on a specific vulnerability and the software package worth it? Take some of the following actions based on your assessment: Consider marking the vulnerability as a false positive if you determine that there is no exposure or that the vulnerability does not apply in your environment. Consider if you would prefer to remediate, mitigate or accept the risk if you are exposed. Consider if you want to remove or change the software package to reduce your attack surface. 15.1.1.4. Taking action Once you have decided to take action on a vulnerability, you can take one of the following actions: Remediate the vulnerability Mitigate and accept the risk Accept the risk Mark the vulnerability as a false positive You can remediate vulnerabilities by performing one of the following actions: Remove a software package Update a software package to a non-vulnerable version 15.2. Viewing and addressing vulnerabilities The Vulnerability Management functions provide methods to view and manage vulnerabilities discovered by RHACS. Common vulnerability management tasks involve identifying and prioritizing vulnerabilities, remedying them, and monitoring for new threats. Historically, RHACS provided a view of vulnerabilities discovered in your system in the vulnerability management dashboard. 
The dashboard is deprecated in RHACS 4.5 and will be removed in a future release. For more information about the dashboard, see Using the vulnerability management dashboard . Currently, vulnerability information is provided in pages that are accessed by selecting Vulnerability Management Results . You can select different views based on whether you want to view vulnerabilities discovered in your workloads, vulnerabilities discovered in platform components, such as OpenShift, or node vulnerabilities. Depending on the view, you can filter results based on specific criteria: for example, you can display vulnerabilities of different severity, vulnerabilities in deployments with specific annotations, or vulnerabilities in images that are based on a specific operating system. 15.2.1. Viewing vulnerability management data in the RHACS portal Beginning with release 4.7, RHACS has reorganized data for vulnerabilities it discovers and separated vulnerability data by category, such as vulnerabilities in user workloads and nodes, and platform vulnerabilities. In the Vulnerability Management menu, the Results page provides vulnerability data. You can view vulnerability data by category by clicking the tabs at the top of the page. The tabs include the following categories: User workloads This tab provides information about vulnerabilities that affect workloads and images in your system that you have deployed. Because these workloads are deployed and managed by you, they are called user workloads . Platform This tab provides information about vulnerabilities that RHACS identifies as related to the platform , for example, vulnerabilities in workloads and images that the OpenShift platform and layered services deploy. RHACS uses a regular expression pattern to examine the namespaces of workloads and identify workloads that belong to platform components. For example, currently, RHACS identifies vulnerabilities in the following namespaces as belonging to the platform: OpenShift Container Platform: Namespace starts with openshift- or kube- Layered products: Namespace starts with rhacs-operator Namespace starts with open-cluster-management Namespace is stackrox , multicluster-engine , aap , or hive Third-party partners: Namespace is nvidia-gpu-operator Nodes This tab provides a view of vulnerabilities across nodes, including user-managed and platform workloads and images. More views This menu provides access to additional ways to view vulnerability information, including the following views: All vulnerable images Inactive images Images without CVEs Kubernetes components 15.2.2. Viewing user workload vulnerabilities In the Vulnerability Management Results page, you can get information about the vulnerabilities in applications running on clusters in your system. With this information, you can prioritize and manage vulnerabilities across images and deployments. In the User workload vulnerabilities page, you can view images and deployments with vulnerabilities and filter by image, deployment, namespace, cluster, CVE, component, and component source. Procedure In the RHACS portal, go to Vulnerability Management Results . Select the User Workloads tab. By default, the Observed tab is selected. Optional: You can choose to view observed vulnerabilities or those that have been deferred or marked as false positives. Click one of the following tabs: Observed : Lists vulnerabilities that RHACS observed in your user workloads. 
Deferred : Lists vulnerabilities that have been observed but had a deferral request submitted and approved in the exception management workflow. False positives : Lists vulnerabilities that have been observed but were identified as false positives in the exception management workflow. Optional: You can select the following options to refine the list of results: Prioritize by namespace view : Displays a list of namespaces sorted according to the risk priority. You can use this view to quickly identify and address the most critical areas. In this view, click <number> deployments in a table row to return to the vulnerability findings view, with filters applied to show only deployments for the selected namespace. Default filters : You can select filters for CVE severity and CVE status that are automatically applied across all views on this page. These filters are applied when you visit the page from another section of the RHACS web portal or from a bookmarked URL. They are saved in the local storage of your browser. To filter the list of results by entity, for example, to search for a specific named CVE, select the appropriate filters and attributes. To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. Table 15.1. Filter options Entity Attributes Image Name : The name of the image. Operating system : The operating system of the image. Tag : The tag for the image. Label : The label for the image. Registry : The registry where the image is located. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than EPSS probability : The likelihood that the vulnerability will be exploited according to the Exploit Prediction Scoring System (EPSS) . This EPSS data provides a percentage estimate of the probability that exploitation of this vulnerability will be observed in the 30 days. The EPSS collects data of observed exploitation activity from partners, and exploitation activity does not mean that an attempted exploitation was successful. The EPSS score should be used as a single data point along with other information , such as the age of the CVE, to help you prioritize the vulnerabilities to address. For more information, see RHACS and EPSS . Image Component Name : The name of the image component, for example, activerecord-sql-server-adapter Source : OS Python Java Ruby Node.js Go Dotnet Core Runtime Infrastructure Version : Version of the image component; for example, 3.4.21 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Deployment Name : Name of the deployment. Label : Label for the deployment. Annotation : The annotation for the deployment. Status : Whether the deployment is inactive or active. Namespace ID : The metadata.uid of the namespace that is created by Kubernetes. Name : The name of the namespace. Label : The label for the namespace. 
Annotation : The annotation for the namespace. Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. CVE severity : You can select one or more levels. CVE status : You can select Fixable or Not fixable . Click one of the following tabs to view the data that you want: <number> CVEs : Displays vulnerabilities organized by CVE <number> Images : Displays images that contain discovered vulnerabilities. <number> Deployments : Displays deployments that contain discovered vulnerabilities. Optional: Choose the appropriate method to re-organize the information in the User Workloads tab: To sort the table in ascending or descending order, select a column heading. To select the categories that you want to display in the table, perform the following steps: Click Columns . Choose the appropriate method to manage the columns: To view all the categories, click Select all . To reset to the default categories, click Reset to default . To view only the selected categories, select the one or more categories that you want to view, and then click Save . In the list of results, click a CVE, image name, or deployment name to view more information about the item. For example, depending on the item type, you can view the following information: Whether a CVE is fixable Whether an image is active The Dockerfile line in the image that contains the CVE External links to information about the CVE in Red Hat and other CVE databases 15.2.3. Viewing platform vulnerabilities The Platform vulnerabilities page provides information about vulnerabilities that RHACS identifies as related to the platform , for example, vulnerabilities in workloads and images that are used by the OpenShift Platform and layered services. Procedure In the RHACS portal, go to Vulnerability Management Results . Select the Platform tab. By default, the Observed tab is selected. Optional: You can choose to view observed vulnerabilities or those that have been deferred or marked as false positives. Click one of the following tabs: Observed : Lists vulnerabilities that RHACS observed in platform workloads and images. Deferred : Lists vulnerabilities that have been observed but had a deferral request submitted and approved in the exception management workflow. False positives : Lists vulnerabilities that have been observed but were identified as false positives in the exception management workflow. Optional: You can select the following options to refine the list of results: Prioritize by namespace view : Displays a list of namespaces sorted according to the risk priority. You can use this view to quickly identify and address the most critical areas. In this view, click <number> deployments in a table row to return to the platform vulnerabilities view, with filters applied to show only deployments for the selected namespace. Default filters : You can select filters for CVE severity and CVE status that are automatically applied across all views on this page. These filters are applied when you visit the page from another section of the RHACS web portal or from a bookmarked URL. They are saved in the local storage of your browser. To filter the list of results by entity, for example, to search for a specific named CVE, select the appropriate filters and attributes. 
To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. Table 15.2. Filter options Entity Attributes Image Name : The name of the image. Operating system : The operating system of the image. Tag : The tag for the image. Label : The label for the image. Registry : The registry where the image is located. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than EPSS probability : The likelihood that the vulnerability will be exploited according to the Exploit Prediction Scoring System (EPSS) . This EPSS data provides a percentage estimate of the probability that exploitation of this vulnerability will be observed in the 30 days. The EPSS collects data of observed exploitation activity from partners, and exploitation activity does not mean that an attempted exploitation was successful. The EPSS score should be used as a single data point along with other information , such as the age of the CVE, to help you prioritize the vulnerabilities to address. For more information, see RHACS and EPSS . Image Component Name : The name of the image component, for example, activerecord-sql-server-adapter Source : OS Python Java Ruby Node.js Go Dotnet Core Runtime Infrastructure Version : Version of the image component; for example, 3.4.21 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Deployment Name : Name of the deployment. Label : Label for the deployment. Annotation : The annotation for the deployment. Status : Whether the deployment is inactive or active. Namespace ID : The metadata.uid of the namespace that is created by Kubernetes. Name : The name of the namespace. Label : The label for the namespace. Annotation : The annotation for the namespace. Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. CVE severity : You can select one or more levels. CVE status : You can select Fixable or Not fixable . Click one of the following tabs to view the data that you want: <number> CVEs : Displays vulnerabilities organized by CVE <number> Images : Displays images that contain discovered vulnerabilities. <number> Deployments : Displays deployments that contain discovered vulnerabilities. Optional: Choose the appropriate method to re-organize the information in the User Workloads tab: To sort the table in ascending or descending order, select a column heading. To select the categories that you want to display in the table, perform the following steps: Click Columns . Choose the appropriate method to manage the columns: To view all the categories, click Select all . To reset to the default categories, click Reset to default . 
To view only the selected categories, select the one or more categories that you want to view, and then click Save . In the list of results, click a CVE, image name, or deployment name to view more information about the item. For example, depending on the item type, you can view the following information: Whether a CVE is fixable Whether an image is active The Dockerfile line in the image that contains the CVE External links to information about the CVE in Red Hat and other CVE databases 15.2.4. Viewing vulnerabilities in nodes You can identify vulnerabilities in your nodes by using RHACS. The vulnerabilities that are identified include the following: Vulnerabilities in core Kubernetes components Vulnerabilities in container runtimes such as Docker, CRI-O, runC, and containerd For more information about operating systems that RHACS can scan, see "Supported operating systems". RHACS currently supports scanning nodes with the StackRox scanner and Scanner V4. Depending on which scanner is configured, different results might appear in the list of vulnerabilities. For more information, see "Understanding differences in scanning results between the StackRox Scanner and Scanner V4". Procedure In the RHACS portal, go to Vulnerability Management Results . Select the Nodes tab. Optional: The page defaults to a list of observed CVEs. Click Show snoozed CVEs to view them. Optional: To filter CVEs according to entity, select the appropriate filters and attributes. To add more filtering criteria, follow these steps: Select the entity or attribute from the list. Depending on your choices, enter the appropriate information such as text, or select a date or object. Click the right arrow icon. Optional: Select additional entities and attributes, and then click the right arrow icon to add them. The filter entities and attributes are listed in the following table. Table 15.3. Filter options Entity Attributes Node Name : The name of the node. Operating system : The operating system of the node, for example, Red Hat Enterprise Linux (RHEL). Label : The label of the node. Annotation : The annotation for the node. Scan time : The scan date of the node. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than Node Component Name : The name of the component. Version : The version of the component, for example, 4.15.0-2024 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The type of cluster, for example, OCP. Platform type : The type of platform, for example, OpenShift 4 cluster. Optional: To refine the list of results, do any of the following tasks: Click CVE severity , and then select one or more levels. Click CVE status , and then select Fixable or Not fixable . To view the data, click one of the following tabs: <number> CVEs : Displays a list of all the CVEs affecting all of your nodes. <number> Nodes : Displays a list of nodes that contain CVEs. To view the details of the node and information about the CVEs according to the CVSS score and fixable CVEs for that node, click a node name in the list of nodes. 15.2.4.1. 
Disabling identifying vulnerabilities in nodes Identifying vulnerabilities in nodes is enabled by default. You can disable it from the RHACS portal. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under Image Integrations , select StackRox Scanner . From the list of scanners, select StackRox Scanner to view its details. Click Edit . To use only the image scanner and not the node scanner, click Image Scanner . Click Save . Additional resources Understanding differences in scanning results between the StackRox Scanner and Scanner V4 Supported operating systems 15.2.5. Accessing additional views in vulnerability management The More views tab provides additional ways to view vulnerabilities in your system, including the following views: All vulnerable images: Displays vulnerabilities for user workloads, platform vulnerabilities, and vulnerabilities for inactive images in the same page. Inactive images: Displays vulnerabilities for watched images and images that are not currently deployed as workloads. Vulnerabilities are reported for images based on your image retention settings. Images without CVEs: Shows images and workloads without observed CVEs. See "Analyze images and deployments without observed CVEs". Kubernetes components: Displays vulnerabilities affecting the underlying Kubernetes structure. 15.2.5.1. Viewing all vulnerable images You can view a list of vulnerabilities for user workloads, platform vulnerabilities, and inactive images on the same page. Procedure In the RHACS portal, go to Vulnerability Management Results . Click More Views and select All vulnerable images . Optional: You can choose to view observed vulnerabilities or those that have been deferred or marked as false positives. Click one of the following tabs: Observed : Lists vulnerabilities that RHACS observed across all images and workloads. Deferred : Lists vulnerabilities that have been observed but had a deferral request submitted and approved in the exception management workflow. False positives : Lists vulnerabilities that have been observed but were identified as false positives in the exception management workflow. Optional: You can select the following options to refine the list of results: Prioritize by namespace view : Displays a list of namespaces sorted according to the risk priority. You can use this view to quickly identify and address the most critical areas. In this view, click <number> deployments in a table row to return to the all vulnerable images view, with filters applied to show only deployments for the selected namespace. Default filters : You can select filters for CVE severity and CVE status that are automatically applied across all views on this page. These filters are applied when you visit the page from another section of the RHACS web portal or from a bookmarked URL. They are saved in the local storage of your browser. Click one of the following tabs to view the data that you want: <number> CVEs : Displays vulnerabilities organized by CVE <number> Images : Displays images that contain discovered vulnerabilities. <number> Deployments : Displays deployments that contain discovered vulnerabilities. Optional: Choose the appropriate method to re-organize the information in the User Workloads tab: To sort the table in ascending or descending order, select a column heading. To filter the table, use the filter bar. To select the categories that you want to display in the table, perform the following steps: Click Columns . 
Choose the appropriate method to manage the columns: To view all the categories, click Select all . To reset to the default categories, click Reset to default . To view only the selected categories, select the one or more categories that you want to view, and then click Save . To filter the list of results by entity, for example, to search for a specific named CVE, select the appropriate filters and attributes. To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Table 15.4. CVE filtering Entity Attributes Image Name : The name of the image. Operating system : The operating system of the image. Tag : The tag for the image. Label : The label for the image. Registry : The registry where the image is located. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than EPSS probability : The likelihood that the vulnerability will be exploited according to the Exploit Prediction Scoring System (EPSS) . This EPSS data provides a percentage estimate of the probability that exploitation of this vulnerability will be observed in the 30 days. The EPSS collects data of observed exploitation activity from partners, and exploitation activity does not mean that an attempted exploitation was successful. The EPSS score should be used as a single data point along with other information , such as the age of the CVE, to help you prioritize the vulnerabilities to address. For more information, see RHACS and EPSS . Image Component Name : The name of the image component, for example, activerecord-sql-server-adapter Source : OS Python Java Ruby Node.js Go Dotnet Core Runtime Infrastructure Version : Version of the image component; for example, 3.4.21 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Deployment Name : Name of the deployment. Label : Label for the deployment. Annotation : The annotation for the deployment. Status : Whether the deployment is inactive or active. Namespace ID : The metadata.uid of the namespace that is created by Kubernetes. Name : The name of the namespace. Label : The label for the namespace. Annotation : The annotation for the namespace. Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. CVE severity : You can select one or more levels. CVE status : You can select Fixable or Not fixable . Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. In the list of results, click a CVE, image name, or deployment name to view more information about the item. 
For example, depending on the item type, you can view the following information: Whether a CVE is fixable Whether an image is active The Dockerfile line in the image that contains the CVE External links to information about the CVE in Red Hat and other CVE databases 15.2.5.2. Scanning inactive images Red Hat Advanced Cluster Security for Kubernetes (RHACS) scans all active (deployed) images every 4 hours and updates the image scan results to reflect the latest vulnerability definitions. You can also configure RHACS to scan inactive (not deployed) images automatically. Procedure In the RHACS portal, click Vulnerability Management Results . Click More Views Inactive images . Click Manage watched images . In the Image name field, enter the fully-qualified image name that begins with the registry and ends with the image tag, for example, docker.io/library/nginx:latest . Click Add image to watch list . Optional: To remove a watched image, locate the image in the Manage watched images window, and click Remove watch . Important In the RHACS portal, click Platform Configuration System Configuration to view the data retention configuration. All the data related to the image removed from the watched image list continues to appear in the RHACS portal for the number of days mentioned on the System Configuration page and is only removed after that period is over. Click Close to return to the Inactive images page. 15.2.5.3. Analyze images and deployments without observed CVEs When you view the list of images without vulnerabilities, RHACS shows the images that meet at least one of the following conditions: Images that do not have CVEs Images that report a scanner error that may result in a false negative of no CVEs Note An image that actually contains vulnerabilities can appear in this list inadvertently. For example, if Scanner was able to scan the image and it is known to Red Hat Advanced Cluster Security for Kubernetes (RHACS), but the scan was not successfully completed, RHACS cannot detect vulnerabilities. This scenario occurs if an image has an operating system that RHACS Scanner does not support. RHACS displays scan errors when you hover over an image in the image list or click the image name for more information. Procedure In the RHACS portal, go to Vulnerability Management Results . Click More Views and select Images without CVEs . To filter the list of results by entity, for example, to search for a specific image, select the appropriate filters and attributes. To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. Table 15.5. Filter options Entity Attributes Image Name : The name of the image. Operating system : The operating system of the image. Tag : The tag for the image. Label : The label for the image. Registry : The registry where the image is located. Image Component Name : The name of the image component, for example, activerecord-sql-server-adapter Source : OS Python Java Ruby Node.js Go Dotnet Core Runtime Infrastructure Version : Version of the image component; for example, 3.4.21 . 
You can use this to search for a specific version of a component, for example, in conjunction with a component name. Deployment Name : Name of the deployment. Label : Label for the deployment. Annotation : The annotation for the deployment. Status : Whether the deployment is inactive or active. Namespace ID : The metadata.uid of the namespace that is created by Kubernetes. Name : The name of the namespace. Label : The label for the namespace. Annotation : The annotation for the namespace. Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. Click one of the following tabs to view the data that you want: <number> Images : Displays images that contain discovered vulnerabilities. <number> Deployments : Displays deployments that contain discovered vulnerabilities. Optional: Choose the appropriate method to re-organize the information in the page: To select the categories that you want to display in the table, perform the following steps: Click Columns . Choose the appropriate method to manage the columns: To view all the categories, click Select all . To reset to the default categories, click Reset to default . To view only the selected categories, select the one or more categories that you want to view, and then click Save . To sort the table in ascending or descending order, select a column heading. In the list of results, click an image name or deployment name to view more information about the item. 15.2.5.4. Viewing Kubernetes vulnerabilities You can view vulnerabilities in your clusters that affect the underlying Kubernetes structure. Procedure Go to Vulnerability Management Results . Click More Views and select Kubernetes components . Click the <number> CVEs or <number> Clusters to display by CVE or cluster. Optional: Within the results list, you can filter results by cluster and CVE. To filter vulnerabilities based on an entity, select the appropriate filters and attributes. To select multiple entities and attributes, click the right arrow icon to add another criteria. Depending on your choices, enter the appropriate information such as text, or select a date or object. The filter entities and attributes are listed in the following table. Table 15.6. Filter options Entity Attributes Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The cluster type, for example, OCP. Platform type : The platform type, for example, OpenShift 4 cluster. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. The following values are associated with the severity level for the CVE: is greater than is greater than or equal to is equal to is less than or equal to is less than Type : The type of CVE: Kubernetes Istio OpenShift Optional: To filter the table based on the status of a CVE, from the CVE status drop-down list, select one or more statuses. The following values are associated with the status of a CVE: Fixable Not fixable Note The Filtered view icon indicates that the displayed results were filtered based on the criteria that you selected. You can click Clear filters to remove all filters, or remove individual filters by clicking on them. 
In the list of results, click a CVE or cluster name to view more information about the item. For example, depending on the item type, you can view the following information: First discovered date Whether a CVE is fixable External links to information about the CVE in Red Hat and other CVE databases 15.2.6. Excluding CVEs You can exclude or ignore CVEs in RHACS by snoozing node and platform CVEs and deferring or marking node, platform, and image CVEs as false positives. You might want to exclude CVEs if you know that the CVE is a false positive or you have already taken steps to mitigate the CVE. Snoozed CVEs do not appear in vulnerability reports or trigger policy violations. You can snooze a CVE to ignore it globally for a specified period of time. Snoozing a CVE does not require approval. Note Snoozing node and platform CVEs requires that the ROX_VULN_MGMT_LEGACY_SNOOZE environment variable is set to true . Deferring or marking a CVE as a false positive is done through the exception management workflow. This workflow provides the ability to view pending, approved, and denied deferral and false positive requests. You can scope the CVE exception to a single image, all tags for a single image, or globally for all images. When approving or denying a request, you must add a comment. A CVE remains in the observed status until the exception request is approved. A pending request for deferral that is denied by another user is still visible in reports, policy violations, and other places in the system, but is indicated by a Pending exception label next to the CVE when you visit the following pages after going to Vulnerability Management Results : User workloads Platform All vulnerable images Inactive images An approved exception for a deferral or false positive has the following effects: Moves the CVE from the Observed tab on the User Workloads tab to either the Deferred or False positives tab Prevents the CVE from triggering policy violations that are related to the CVE Prevents the CVE from appearing in automatically generated vulnerability reports 15.2.6.1. Snoozing platform and node CVEs You can snooze platform and node CVEs that do not relate to your infrastructure. You can snooze CVEs for 1 day, 1 week, 2 weeks, 1 month, or indefinitely, until you unsnooze them. Snoozing a CVE takes effect immediately and does not require an additional approval step. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view platform CVEs, click Vulnerability Management Platform CVEs . To view node CVEs, click Vulnerability Management Node CVEs . Select one or more CVEs. Select the appropriate method to snooze the CVE: If you selected a single CVE, click the overflow menu, , and then select Snooze CVE . If you selected multiple CVEs, click Bulk actions Snooze CVEs . Select the duration of time to snooze. Click Snooze CVEs . You receive a confirmation that you have requested to snooze the CVEs. 15.2.6.2. Unsnoozing platform and node CVEs You can unsnooze platform and node CVEs that you have previously snoozed. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true .
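For example, a minimal sketch of setting this variable with the OpenShift CLI, assuming that Central runs as the central deployment in the stackrox namespace and that your installation method does not revert manually set environment variables (an Operator-managed Central might reconcile them), is the following command:
USD oc -n stackrox set env deployment/central ROX_VULN_MGMT_LEGACY_SNOOZE=true
To confirm the value after Central restarts, list the environment variables on the deployment:
USD oc -n stackrox set env deployment/central --list | grep ROX_VULN_MGMT_LEGACY_SNOOZE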
Procedure In the RHACS portal, do any of the following tasks: To view the list of platform CVEs, click Vulnerability Management Platform CVEs . To view the list of node CVEs, click Vulnerability Management Node CVEs . To view the list of snoozed CVEs, click Show snoozed CVEs in the header view. Select one or more CVEs from the list of snoozed CVEs. Select the appropriate method to unsnooze the CVE: If you selected a single CVE, click the overflow menu, , and then select Unsnooze CVE . If you selected multiple CVEs, click Bulk actions Unsnooze CVEs . Click Unsnooze CVEs again. You receive a confirmation that you have requested to unsnooze the CVEs. 15.2.6.3. Viewing snoozed CVEs You can view a list of platform and node CVEs that have been snoozed. Note The ability to snooze a CVE is not enabled by default in the web portal or in the API. To enable the ability to snooze CVEs, set the runtime environment variable ROX_VULN_MGMT_LEGACY_SNOOZE to true . Procedure In the RHACS portal, do any of the following tasks: To view the list of platform CVEs, click Vulnerability Management Platform CVEs . To view the list of node CVEs, click Vulnerability Management Node CVEs . Click Show snoozed CVEs to view the list. 15.2.6.4. Marking a vulnerability as a false positive globally You can create an exception for a vulnerability by marking it as a false positive globally, or across all images. You must get requests to mark a vulnerability as a false positive approved in the exception management workflow. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Results . Click User Workloads . Choose the appropriate method to mark the CVEs: If you want to mark a single CVE, perform the following steps: Find the row which contains the CVE that you want to take action on. Click the overflow menu, , for the CVE that you identified, and then select Mark as false positive . If you want to mark multiple CVEs, perform the following steps: Select each CVE. From the Bulk actions drop-down list, select Mark as false positives . Enter a rationale for requesting the exception. Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested an exception. Optional: To copy the approval link and share it with your organization's exception approver, click the copy icon. Click Close . 15.2.6.5. Marking a vulnerability as a false positive for an image or image tag To create an exception for a vulnerability, you can mark it as a false positive for a single image, or across all tags associated with an image. You must get requests to mark a vulnerability as a false positive approved in the exception management workflow. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Results . Click the User Workloads tab. To view the list of images, click <number> Images . Find the row that lists the image that you want to mark as a false positive, and click the image name. Choose the appropriate method to mark the CVEs: If you want to mark a single CVE, perform the following steps: Find the row which contains the CVE that you want to take action on. Click the overflow menu, , for the CVE that you identified, and then select Mark as false positive . If you want to mark multiple CVEs, perform the following steps: Select each CVE. 
From the Bulk actions drop-down list, select Mark as false positives . Select the scope. You can select either all tags associated with the image or only the image. Enter a rationale for requesting the exception. Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested an exception. Optional: To copy the approval link and share it with your organization's exception approver, click the copy icon. Click Close . 15.2.6.6. Viewing deferred and false positive CVEs You can view the CVEs that have been deferred or marked as false positives by using the User Workloads page. Procedure To see CVEs that have been deferred or marked as false positives, with the exceptions approved by an approver, click Vulnerability Management Results . Click the User Workloads tab. Complete any of the following actions: To see CVEs that have been deferred, click the Deferred tab. To see CVEs that have been marked as false positives, click the False positives tab. Note To approve, deny, or change deferred or false positive CVEs, click Vulnerability Management Exception Management . Optional: To view additional information about the deferral or false positive, click View in the Request details column. The Exception Management page is displayed. 15.2.6.7. Deferring CVEs You can accept risk with or without mitigation and defer CVEs. You must get deferral requests approved in the exception management workflow. Prerequisites You have write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, click Vulnerability Management Results . Click the User Workloads tab. Choose the appropriate method to defer a CVE: If you want to defer a single CVE, perform the following steps: Find the row which contains the CVE that you want to defer. Click the overflow menu, , for the CVE that you identified, and then click Defer CVE . If you want to defer multiple CVEs, perform the following steps: Select each CVE. Click Bulk actions Defer CVEs . Select the time period for the deferral. Enter a rationale for requesting the exception. Optional: To review the CVEs that are included in the exception request, click CVE selections . Click Submit request . You receive a confirmation that you have requested a deferral. Optional: To copy the approval link to share it with your organization's exception approver, click the copy icon. Click Close . 15.2.6.7.1. Configuring vulnerability exception expiration periods You can configure the time periods available for vulnerability management exceptions. These options are available when users request to defer a CVE. Prerequisites You have write permission for the VulnerabilityManagementRequests resource. Procedure In the RHACS portal, go to Platform Configuration Exception Configuration . You can configure expiration times that users can select when they request to defer a CVE. Enabling a time period makes it available to users and disabling it removes it from the user interface. 15.2.6.8. Reviewing and managing an exception request to defer or mark a CVE as false positive You can review, update, approve, or deny exception requests for deferring and marking CVEs as false positives. Prerequisites You have the write permission for the VulnerabilityManagementRequests resource. Procedure To view the list of pending requests, do any of the following tasks: Paste the approval link into your browser.
Click Vulnerability Management Exception Management , and then click the request name in the Pending requests tab. Review the scope of the vulnerability and decide whether or not to approve it. Choose the appropriate option to manage a pending request: If you want to deny the request and return the CVE to observed status, click Deny request . Enter a rationale for the denial, and click Deny . If you want to approve the request, click Approve request . Enter a rationale for the approval, and click Approve . To cancel a request that you have created and return the CVE to observed status, click Cancel request . You can only cancel requests that you have created. To update the deferral time period or rationale for a request that you have created, click Update request . You can only update requests that you have created. After you make changes, click Submit request . You receive a confirmation that you have submitted a request. 15.2.7. Identifying Dockerfile lines in images that introduced components with CVEs You can identify specific Dockerfile lines in an image that introduced components with CVEs. Procedure To view a problematic line: In the RHACS portal, click Vulnerability Management Results . Click User Workloads . Click the tab to view the type of CVEs. The following tabs are available: Observed Deferred False positives In the list of CVEs, click the CVE name to open the page containing the CVE details. The Affected components column lists the components that include the CVE. Expand the CVE to display additional information, including the Dockerfile line that introduced the component. 15.2.8. Finding a new component version The following procedure finds a new component version to upgrade to. Procedure In the RHACS portal, click Vulnerability Management Results . Click the User Workloads tab. Click <number> Images and select an image. To view additional information, locate the CVE and click the expand icon. The additional information includes the component that the CVE is in and the version in which the CVE is fixed, if it is fixable. Update your image to a later version. 15.2.9. Exporting workload vulnerabilities by using the API You can export workload vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the API. For these examples, workloads are composed of deployments and their associated images. The export uses the /v1/export/vuln-mgmt/workloads streaming API. It allows the combined export of deployments and images. The images payload contains the full vulnerability information. The output is streamed and has the following schema: {"result": {"deployment": {...}, "images": [...]}} ... 
{"result": {"deployment": {...}, "images": [...]}} The following examples assume that these environment variables have been set: ROX_API_TOKEN : API token with view permissions for the Deployment and Image resources ROX_ENDPOINT : Endpoint under which Central's API is available To export all workloads, enter the following command: USD curl -H "Authorization: Bearer USDROX_API_TOKEN" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads To export all workloads with a query timeout of 60 seconds, enter the following command: USD curl -H "Authorization: Bearer USDROX_API_TOKEN" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60 To export all workloads matching the query Deployment:app Namespace:default , enter the following command: USD curl -H "Authorization: Bearer USDROX_API_TOKEN" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault Additional resources Searching and filtering 15.3. Vulnerability reporting You can create and download an on-demand image vulnerability report from the Vulnerability Management Vulnerability Reporting menu in the RHACS web portal. This report contains a comprehensive list of common vulnerabilities and exposures in images and deployments, referred to as workload CVEs or user workloads in RHACS. To share this report with auditors or internal stakeholders, you can schedule emails in RHACS or download the report and share it by using other methods. 15.3.1. Reporting vulnerabilities to teams Because organizations must constantly reassess and report on their vulnerabilities, some organizations find it helpful to schedule communications to key stakeholders to help in the vulnerability management process. You can use Red Hat Advanced Cluster Security for Kubernetes to schedule these recurring communications through email. These communications should be scoped to the most relevant information that the key stakeholders need. Before sending these communications, consider the following questions: What schedule would have the most impact when communicating with the stakeholders? Who is the audience? Should you only send specific severity vulnerabilities in your report? Should you only send fixable vulnerabilities in your report? 15.3.2. Creating vulnerability management report configurations RHACS guides you through the process of creating a vulnerability management report configuration. This configuration determines the information that will be included in a report job that runs at a scheduled time or that you run on demand. Procedure In the RHACS portal, click Vulnerability Management Vulnerability Reporting . Click Create report . In the Configure report parameters page, provide the following information: Report name : Enter a name for your report configuration. Report description : Enter text describing the report configuration. This is optional. CVE severity : Select the severity of common vulnerabilities and exposures (CVEs) that you want to include in the report configuration. CVE status : Select one or more CVE statuses. The following values are associated with the CVE status: Fixable Unfixable Image type : Select one or more image types. The following values are associated with image types: Deployed images Watched images CVEs discovered since : Select the time period for which you want to include the CVEs in the report configuration. Optional: Select the Include NVD CVSS checkbox if you want to include the NVD CVSS column in the report configuration.
Configure collection included : To configure at least one collection, do any of the following tasks: Select an existing collection that you want to include. To view the collection information, edit the collection, and get a preview of collection results, click View . When viewing the collection, entering text in the field searches for collections matching that text string. To create a new collection, click Create collection . Note For more information about collections, see "Creating and using deployment collections". To configure the delivery destinations and optionally set up a schedule for delivery, click Next . 15.3.2.1. Configuring delivery destinations and scheduling Configuring destinations and delivery schedules for vulnerability reports is optional, unless you selected the option to include CVEs that were discovered since the last scheduled report. If you selected that option, configuring destinations and delivery schedules for vulnerability reports is required. Procedure To configure destinations for delivery, in the Configure delivery destinations section, you can add a delivery destination and set up a schedule for reporting. To email reports, you must configure at least one email notifier. Select an existing notifier or create a new email notifier to send your report by email. For more information about creating an email notifier, see "Configuring the email plugin" in the "Additional resources" section. When you select a notifier, the email addresses configured in the notifier as Default recipients appear in the Distribution list field. You can add additional email addresses, separated by commas. A default email template is automatically applied. To edit this default template, perform the following steps: Click the edit icon and enter a customized subject and email body in the Edit tab. Click the Preview tab to see your proposed template. Click Apply to save your changes to the template. Note When reviewing the report jobs for a specific report, you can see whether the default template or a customized template was used when creating the report. In the Configure schedule section, select the frequency and day of the week for the report. Click Next to review your vulnerability report configuration and finish creating it. 15.3.2.2. Reviewing and creating the report configuration You can review the details of your vulnerability report configuration before creating it. Procedure In the Review and create section, you can review the report configuration parameters, delivery destination, email template that is used if you selected email delivery, delivery schedule, and report format. To make any changes, click Back to go to the section and edit the fields that you want to change. Click Create to create the report configuration and save it. 15.3.3. Vulnerability report permissions The ability to create, view, and download reports depends on the access control settings, or roles and permission sets, for your user account. For example, you can only view, create, and download reports for data that your user account has permission to access. In addition, the following restrictions apply: You can only download reports that you have generated; you cannot download reports generated by other users. Report permissions are restricted depending on the access settings for user accounts. If the access settings for your account change, old reports do not reflect the change.
For example, if you are given new permissions and want to view vulnerability data that is now allowed by those permissions, you must create a new vulnerability report. 15.3.4. Editing vulnerability report configurations You can edit existing vulnerability report configurations from the list of report configurations, or by selecting an individual report configuration first. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . To edit an existing vulnerability report configuration, complete any of the following actions: Locate the report configuration that you want to edit in the list of report configurations. Click the overflow menu, , and then select Edit report . Click the report configuration name in the list of report configurations. Then, click Actions and select Edit report . Make changes to the report configuration and save. 15.3.5. Downloading vulnerability reports You can generate an on-demand vulnerability report and then download it. Note You can only download reports that you have generated; you cannot download reports generated by other users. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . In the list of report configurations, locate the report configuration that you want to use to create the downloadable report. Generate the vulnerability report by using one of the following methods: To generate the report from the list: Click the overflow menu, , and then select Generate download . The My active job status column displays the status of your report creation. After the Processing status goes away, you can download the report. To generate the report from the report window: Click the report configuration name to open the configuration detail window. Click Actions and select Generate download . To download the report, if you are viewing the list of report configurations, click the report configuration name to open it. Click All report jobs from the menu on the header. If the report is completed, click the Ready for download link in the Status column. The report is in .csv format and is compressed into a .zip file for download. 15.3.6. Sending vulnerability reports on-demand You can send vulnerability reports immediately, rather than waiting for the scheduled send time. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . In the list of report configurations, locate the report configuration for the report that you want to send. Click the overflow menu, , and then select Send report now . 15.3.7. Cloning vulnerability report configurations You can make copies of vulnerability report configurations by cloning them. This is useful when you want to reuse report configurations with minor changes, such as reporting vulnerabilities in different deployments or namespaces. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . Locate the report configuration that you want to clone in the list of report configurations. Click Clone report . Make any changes that you want to the report parameters and delivery destinations. Click Create . 15.3.8. Deleting vulnerability report configurations Deleting a report configuration deletes the configuration and any reports that were previously run using this configuration. Procedure In the RHACS web portal, click Vulnerability Management Vulnerability Reporting . Locate the report configuration that you want to delete in the list of reports. 
Click the overflow menu, , and then select Delete report . 15.3.9. Configuring vulnerability management report job retention settings You can configure settings that determine when vulnerability report job requests expire and other retention settings for report jobs. Note These settings do not affect the following vulnerability report jobs: Jobs in the WAITING or PREPARING state (unfinished jobs) The last successful scheduled report job The last successful on-demand emailed report job The last successful downloadable report job Downloadable report jobs for which the report file has not been deleted by either manual deletion or by configuring the downloadable report pruning settings Procedure In the RHACS web portal, go to Platform Configuration System Configuration . You can configure the following settings for vulnerability report jobs: Vulnerability report run history retention : The number of days that a record is kept of vulnerability report jobs that have been run. This setting controls how many days that report jobs are listed in the All report jobs tab under Vulnerability Management Vulnerability Reporting when a report configuration is selected. The entire report history after the exclusion date is deleted, with the exception of the following jobs: Unfinished jobs. Jobs for which prepared downloadable reports still exist in the system. The last successful report job for each job type (scheduled email, on-demand email, or download). This ensures users have information about the last run job for each type. Prepared downloadable vulnerability reports retention days : The number of days that prepared, on-demand downloadable vulnerability report jobs are available for download on the All report jobs tab under Vulnerability Management Vulnerability Reporting when a report configuration is selected. Prepared downloadable vulnerability reports limit : The limit, in MB, of space allocated to prepared downloadable vulnerability report jobs. After the limit is reached, the oldest report job in the download queue is removed. To change these values, click Edit , make your changes, and then click Save . 15.3.10. Additional resources Creating and using deployment collections Migration of access scopes to collections Configuring the email plugin 15.4. Using the vulnerability management dashboard (deprecated) Historically, RHACS has provided a view of vulnerabilities discovered in your system in the vulnerability management dashboard. With the dashboard, you can view vulnerabilities by image, node, or platform. You can also view vulnerabilities by clusters, namespaces, deployments, node components, and image components. The dashboard is deprecated in RHACS 4.5 and will be removed in a future release. Important To perform actions on vulnerabilities, such as view additional information about a vulnerability, defer a vulnerability, or mark a vulnerability as a false positive, go to Vulnerability Management Results and click the User Workloads tab. To review requests for deferring and marking CVEs as false positives, click Vulnerability Management Exception Management . 15.4.1. Viewing application vulnerabilities by using the dashboard You can view application vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the dashboard. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Application & Infrastructure Namespaces or Deployments . From the list, search for and select the Namespace or Deployment you want to review. 
To get more information about the application, select an entity from Related entities on the right. 15.4.2. Viewing image vulnerabilities by using the dashboard You can view image vulnerabilities in Red Hat Advanced Cluster Security for Kubernetes by using the dashboard. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select <number> Images . From the list of images, select the image you want to investigate. You can also filter the list by performing one of the following steps: Enter Image in the search bar and then select the Image attribute. Enter the image name in the search bar. In the image details view, review the listed CVEs and prioritize taking action to address the impacted components. Select Components from Related entities on the right to get more information about all the components that are impacted by the selected image. Or select Components from the Affected components column under the Image findings section for a list of components affected by specific CVEs. 15.4.3. Viewing cluster vulnerabilities by using the dashboard You can view vulnerabilities in clusters by using Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Application & Infrastructure Clusters . From the list of clusters, select the cluster you want to investigate. Review the cluster's vulnerabilities and prioritize taking action on the impacted nodes on the cluster. 15.4.4. Viewing node vulnerabilities by using the dashboard You can view vulnerabilities in specific nodes by using Red Hat Advanced Cluster Security for Kubernetes. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . On the Dashboard view header, select Nodes . From the list of nodes, select the node you want to investigate. Review vulnerabilities for the selected node and prioritize taking action. To get more information about the affected components in a node, select Components from Related entities on the right. 15.4.5. Finding the most vulnerable image components by using the dashboard Use the Vulnerability Management view for identifying highly vulnerable image components. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. From the Vulnerability Management view header, select Application & Infrastructure Image Components . In the Image Components view, select the Image CVEs column header to arrange the components in descending order (highest first) based on the CVEs count. 15.4.6. Viewing details only for fixable CVEs by using the dashboard Use the Vulnerability Management view to filter and show only the fixable CVEs. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . From the Vulnerability Management view header, under Filter CVEs , click Fixable . 15.4.7. Identifying the operating system of the base image by using the dashboard Use the Vulnerability Management view to identify the operating system of the base image. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. From the Vulnerability Management view header, select Images . View the base operating system (OS) and OS version for all images under the Image OS column. Select an image to view its details. The base operating system is also available under the Image Summary Details and Metadata section. 
Note Red Hat Advanced Cluster Security for Kubernetes lists the Image OS as unknown when either: The operating system information is not available, or If the image scanner in use does not provide this information. Docker Trusted Registry, Google Container Registry, and Anchore do not provide this information. 15.4.8. Identifying top risky objects by using the dashboard Use the Vulnerability Management view for identifying the top risky objects in your environment. The Top Risky widget displays information about the top risky images, deployments, clusters, and namespaces in your environment. The risk is determined based on the number of vulnerabilities and their CVSS scores. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. Select the Top Risky widget header to choose between riskiest images, deployments, clusters, and namespaces. The small circles on the chart represent the chosen object (image, deployment, cluster, namespace). Hover over the circles to see an overview of the object they represent. And select a circle to view detailed information about the selected object, its related entities, and the connections between them. For example, if you are viewing Top Risky Deployments by CVE Count and CVSS score , each circle on the chart represents a deployment. When you hover over a deployment, you see an overview of the deployment, which includes deployment name, name of the cluster and namespace, severity, risk priority, CVSS, and CVE count (including fixable). When you select a deployment, the Deployment view opens for the selected deployment. The Deployment view shows in-depth details of the deployment and includes information about policy violations, common vulnerabilities, CVEs, and riskiest images for that deployment. Select View All on the widget header to view all objects of the chosen type. For example, if you chose Top Risky Deployments by CVE Count and CVSS score , you can select View All to view detailed information about all deployments in your infrastructure. 15.4.9. Identifying top riskiest images and components by using the dashboard Similar to the Top Risky , the Top Riskiest widget lists the names of the top riskiest images and components. This widget also includes the total number of CVEs and the number of fixable CVEs in the listed images. Procedure Go to the RHACS portal and click Vulnerability Management from the navigation menu. Select the Top Riskiest Images widget header to choose between the riskiest images and components. If you are viewing Top Riskiest Images : When you hover over an image in the list, you see an overview of the image, which includes image name, scan time, and the number of CVEs along with severity (critical, high, medium, and low). When you select an image, the Image view opens for the selected image. The Image view shows in-depth details of the image and includes information about CVEs by CVSS score, top riskiest components, fixable CVEs, and Dockerfile for the image. Select View All on the widget header to view all objects of the chosen type. For example, if you chose Top Riskiest Components , you can select View All to view detailed information about all components in your infrastructure. 15.4.10. Viewing the Dockerfile for an image by using the dashboard Use the Vulnerability Management view to find the root cause of vulnerabilities in an image. 
You can view the Dockerfile and find exactly which command in the Dockerfile introduced the vulnerabilities and all components that are associated with that single command. The Dockerfile section shows information about: All the layers in the Dockerfile The instructions and their value for each layer The components included in each layer The number of CVEs in components for each layer When there are components introduced by a specific layer, you can select the expand icon to see a summary of its components. If there are any CVEs in those components, you can select the expand icon for an individual component to get more details about the CVEs affecting that component. Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select an image from either the Top Riskiest Images widget or click the Images button at the top of the dashboard and select an image. In the Image details view, next to Dockerfile , select the expand icon to see a summary of instructions, values, creation date, and components. Select the expand icon for an individual component to view more information. 15.4.11. Identifying the container image layer that introduces vulnerabilities by using the dashboard You can use the Vulnerability Management dashboard to identify vulnerable components and the image layer they appear in. Procedure Go to the RHACS portal and click Vulnerability Management Dashboard from the navigation menu. Select an image from either the Top Riskiest Images widget or click the Images button at the top of the dashboard and select an image. In the Image details view, next to Dockerfile , select the expand icon to see a summary of image components. Select the expand icon for specific components to get more details about the CVEs affecting the selected component. 15.4.12. Viewing recently detected vulnerabilities by using the dashboard The Recently Detected Vulnerabilities widget on the Vulnerability Management Dashboard view shows a list of recently discovered vulnerabilities in your scanned images, based on the scan time and CVSS score. It also includes information about the number of images affected by the CVE and its impact (percentage) on your environment. When you hover over a CVE in the list, you see an overview of the CVE, which includes scan time, CVSS score, description, impact, and whether it is scored by using CVSS v2 or v3. When you select a CVE, the CVE details view opens for the selected CVE. The CVE details view shows in-depth details of the CVE and the components, images, and deployments in which it appears. Select View All on the Recently Detected Vulnerabilities widget header to view a list of all the CVEs in your infrastructure. You can also filter the list of CVEs. 15.4.13. Viewing the most common vulnerabilities by using the dashboard The Most Common Vulnerabilities widget on the Vulnerability Management Dashboard view shows a list of vulnerabilities that affect the largest number of deployments and images arranged by their CVSS score. When you hover over a CVE in the list, you see an overview of the CVE, which includes scan time, CVSS score, description, impact, and whether it is scored by using CVSS v2 or v3. When you select a CVE, the CVE details view opens for the selected CVE. The CVE details view shows in-depth details of the CVE and the components, images, and deployments in which it appears. Select View All on the Most Common Vulnerabilities widget header to view a list of all the CVEs in your infrastructure.
You can also filter the list of CVEs. To export the CVEs as a CSV file, select Export Download CVES as CSV . 15.4.14. Finding clusters with most Kubernetes and Istio vulnerabilities by using the dashboard You can identify the clusters with most Kubernetes, Red Hat OpenShift, and Istio vulnerabilities (deprecated) in your environment by using the vulnerability management dashboard. Procedure In the RHACS portal, click Vulnerability Management -> Dashboard . The Clusters with most orchestrator and Istio vulnerabilities widget shows a list of clusters, ranked by the number of Kubernetes, Red Hat OpenShift, and Istio vulnerabilities (deprecated) in each cluster. The cluster on top of the list is the cluster with the highest number of vulnerabilities. Click on one of the clusters from the list to view details about the cluster. The Cluster view includes: Cluster Summary section, which shows cluster details and metadata, top risky objects (deployments, namespaces, and images), recently detected vulnerabilities, riskiest images, and deployments with the most severe policy violations. Cluster Findings section, which includes a list of failing policies and list of fixable CVEs. Related Entities section, which shows the number of namespaces, deployments, policies, images, components, and CVEs the cluster contains. You can select these entities to view more details. Click View All on the widget header to view the list of all clusters. 15.4.15. Identifying vulnerabilities in nodes by using the dashboard You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in core Kubernetes components and container runtimes such as Docker, CRI-O, runC, and containerd. For more information on operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, go to Vulnerability Management Dashboard . Select Nodes on the header to view a list of all the CVEs affecting your nodes. Select a node from the list to view details of all CVEs affecting that node. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node. Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section. Additional resources Supported operating systems 15.4.16. Creating policies to block specific CVEs by using the dashboard You can create new policies or add specific CVEs to an existing policy from the Vulnerability Management view. Procedure Click CVEs from the Vulnerability Management view header. You can select the checkboxes for one or more CVEs, and then click Add selected CVEs to Policy ( add icon) or move the mouse over a CVE in the list, and select the Add icon. For Policy Name : To add the CVE to an existing policy, select an existing policy from the drop-down list box. To create a new policy, enter the name for the new policy, and select Create <policy_name> . Select a value for Severity , either Critical , High , Medium , or Low . Choose the Lifecycle Stage to which your policy is applicable, from Build , or Deploy . You can also select both life-cycle stages. Enter details about the policy in the Description box. 
Turn off the Enable Policy toggle if you want to create the policy but enable it later. The Enable Policy toggle is on by default. Verify the listed CVEs that are included in this policy. Click Save Policy . 15.5. Scanning RHCOS node hosts For OpenShift Container Platform, Red Hat Enterprise Linux CoreOS (RHCOS) is the only supported operating system for the control plane. For node hosts, OpenShift Container Platform supports both RHCOS and Red Hat Enterprise Linux (RHEL). With Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can scan RHCOS nodes for vulnerabilities and detect potential security threats. RHACS scans RHCOS RPMs installed on the node host, as part of the RHCOS installation, for any known vulnerabilities. First, RHACS analyzes and detects RHCOS components. Then it matches vulnerabilities for identified components by using RHEL and the following data streams: OpenShift 4.X Open Vulnerability and Assessment Language (OVAL) v2 security data streams are used if the StackRox Scanner is used for node scanning. Red Hat Common Security Advisory Framework (CSAF) Vulnerability Exploitability eXchange (VEX) is used if Scanner V4 is used for node scanning. Note If you installed RHACS by using the roxctl CLI, you must manually enable the RHCOS node scanning features. When you use Helm or Operator installation methods on OpenShift Container Platform, this feature is enabled by default. Additional resources RHEL Versions Utilized by RHEL CoreOS and OCP 15.5.1. Enabling RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . This procedure describes how to enable node scanning for the first time. If you are reconfiguring Red Hat Advanced Cluster Security for Kubernetes to use the StackRox Scanner instead of Scanner V4, follow the procedure in "Restoring RHCOS node scanning with the StackRox Scanner". Procedure Run one of the following commands to update the compliance container.
For a default compliance container with metrics disabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":"disabled"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' For a compliance container with Prometheus metrics enabled, run the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"compliance","env":[{"name":"ROX_METRICS_PORT","value":":9091"},{"name":"ROX_NODE_SCANNING_ENDPOINT","value":"127.0.0.1:8444"},{"name":"ROX_NODE_SCANNING_INTERVAL","value":"4h"},{"name":"ROX_NODE_SCANNING_INTERVAL_DEVIATION","value":"24m"},{"name":"ROX_NODE_SCANNING_MAX_INITIAL_WAIT","value":"5m"},{"name":"ROX_RHCOS_NODE_SCANNING","value":"true"},{"name":"ROX_CALL_NODE_INVENTORY_ENABLED","value":"true"}]}]}}}}' Update the Collector DaemonSet (DS) by taking the following steps: Add new volume mounts to Collector DS by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"volumes":[{"name":"tmp-volume","emptyDir":{}},{"name":"cache-volume","emptyDir":{"sizeLimit":"200Mi"}}]}}}}' Add the new NodeScanner container by running the following command: USD oc -n stackrox patch daemonset/collector -p '{"spec":{"template":{"spec":{"containers":[{"command":["/scanner","--nodeinventory","--config=",""],"env":[{"name":"ROX_NODE_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"spec.nodeName"}}},{"name":"ROX_CLAIR_V4_SCANNING","value":"true"},{"name":"ROX_COMPLIANCE_OPERATOR_INTEGRATION","value":"true"},{"name":"ROX_CSV_EXPORT","value":"false"},{"name":"ROX_DECLARATIVE_CONFIGURATION","value":"false"},{"name":"ROX_INTEGRATIONS_AS_CONFIG","value":"false"},{"name":"ROX_NETPOL_FIELDS","value":"true"},{"name":"ROX_NETWORK_DETECTION_BASELINE_SIMULATION","value":"true"},{"name":"ROX_NETWORK_GRAPH_PATTERNFLY","value":"true"},{"name":"ROX_NODE_SCANNING_CACHE_TIME","value":"3h36m"},{"name":"ROX_NODE_SCANNING_INITIAL_BACKOFF","value":"30s"},{"name":"ROX_NODE_SCANNING_MAX_BACKOFF","value":"5m"},{"name":"ROX_PROCESSES_LISTENING_ON_PORT","value":"false"},{"name":"ROX_QUAY_ROBOT_ACCOUNTS","value":"true"},{"name":"ROX_ROXCTL_NETPOL_GENERATE","value":"true"},{"name":"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS","value":"false"},{"name":"ROX_SYSLOG_EXTRA_FIELDS","value":"true"},{"name":"ROX_SYSTEM_HEALTH_PF","value":"false"},{"name":"ROX_VULN_MGMT_WORKLOAD_CVES","value":"false"}],"image":"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.0","imagePullPolicy":"IfNotPresent","name":"node-inventory","ports":[{"containerPort":8444,"name":"grpc","protocol":"TCP"}],"volumeMounts":[{"mountPath":"/host","name":"host-root-ro","readOnly":true},{"mountPath":"/tmp/","name":"tmp-volume"},{"mountPath":"/cache","name":"cache-volume"}]}]}}}}' 15.5.2. Enabling RHCOS node scanning with Scanner V4 If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). 
Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed the following software: Secured Cluster services on OpenShift Container Platform 4.12 or later RHACS version 4.6 or later For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure To enable node indexing, also known as node scanning, by using Scanner V4: Ensure that Scanner V4 is deployed in the Central cluster: USD kubectl -n stackrox get deployment scanner-v4-indexer scanner-v4-matcher scanner-v4-db 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Central pod, on the central container, set the ROX_NODE_INDEX_ENABLED and the ROX_SCANNER_V4 variables to true by running the following command on the Central cluster: USD kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Sensor pod, on the sensor container, set the ROX_NODE_INDEX_ENABLED and the ROX_SCANNER_V4 variables to true by running the following command on all secured clusters where you want to enable node scanning: USD kubectl -n stackrox set env deployment/sensor ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Collector Daemonset, in the compliance container, set the ROX_NODE_INDEX_ENABLED and the ROX_SCANNER_V4 variables to true by running the following command on all secured clusters where you want to enable node scanning: USD kubectl -n stackrox set env daemonset/collector ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . To verify that node scanning is working, examine the Central logs for the following message: Scanned index report and found <number> components for node <node_name>. where: <number> Specifies the number of discovered components. <node_name> Specifies the name of the node. 15.5.3. Restoring RHCOS node scanning with the StackRox Scanner If you use OpenShift Container Platform, you can enable scanning of Red Hat Enterprise Linux CoreOS (RHCOS) nodes for vulnerabilities by using Red Hat Advanced Cluster Security for Kubernetes (RHACS). This feature is available with both the StackRox Scanner and Scanner V4. Follow this procedure if you want to use the StackRox Scanner to scan Red Hat Enterprise Linux CoreOS (RHCOS) nodes, but you want to keep using Scanner V4 to scan other nodes. Prerequisites For scanning RHCOS node hosts of the secured cluster, you must have installed Secured Cluster services on OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . For life cycle support information for RHACS, see the Red Hat Advanced Cluster Security for Kubernetes Support Policy . Procedure To enable node indexing, also known as node scanning, by using the StackRox Scanner: Ensure that the StackRox Scanner is deployed in the Central cluster: USD kubectl -n stackrox get deployment scanner scanner-db 1 1 For OpenShift Container Platform, use oc instead of kubectl . 
In the Central pod, on the central container, set ROX_NODE_INDEX_ENABLED to false by running the following command on the Central cluster: USD kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=false 1 1 For OpenShift Container Platform, use oc instead of kubectl . In the Collector DaemonSet, in the compliance container, set ROX_CALL_NODE_INVENTORY_ENABLED to true by running the following command on all secured clusters where you want to enable node scanning: USD kubectl -n stackrox set env daemonset/collector ROX_CALL_NODE_INVENTORY_ENABLED=true 1 1 For OpenShift Container Platform, use oc instead of kubectl . To verify that node scanning is working, examine the Central logs for the following message: Scanned node inventory <node_name> (id: <node_id>) with <number> components. where: <number> Specifies the number of discovered components. <node_name> Specifies the name of the node. <node_id> Specifies the internal ID of the node. 15.5.4. Analysis and detection When you use RHACS with OpenShift Container Platform, RHACS creates two coordinating containers for analysis and detection: the Compliance container and the Node-inventory container. The Compliance container was already a part of earlier RHACS versions. However, the Node-inventory container is new with RHACS 4.0 and works only with OpenShift Container Platform cluster nodes. Upon start-up, the Compliance and Node-inventory containers begin the first inventory scan of Red Hat Enterprise Linux CoreOS (RHCOS) software components within five minutes. As part of this scan, the Node-inventory container analyzes the node's file system to identify installed RPM packages and report on RHCOS software components. Afterward, inventory scanning occurs at periodic intervals, typically every four hours. You can customize the default interval by configuring the ROX_NODE_SCANNING_INTERVAL environment variable for the Compliance container. 15.5.5. Vulnerability matching on RHCOS nodes Central services, which include Central and Scanner, perform vulnerability matching. Node scanning is performed using the following scanners: StackRox Scanner: This is the default scanner. StackRox Scanner uses Red Hat's Open Vulnerability and Assessment Language (OVAL) v2 security data streams to match vulnerabilities on Red Hat Enterprise Linux CoreOS (RHCOS) software components. Scanner V4: Scanner V4 is available for node scanning as a Technology Preview feature. Scanner V4 must be explicitly enabled. See the documentation in "Additional resources" for more information. When scanning RHCOS nodes, RHACS releases after 4.0 no longer use the Kubernetes node metadata to find the kernel and container runtime versions. Instead, RHACS uses the installed RHCOS RPMs to assess that information. Additional resources Scanner V4 settings for installing RHACS for OpenShift Container Platform by using the Operator Scanner V4 settings for installing RHACS for OpenShift Container Platform by using Helm Scanner V4 settings for installing RHACS for Kubernetes by using Helm 15.5.6. Related environment variables You can use the following environment variables to configure RHCOS node scanning on RHACS. Table 15.7. Node-inventory configuration Environment Variable Description ROX_NODE_SCANNING_CACHE_TIME The time after which a cached inventory is considered outdated. Defaults to 90% of ROX_NODE_SCANNING_INTERVAL , which is 3h36m . ROX_NODE_SCANNING_INITIAL_BACKOFF The initial time, in seconds, that a node scan is delayed if a backoff file is found. The default value is 30s . 
ROX_NODE_SCANNING_MAX_BACKOFF The upper limit of backoff. The default value is 5m , which is 50% of the Kubernetes restart policy stability timer. Table 15.8. Compliance configuration Environment Variable Description ROX_NODE_INDEX_ENABLED Controls whether node indexing is enabled for this cluster. The default value is false . Set this variable to true to use Scanner V4-based RHCOS node scanning. ROX_NODE_SCANNING_INTERVAL The base value of the interval duration between node scans. The default value is 4h . ROX_NODE_SCANNING_INTERVAL_DEVIATION The duration of node scans can differ from the base interval time. However, the maximum value is limited by the ROX_NODE_SCANNING_INTERVAL . ROX_NODE_SCANNING_MAX_INITIAL_WAIT The maximum wait time before the first node scan, which is randomly generated. You can set this value to 0 to disable the initial node scanning wait time. The default value is 5m . 15.5.7. Identifying vulnerabilities in nodes by using the dashboard You can use the Vulnerability Management view to identify vulnerabilities in your nodes. The identified vulnerabilities include vulnerabilities in core Kubernetes components and container runtimes such as Docker, CRI-O, runC, and containerd. For more information on operating systems that RHACS can scan, see "Supported operating systems". Procedure In the RHACS portal, go to Vulnerability Management -> Dashboard . Select Nodes on the header to view a list of all the CVEs affecting your nodes. Select a node from the list to view details of all CVEs affecting that node. When you select a node, the Node details panel opens for the selected node. The Node view shows in-depth details of the node and includes information about CVEs by CVSS score and fixable CVEs for that node. Select View All on the CVEs by CVSS score widget header to view a list of all the CVEs in the selected node. You can also filter the list of CVEs. To export the fixable CVEs as a CSV file, select Export as CSV under the Node Findings section. 15.5.8. Viewing vulnerabilities in nodes You can identify vulnerabilities in your nodes by using RHACS. The vulnerabilities that are identified include the following: Vulnerabilities in core Kubernetes components Vulnerabilities in container runtimes such as Docker, CRI-O, runC, and containerd For more information about operating systems that RHACS can scan, see "Supported operating systems". RHACS currently supports scanning nodes with the StackRox Scanner and Scanner V4. Depending on which scanner is configured, different results might appear in the list of vulnerabilities. For more information, see "Understanding differences in scanning results between the StackRox Scanner and Scanner V4". Procedure In the RHACS portal, go to Vulnerability Management -> Results . Select the Nodes tab. Optional: The page defaults to a list of observed CVEs. Click Show snoozed CVEs to view them. Optional: To filter CVEs according to entity, select the appropriate filters and attributes. To add more filtering criteria, follow these steps: Select the entity or attribute from the list. Depending on your choices, enter the appropriate information such as text, or select a date or object. Click the right arrow icon. Optional: Select additional entities and attributes, and then click the right arrow icon to add them. The filter entities and attributes are listed in the following table. Table 15.9. Filter options Entity Attributes Node Name : The name of the node. Operating system : The operating system of the node, for example, Red Hat Enterprise Linux (RHEL). 
Label : The label of the node. Annotation : The annotation for the node. Scan time : The scan date of the node. CVE Name : The name of the CVE. Discovered time : The date when RHACS discovered the CVE. CVSS : The severity level for the CVE. You can filter the CVSS score by using one of the following comparison operators: is greater than, is greater than or equal to, is equal to, is less than or equal to, or is less than. Node Component Name : The name of the component. Version : The version of the component, for example, 4.15.0-2024 . You can use this to search for a specific version of a component, for example, in conjunction with a component name. Cluster ID : The alphanumeric ID for the cluster. This is an internal identifier that RHACS assigns for tracking purposes. Name : The name of the cluster. Label : The label for the cluster. Type : The type of cluster, for example, OCP. Platform type : The type of platform, for example, OpenShift 4 cluster. Optional: To refine the list of results, do any of the following tasks: Click CVE severity , and then select one or more levels. Click CVE status , and then select Fixable or Not fixable . To view the data, click one of the following tabs: <number> CVEs : Displays a list of all the CVEs affecting all of your nodes. <number> Nodes : Displays a list of nodes that contain CVEs. To view the details of the node and information about the CVEs according to the CVSS score and fixable CVEs for that node, click a node name in the list of nodes. 15.5.9. Understanding differences in scanning results between the StackRox Scanner and Scanner V4 Scanning RHCOS node hosts with Scanner V4 reports significantly more CVEs for the same operating system version. For example, Scanner V4 reports about 390 CVEs, compared to about 50 CVEs that are reported by StackRox Scanner. A manual review of selected vulnerabilities revealed the following causes: The Vulnerability Exploitability eXchange (VEX) data used in Scanner V4 is more accurate. The VEX data includes granular statuses, such as "no fix planned" and "fix deferred". Some vulnerabilities reported by StackRox Scanner were false positives. As a result, Scanner V4 provides a more accurate and realistic vulnerability assessment. Users might find discrepancies in reported vulnerabilities surprising, especially if some secured clusters still use older RHACS versions with StackRox Scanner while others use Scanner V4. To help you understand this difference, the following example provides an explanation and guidance on how to manually verify reported vulnerabilities. 15.5.9.1. Example of discrepancies in reported vulnerabilities In this example, we analyzed the differences in reported CVEs for three arbitrarily selected RHCOS versions. This example presents findings for RHCOS version 417.94.202501071621-0 . For this version, RHACS provided the following scan results: StackRox Scanner reported 49 CVEs. Scanner V4 reported 389 CVEs. The breakdown is as follows: 1 CVE is reported only by the StackRox Scanner. 48 CVEs are reported by both scanners. 341 CVEs are reported only by Scanner V4. 15.5.9.1.1. CVEs reported only by the StackRox Scanner The single CVE reported exclusively by StackRox Scanner was a false positive. CVE-2022-4122 was flagged for the podman package in version 5:5.2.2-1.rhaos4.17.el9.x86_64 . However, a manual review of VEX data from RHSA-2024:9102 indicated that this vulnerability was fixed in version 5:5.2.2-1.el9 . Therefore, the package version scanned was the first to contain the fix and was no longer affected. 
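As a minimal sketch of this kind of manual verification (the node name is a placeholder and cluster-admin access is assumed), you can compare the RPM version that is actually installed on an RHCOS node with the fixed version listed in the advisory VEX data:
# List the nodes in the cluster and choose an RHCOS node to inspect
oc get nodes
# Query the installed podman RPM on that node and compare the reported
# version-release with the fixed version from the advisory
oc debug node/<node_name> -- chroot /host rpm -q podman
If the installed version is the same as or newer than the version that the advisory lists as fixed, the reported CVE is most likely a false positive for that node.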
15.5.9.1.2. CVEs reported only by Scanner V4 We randomly selected 10 CVEs from the 341 unique to Scanner V4 and conducted a detailed analysis using VEX data. The vulnerabilities fell into two categories: Affected packages with a fine-grained status indicating that no fix is planned Affected packages with no additional status details regarding a fix For example, the following results were analyzed: The git-core package (version 2.43.5-1.el9_4 ) was flagged for CVE-2024-50349 ( VEX data ) and marked as "Affected" with a fine-grained status of "Fix deferred." This means a fix is not guaranteed due to higher-priority development work. The package is affected by three CVEs in total. The vim-minimal package (version 2:8.2.2637-20.el9_1 ) was flagged for 109 CVEs, 108 of which have low CVSS scores. Most are marked as "Affected" with a fine-grained status of "Will not fix." The krb5-libs package (version 1.21.1-2.el9_4.1 ) was flagged for CVE-2025-24528 ( VEX data ), but no fine-grained status was available. Given that this CVE was recently discovered at the time of this analysis, its status might be updated soon. 15.5.9.1.3. CVEs reported by both scanners We manually verified three randomly selected packages, finding that the OVAL v2 data used in the StackRox Scanner and the VEX data used in Scanner V4 provided consistent explanations for the detected CVEs. In some cases, CVSS scores differed slightly, which is expected due to variations in VEX publisher data. 15.5.9.2. Verifying the status of vulnerabilities As a best practice, verify the fine-grained statuses of vulnerabilities in node host components that are critical to your environment using publicly available VEX data. VEX data is accessible in both human-readable and machine-readable formats. For more information about interpreting VEX data, visit Recent improvements in Red Hat Enterprise Linux CoreOS security data . 15.6. Generating SBOMs from scanned images You can use RHACS to generate a Software Bill of Materials (SBOM) from scanned container images. Important Generation of SBOMs from the scanned container images is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Scanner V4 must be enabled to generate SBOMs. For information about enabling Scanner V4, see the following resources: For OpenShift Container Platform: Installing Central using the Operator method Scanner V4 (Helm installation) For Kubernetes: Scanner V4 (Helm installation) SBOMs give you a detailed overview of software components, dependencies, and libraries within your application. RHACS uses the results of scans performed by Scanner V4 to generate an SBOM. You can generate an SBOM by using the RHACS portal, the roxctl CLI, or the RHACS API. Note Scanner V4 cannot generate SBOMs from results of delegated scans. In delegated scanning, you are using your secured cluster to index images and sending data to Central for vulnerability matching. SBOM generation is only available when using Scanner V4 configured in Central. 15.6.1. 
About SBOMs A Software Bill of Materials (SBOM) is a digital record that lists the components of a piece of software and their origins. Organizations can use SBOMs to locate the presence of vulnerable software packages and components and respond more quickly to mitigate the risk. Additionally, being able to generate SBOMs assists organizations in complying with Executive Order 14028: Improving the Nation's Cybersecurity . SBOMs can contain different types of information, depending on the methods of data collection and how they are generated. The Cybersecurity & Infrastructure Security Agency (CISA) provides a document, Types of Software Bill of Materials (SBOM) , that summarizes the types of SBOMs. The type of SBOM that RHACS generates is "Analyzed." CISA notes that these types of SBOMs are created through an analysis of artifacts such as executables, packages, containers, and virtual machine images. Analyzed SBOMs provide the following benefits, as summarized by CISA: They can provide information about software without an active development environment. They can be generated without access to the build process. You can use them to discover hidden dependencies that might be missed by other tools. The SBOM generated by RHACS is in System Package Data Exchange (SPDX) 2.3 format. 15.6.2. Generating SBOMs You can generate SBOMs by using the following methods: Using the RHACS portal Go to Vulnerability Management -> Results and locate the image that you want to use. Do one of the following actions: In the image row, click the overflow menu , and then select Generate SBOM . Select the image to view the image details, and then click Generate SBOM . A window opens that provides information about the image and the SBOM format that is generated. After you click Generate SBOM , RHACS creates the file in JSON format. Depending on your browser configuration, your browser might automatically download the file to your computer. Using the roxctl CLI In the roxctl CLI, run the following command: USD roxctl image sbom --image=image-name 1 1 Type the name and reference of the image that you want to generate an SBOM for, in string format. For example, nginx:latest or nginx@sha256:... . This command has the following options: Table 15.10. Options Option Description -f, --force Bypass Central's cache for the image and force a new pull from the scanner. The default is false . -d, --retry-delay integer Sets the time to wait between retries in seconds. The default is 3. -i, --image string Image name and reference, for example, nginx:latest or nginx@sha256:... . -r, --retries integer Sets the number of times that Scanner V4 should retry before exiting with an error. The default is 3. Using the API You can use the RHACS API to create an SBOM. You must use the ROX_API_TOKEN for authorization to connect to the endpoint and generate the SBOM. The request payload is generated in JSON format. See "GenerateSBOM" in the API reference for more information.
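For example, a minimal roxctl invocation that saves the generated SBOM to a file might look like the following sketch. It assumes that roxctl is already configured to reach Central (for example, through the ROX_ENDPOINT and ROX_API_TOKEN environment variables), that the command writes the SBOM document to standard output, and that the image name, file name, and retry values are placeholders:
# Generate an SPDX 2.3 JSON SBOM for an image and save it to a file,
# retrying up to 5 times with a 10 second delay between attempts
roxctl image sbom --image=nginx:latest --retries=5 --retry-delay=10 > nginx-sbom.spdx.json
Because the output is JSON, you can post-process it with standard tools, for example jq '.packages[].name' nginx-sbom.spdx.json to list the package names recorded in the SBOM, assuming the SPDX document contains a packages array.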
[ "{\"result\": {\"deployment\": {...}, \"images\": [...]}} {\"result\": {\"deployment\": {...}, \"images\": [...]}}", "curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads", "curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?timeout=60", "curl -H \"Authorization: Bearer USDROX_API_TOKEN\" USDROX_ENDPOINT/v1/export/vuln-mgmt/workloads?query=Deployment%3Aapp%2BNamespace%3Adefault", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\"disabled\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"name\":\"compliance\",\"env\":[{\"name\":\"ROX_METRICS_PORT\",\"value\":\":9091\"},{\"name\":\"ROX_NODE_SCANNING_ENDPOINT\",\"value\":\"127.0.0.1:8444\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL\",\"value\":\"4h\"},{\"name\":\"ROX_NODE_SCANNING_INTERVAL_DEVIATION\",\"value\":\"24m\"},{\"name\":\"ROX_NODE_SCANNING_MAX_INITIAL_WAIT\",\"value\":\"5m\"},{\"name\":\"ROX_RHCOS_NODE_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_CALL_NODE_INVENTORY_ENABLED\",\"value\":\"true\"}]}]}}}}'", "oc -n stackrox patch daemonset/collector -p '{\"spec\":{\"template\":{\"spec\":{\"volumes\":[{\"name\":\"tmp-volume\",\"emptyDir\":{}},{\"name\":\"cache-volume\",\"emptyDir\":{\"sizeLimit\":\"200Mi\"}}]}}}}'", "oc -n stackrox patch daemonset/collector -p 
'{\"spec\":{\"template\":{\"spec\":{\"containers\":[{\"command\":[\"/scanner\",\"--nodeinventory\",\"--config=\",\"\"],\"env\":[{\"name\":\"ROX_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"ROX_CLAIR_V4_SCANNING\",\"value\":\"true\"},{\"name\":\"ROX_COMPLIANCE_OPERATOR_INTEGRATION\",\"value\":\"true\"},{\"name\":\"ROX_CSV_EXPORT\",\"value\":\"false\"},{\"name\":\"ROX_DECLARATIVE_CONFIGURATION\",\"value\":\"false\"},{\"name\":\"ROX_INTEGRATIONS_AS_CONFIG\",\"value\":\"false\"},{\"name\":\"ROX_NETPOL_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_DETECTION_BASELINE_SIMULATION\",\"value\":\"true\"},{\"name\":\"ROX_NETWORK_GRAPH_PATTERNFLY\",\"value\":\"true\"},{\"name\":\"ROX_NODE_SCANNING_CACHE_TIME\",\"value\":\"3h36m\"},{\"name\":\"ROX_NODE_SCANNING_INITIAL_BACKOFF\",\"value\":\"30s\"},{\"name\":\"ROX_NODE_SCANNING_MAX_BACKOFF\",\"value\":\"5m\"},{\"name\":\"ROX_PROCESSES_LISTENING_ON_PORT\",\"value\":\"false\"},{\"name\":\"ROX_QUAY_ROBOT_ACCOUNTS\",\"value\":\"true\"},{\"name\":\"ROX_ROXCTL_NETPOL_GENERATE\",\"value\":\"true\"},{\"name\":\"ROX_SOURCED_AUTOGENERATED_INTEGRATIONS\",\"value\":\"false\"},{\"name\":\"ROX_SYSLOG_EXTRA_FIELDS\",\"value\":\"true\"},{\"name\":\"ROX_SYSTEM_HEALTH_PF\",\"value\":\"false\"},{\"name\":\"ROX_VULN_MGMT_WORKLOAD_CVES\",\"value\":\"false\"}],\"image\":\"registry.redhat.io/advanced-cluster-security/rhacs-scanner-slim-rhel8:4.7.0\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"node-inventory\",\"ports\":[{\"containerPort\":8444,\"name\":\"grpc\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/host\",\"name\":\"host-root-ro\",\"readOnly\":true},{\"mountPath\":\"/tmp/\",\"name\":\"tmp-volume\"},{\"mountPath\":\"/cache\",\"name\":\"cache-volume\"}]}]}}}}'", "kubectl -n stackrox get deployment scanner-v4-indexer scanner-v4-matcher scanner-v4-db 1", "kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1", "kubectl -n stackrox set env deployment/sensor ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1", "kubectl -n stackrox set env daemonset/collector ROX_NODE_INDEX_ENABLED=true ROX_SCANNER_V4=true 1", "Scanned index report and found <number> components for node <node_name>.", "kubectl -n stackrox get deployment scanner scanner-db 1", "kubectl -n stackrox set env deployment/central ROX_NODE_INDEX_ENABLED=false 1", "kubectl -n stackrox set env daemonset/collector ROX_CALL_NODE_INVENTORY_ENABLED=true 1", "Scanned node inventory <node_name> (id: <node_id>) with <number> components.", "roxctl image sbom --image=image-name 1" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/operating/managing-vulnerabilities
Ansible Automation Platform 1.2 to 2 Migration Guide
Ansible Automation Platform 1.2 to 2 Migration Guide Red Hat Ansible Automation Platform 2.4 Anshul Behl Roger Lopez [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/ansible_automation_platform_1.2_to_2_migration_guide/index
function::u32_arg
function::u32_arg Name function::u32_arg - Return function argument as unsigned 32-bit value Synopsis Arguments n index of argument to return Description Return the unsigned 32-bit value of argument n, same as uint_arg.
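As an illustrative sketch only (the probed kernel function is a placeholder that depends on your kernel version), u32_arg can be used in a dwarfless kprobe handler to print an unsigned 32-bit argument such as a file descriptor:
# Print the first argument (an unsigned 32-bit file descriptor) of each call;
# asmlinkage() selects the calling convention before the *_arg helpers are used
stap -e 'probe kprobe.function("ksys_write") { asmlinkage(); printf("fd=%d\n", u32_arg(1)) }'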
[ "u32_arg:long(n:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-u32-arg
Chapter 7. NVIDIA GPU architecture overview
Chapter 7. NVIDIA GPU architecture overview NVIDIA supports the use of graphics processing unit (GPU) resources on OpenShift Container Platform. OpenShift Container Platform is a security-focused and hardened Kubernetes platform developed and supported by Red Hat for deploying and managing Kubernetes clusters at scale. OpenShift Container Platform includes enhancements to Kubernetes so that users can easily configure and use NVIDIA GPU resources to accelerate workloads. The NVIDIA GPU Operator leverages the Operator framework within OpenShift Container Platform to manage the full lifecycle of NVIDIA software components required to run GPU-accelerated workloads. These components include the NVIDIA drivers (to enable CUDA), the Kubernetes device plugin for GPUs, the NVIDIA Container Toolkit, automatic node tagging using GPU feature discovery (GFD), DCGM-based monitoring, and others. Note The NVIDIA GPU Operator is only supported by NVIDIA. For more information about obtaining support from NVIDIA, see Obtaining Support from NVIDIA . 7.1. NVIDIA GPU prerequisites A working OpenShift cluster with at least one GPU worker node. Access to the OpenShift cluster as a cluster-admin to perform the required steps. OpenShift CLI ( oc ) is installed. The node feature discovery (NFD) Operator is installed and a nodefeaturediscovery instance is created. 7.2. NVIDIA GPU enablement The following diagram shows how the GPU architecture is enabled for OpenShift: Figure 7.1. NVIDIA GPU enablement Note MIG is only supported with A30, A100, A100X, A800, AX800, H100, and H800. 7.2.1. GPUs and bare metal You can deploy OpenShift Container Platform on an NVIDIA-certified bare metal server but with some limitations: Control plane nodes can be CPU nodes. Worker nodes must be GPU nodes, provided that AI/ML workloads are executed on these worker nodes. In addition, the worker nodes can host one or more GPUs, but they must be of the same type. For example, a node can have two NVIDIA A100 GPUs, but a node with one A100 GPU and one T4 GPU is not supported. The NVIDIA Device Plugin for Kubernetes does not support mixing different GPU models on the same node. When using OpenShift, note that either one server or three or more servers are required. Clusters with two servers are not supported. The single server deployment is called single-node OpenShift (SNO), and using this configuration results in a non-high availability OpenShift environment. You can choose one of the following methods to access the containerized GPUs: GPU passthrough Multi-Instance GPU (MIG) Additional resources Red Hat OpenShift on Bare Metal Stack 7.2.2. GPUs and virtualization Many developers and enterprises are moving to containerized applications and serverless infrastructures, but there is still a lot of interest in developing and maintaining applications that run on virtual machines (VMs). Red Hat OpenShift Virtualization provides this capability, enabling enterprises to incorporate VMs into containerized workflows within clusters. You can choose one of the following methods to connect the worker nodes to the GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time-slicing, when GPU compute capacity is not saturated by workloads. Additional resources NVIDIA GPU Operator with OpenShift Virtualization 7.2.3. GPUs and vSphere You can deploy OpenShift Container Platform on an NVIDIA-certified VMware vSphere server that can host different GPU types. 
An NVIDIA GPU driver must be installed in the hypervisor if vGPU instances are used by the VMs. For VMware vSphere, this host driver is provided in the form of a VIB file. The maximum number of vGPUs that can be allocated to worker node VMs depends on the version of vSphere: vSphere 7.0: maximum 4 vGPUs per VM vSphere 8.0: maximum 8 vGPUs per VM Note vSphere 8.0 introduced support for multiple full or fractional heterogeneous profiles associated with a VM. You can choose one of the following methods to attach the worker nodes to the GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing, when not all of the GPU is needed Similar to bare metal deployments, one or three or more servers are required. Clusters with two servers are not supported. Additional resources OpenShift Container Platform on VMware vSphere with NVIDIA vGPUs 7.2.4. GPUs and Red Hat KVM You can use OpenShift Container Platform on an NVIDIA-certified kernel-based virtual machine (KVM) server. Similar to bare-metal deployments, one or three or more servers are required. Clusters with two servers are not supported. However, unlike bare-metal deployments, you can use different types of GPUs in the server. This is because you can assign these GPUs to different VMs that act as Kubernetes nodes. The only limitation is that a Kubernetes node must have the same set of GPU types at its own level. You can choose one of the following methods to access the containerized GPUs: GPU passthrough for accessing and using GPU hardware within a virtual machine (VM) GPU (vGPU) time-slicing when not all of the GPU is needed To enable the vGPU capability, a special driver must be installed at the host level. This driver is delivered as an RPM package. This host driver is not required at all for GPU passthrough allocation. Additional resources How To Deploy OpenShift Container Platform 4.13 on KVM 7.2.5. GPUs and CSPs You can deploy OpenShift Container Platform to one of the major cloud service providers (CSPs): Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure. Two modes of operation are available: a fully managed deployment and a self-managed deployment. In a fully managed deployment, everything is automated by Red Hat in collaboration with the CSP. You can request an OpenShift instance through the CSP web console, and the cluster is automatically created and fully managed by Red Hat. You do not have to worry about node failures or errors in the environment. Red Hat is fully responsible for maintaining the uptime of the cluster. The fully managed services are available on AWS and Azure. For AWS, the OpenShift service is called ROSA (Red Hat OpenShift Service on AWS). For Azure, the service is called Azure Red Hat OpenShift. In a self-managed deployment, you are responsible for instantiating and maintaining the OpenShift cluster. Red Hat provides the openshift-install utility to support the deployment of the OpenShift cluster in this case. The self-managed services are available globally to all CSPs. It is important that this compute instance is a GPU-accelerated compute instance and that the GPU type matches the list of supported GPUs from NVIDIA AI Enterprise. For example, T4, V100, and A100 are part of this list. You can choose one of the following methods to access the containerized GPUs: GPU passthrough to access and use GPU hardware within a virtual machine (VM). GPU (vGPU) time slicing when the entire GPU is not required. 
Additional resources Red Hat Openshift in the Cloud 7.2.6. GPUs and Red Hat Device Edge Red Hat Device Edge provides access to MicroShift. MicroShift provides the simplicity of a single-node deployment with the functionality and services you need for resource-constrained (edge) computing. Red Hat Device Edge meets the needs of bare-metal, virtual, containerized, or Kubernetes workloads deployed in resource-constrained environments. You can enable NVIDIA GPUs on containers in a Red Hat Device Edge environment. You use GPU passthrough to access the containerized GPUs. Additional resources How to accelerate workloads with NVIDIA GPUs on Red Hat Device Edge 7.3. GPU sharing methods Red Hat and NVIDIA have developed GPU concurrency and sharing mechanisms to simplify GPU-accelerated computing on an enterprise-level OpenShift Container Platform cluster. Applications typically have different compute requirements that can leave GPUs underutilized. Providing the right amount of compute resources for each workload is critical to reduce deployment cost and maximize GPU utilization. Concurrency mechanisms for improving GPU utilization exist that range from programming model APIs to system software and hardware partitioning, including virtualization. The following list shows the GPU concurrency mechanisms: Compute Unified Device Architecture (CUDA) streams Time-slicing CUDA Multi-Process Service (MPS) Multi-instance GPU (MIG) Virtualization with vGPU Consider the following GPU sharing suggestions when using the GPU concurrency mechanisms for different OpenShift Container Platform scenarios: Bare metal vGPU is not available. Consider using MIG-enabled cards. VMs vGPU is the best choice. Older NVIDIA cards with no MIG on bare metal Consider using time-slicing. VMs with multiple GPUs and you want passthrough and vGPU Consider using separate VMs. Bare metal with OpenShift Virtualization and multiple GPUs Consider using pass-through for hosted VMs and time-slicing for containers. Additional resources Improving GPU Utilization 7.3.1. CUDA streams Compute Unified Device Architecture (CUDA) is a parallel computing platform and programming model developed by NVIDIA for general computing on GPUs. A stream is a sequence of operations that executes in issue-order on the GPU. CUDA commands are typically executed sequentially in a default stream and a task does not start until a preceding task has completed. Asynchronous processing of operations across different streams allows for parallel execution of tasks. A task issued in one stream runs before, during, or after another task is issued into another stream. This allows the GPU to run multiple tasks simultaneously in no prescribed order, leading to improved performance. Additional resources Asynchronous Concurrent Execution 7.3.2. Time-slicing GPU time-slicing interleaves workloads scheduled on overloaded GPUs when you are running multiple CUDA applications. You can enable time-slicing of GPUs on Kubernetes by defining a set of replicas for a GPU, each of which can be independently distributed to a pod to run workloads on. Unlike multi-instance GPU (MIG), there is no memory or fault isolation between replicas, but for some workloads this is better than not sharing at all. Internally, GPU time-slicing is used to multiplex workloads from replicas of the same underlying GPU. You can apply a cluster-wide default configuration for time-slicing. You can also apply node-specific configurations. 
For example, you can apply a time-slicing configuration only to nodes with Tesla T4 GPUs and not modify nodes with other GPU models. You can combine these two approaches by applying a cluster-wide default configuration and then labeling nodes to give those nodes a node-specific configuration. 7.3.3. CUDA Multi-Process Service CUDA Multi-Process Service (MPS) allows a single GPU to use multiple CUDA processes. The processes run in parallel on the GPU, eliminating saturation of the GPU compute resources. MPS also enables concurrent execution, or overlapping, of kernel operations and memory copying from different processes to enhance utilization. Additional resources CUDA MPS 7.3.4. Multi-instance GPU Using Multi-instance GPU (MIG), you can split GPU compute units and memory into multiple MIG instances. Each of these instances represents a standalone GPU device from a system perspective and can be connected to any application, container, or virtual machine running on the node. The software that uses the GPU treats each of these MIG instances as an individual GPU. MIG is useful when you have an application that does not require the full power of an entire GPU. The MIG feature of the new NVIDIA Ampere architecture enables you to split your hardware resources into multiple GPU instances, each of which is available to the operating system as an independent CUDA-enabled GPU. NVIDIA GPU Operator version 1.7.0 and higher provides MIG support for the A100 and A30 Ampere cards. These GPU instances are designed to support up to seven independent CUDA applications so that they operate in complete isolation with dedicated hardware resources. Additional resources NVIDIA Multi-Instance GPU User Guide 7.3.5. Virtualization with vGPU Virtual machines (VMs) can directly access a single physical GPU using NVIDIA vGPU. You can create virtual GPUs that can be shared by VMs across the enterprise and accessed by other devices. This capability combines the power of GPU performance with the management and security benefits provided by vGPU. Additional benefits provided by vGPU include proactive management and monitoring for your VM environment, workload balancing for mixed VDI and compute workloads, and resource sharing across multiple VMs. Additional resources Virtual GPUs 7.4. NVIDIA GPU features for OpenShift Container Platform NVIDIA Container Toolkit NVIDIA Container Toolkit enables you to create and run GPU-accelerated containers. The toolkit includes a container runtime library and utilities to automatically configure containers to use NVIDIA GPUs. NVIDIA AI Enterprise NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics software optimized, certified, and supported with NVIDIA-Certified systems. NVIDIA AI Enterprise includes support for Red Hat OpenShift Container Platform. The following installation methods are supported: OpenShift Container Platform on bare metal or VMware vSphere with GPU Passthrough. OpenShift Container Platform on VMware vSphere with NVIDIA vGPU. GPU Feature Discovery NVIDIA GPU Feature Discovery for Kubernetes is a software component that enables you to automatically generate labels for the GPUs available on a node. GPU Feature Discovery uses node feature discovery (NFD) to perform this labeling. The Node Feature Discovery Operator (NFD) manages the discovery of hardware features and configurations in an OpenShift Container Platform cluster by labeling nodes with hardware-specific information. 
NFD labels the host with node-specific attributes, such as PCI cards, kernel, OS version, and so on. You can find the NFD Operator in the Operator Hub by searching for "Node Feature Discovery". NVIDIA GPU Operator with OpenShift Virtualization Up until this point, the GPU Operator only provisioned worker nodes to run GPU-accelerated containers. Now, the GPU Operator can also be used to provision worker nodes for running GPU-accelerated virtual machines (VMs). You can configure the GPU Operator to deploy different software components to worker nodes depending on which GPU workload is configured to run on those nodes. GPU Monitoring dashboard You can install a monitoring dashboard to display GPU usage information on the cluster Observe page in the OpenShift Container Platform web console. GPU utilization information includes the number of available GPUs, power consumption (in watts), temperature (in degrees Celsius), utilization (in percent), and other metrics for each GPU. Additional resources NVIDIA-Certified Systems NVIDIA AI Enterprise NVIDIA Container Toolkit Enabling the GPU Monitoring Dashboard MIG Support in OpenShift Container Platform Time-slicing NVIDIA GPUs in OpenShift Deploy GPU Operators in a disconnected or airgapped environment Node Feature Discovery Operator
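As a hedged follow-up to the GPU Feature Discovery and NFD labeling described above, the following sketch shows one way to confirm the labels from the command line. The label key nvidia.com/gpu.present and the node name are assumptions that depend on your GPU Operator and GFD configuration:
# List nodes that GPU Feature Discovery has labeled as having a GPU
oc get nodes -l nvidia.com/gpu.present=true
# Inspect the NVIDIA-specific labels (GPU product, count, memory, and so on) on one node
oc describe node <gpu_node_name> | grep 'nvidia.com/'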
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/architecture/nvidia-gpu-architecture-overview
Chapter 6. The Resilient Storage add-on
Chapter 6. The Resilient Storage add-on The Resilient Storage add-on enables a shared storage or clustered file system to access the same storage device over a network through a pool of data that is available to each server in the group. The add-on is available with a separate subscription. For details, see the Support Policies for RHEL Resilient Storage - Subscriptions, Support Services, and Software Access . The following table lists all the packages available with the Resilient Storage add-on along with their license. Package License awscli ASL 2.0 and MIT booth GPLv2+ booth-arbitrator GPLv2+ booth-core GPLv2+ booth-site GPLv2+ booth-test GPLv2+ clufter-bin GPLv2+ clufter-cli GPLv2+ clufter-common GPLv2+ clufter-lib-ccs GPLv2+ clufter-lib-general GPLv2+ clufter-lib-pcs GPLv2+ cmirror GPLv2 corosync BSD corosync-qdevice BSD corosync-qnetd BSD corosynclib-devel BSD dlm GPLv2 and GPLv2+ and LGPLv2+ fence-agents-aliyun GPLv2+ and LGPLv2+ and ASL 2.0 and BSD and MIT fence-agents-aws GPLv2+ and LGPLv2+ fence-agents-azure-arm GPLv2+ and LGPLv2+ fence-agents-gce GPLv2+ and LGPLv2+ and MIT fence-agents-openstack GPLv2+ and LGPLv2+ and ASL 2.0 and MIT and Python libknet1 LGPLv2+ libknet1-compress-bzip2-plugin LGPLv2+ libknet1-compress-lz4-plugin LGPLv2+ libknet1-compress-lzma-plugin LGPLv2+ libknet1-compress-lzo2-plugin LGPLv2+ libknet1-compress-plugins-all LGPLv2+ libknet1-compress-zlib-plugin LGPLv2+ libknet1-crypto-nss-plugin LGPLv2+ libknet1-crypto-openssl-plugin LGPLv2+ libknet1-crypto-plugins-all LGPLv2+ libknet1-plugins-all LGPLv2+ libnozzle1 LGPLv2+ pacemaker GPL-2.0-or-later AND LGPL-2.1-or-later pacemaker-cli GPL-2.0-or-later AND LGPL-2.1-or-later pacemaker-cts GPL-2.0-or-later AND LGPL-2.1-or-later pacemaker-doc CC-BY-SA-4.0 pacemaker-libs-devel GPL-2.0-or-later AND LGPL-2.1-or-later pacemaker-nagios-plugins-metadata GPLv3 pacemaker-remote GPL-2.0-or-later AND LGPL-2.1-or-later pcs GPL-2.0-only AND Apache-2.0 AND MIT AND BSD-3-Clause AND (Apache-2.0 OR BSD-3-Clause) AND (BSD-2-Clause OR Ruby) AND (BSD-2-Clause OR GPL-2.0-or-later) AND (GPL-2.0-only or Ruby) pcs-snmp GPL-2.0-only and BSD-2-Clause python3-azure-sdk MIT and ASL 2.0 and MPLv2.0 and BSD and Python python3-boto3 ASL 2.0 python3-botocore ASL 2.0 python3-clufter GPLv2+ and GFDL python3-fasteners ASL 2.0 python3-gflags BSD python3-google-api-client ASL 2.0 python3-httplib2 MIT python3-oauth2client ASL 2.0 python3-pacemaker LGPL-2.1-or-later python3-s3transfer ASL 2.0 python3-uritemplate BSD resource-agents GPLv2+ and LGPLv2+ resource-agents-aliyun GPLv2+ and LGPLv2+ and ASL 2.0 and BSD and MIT resource-agents-gcp GPLv2+ and LGPLv2+ and BSD and ASL 2.0 and MIT and Python resource-agents-paf PostgreSQL spausedd BSD
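For context, the following sketch shows one way to make these packages available on a registered RHEL 8 system and install one of them. The repository ID is an assumption that can differ by architecture and release; verify it with subscription-manager repos --list:
# Enable the Resilient Storage add-on repository (example ID for x86_64)
subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms
# Install a package from the table, for example the pcs cluster configuration tool
yum install pcs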
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/package_manifest/resilient-storage-addon
Chapter 23. OpenShiftControllerManager [operator.openshift.io/v1]
Chapter 23. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 23.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 23.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 23.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 23.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 23.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. 
Type object Property Type Description lastTransitionTime string message string reason string status string type string 23.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 23.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 23.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftcontrollermanagers DELETE : delete collection of OpenShiftControllerManager GET : list objects of kind OpenShiftControllerManager POST : create an OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} DELETE : delete an OpenShiftControllerManager GET : read the specified OpenShiftControllerManager PATCH : partially update the specified OpenShiftControllerManager PUT : replace the specified OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status GET : read status of the specified OpenShiftControllerManager PATCH : partially update status of the specified OpenShiftControllerManager PUT : replace status of the specified OpenShiftControllerManager 23.2.1. /apis/operator.openshift.io/v1/openshiftcontrollermanagers Table 23.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OpenShiftControllerManager Table 23.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 23.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftControllerManager Table 23.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 23.5. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftControllerManager Table 23.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.7. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 23.8. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 202 - Accepted OpenShiftControllerManager schema 401 - Unauthorized Empty 23.2.2. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} Table 23.9. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager Table 23.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OpenShiftControllerManager Table 23.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 23.12. Body parameters Parameter Type Description body DeleteOptions schema Table 23.13. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftControllerManager Table 23.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 23.15. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftControllerManager Table 23.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 23.17. Body parameters Parameter Type Description body Patch schema Table 23.18. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftControllerManager Table 23.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.20. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 23.21. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty 23.2.3. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status Table 23.22. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager Table 23.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified OpenShiftControllerManager Table 23.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 23.25. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftControllerManager Table 23.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 23.27. Body parameters Parameter Type Description body Patch schema Table 23.28. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftControllerManager Table 23.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.30. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 23.31. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty
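For quick reference, the endpoints documented above can also be exercised with the oc client. The following sketch assumes a cluster-scoped singleton instance named cluster, which is the usual convention for operator configuration resources, and uses logLevel only as an illustrative spec field; adjust the resource name and field for your cluster:
# Read the resource and its status subresource
oc get openshiftcontrollermanager cluster -o yaml
oc get --raw /apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster/status
# Partially update the spec with a JSON merge patch (PATCH on the {name} endpoint)
oc patch openshiftcontrollermanager cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'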
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operator_apis/openshiftcontrollermanager-operator-openshift-io-v1
8.2. Memory Tuning on Virtual Machines
8.2. Memory Tuning on Virtual Machines 8.2.1. Memory Monitoring Tools Memory usage can be monitored in virtual machines using the same tools that are used in bare metal environments. Tools useful for monitoring memory usage and diagnosing memory-related problems include: top vmstat numastat /proc/ Note For details on using these performance tools, see the Red Hat Enterprise Linux 7 Performance Tuning Guide and the man pages for these commands. 8.2.2. Memory Tuning with virsh The optional <memtune> element in the guest XML configuration allows administrators to configure guest virtual machine memory settings manually. If <memtune> is omitted, the VM uses memory based on how it was allocated and assigned during VM creation. Display or set memory parameters in the <memtune> element in a virtual machine with the virsh memtune command, replacing values according to your environment: Optional parameters include: hard_limit The maximum memory the virtual machine can use, in kibibytes (blocks of 1024 bytes). Warning Setting this limit too low can result in the virtual machine being killed by the kernel. soft_limit The memory limit to enforce during memory contention, in kibibytes (blocks of 1024 bytes). swap_hard_limit The maximum memory plus swap the virtual machine can use, in kibibytes (blocks of 1024 bytes). The swap_hard_limit value must be greater than the hard_limit value. min_guarantee The guaranteed minimum memory allocation for the virtual machine, in kibibytes (blocks of 1024 bytes). Note See # virsh help memtune for more information on using the virsh memtune command. The optional <memoryBacking> element may contain several elements that influence how virtual memory pages are backed by host pages. Setting locked prevents the host from swapping out memory pages belonging to the guest. Add the following to the guest XML to lock the virtual memory pages in the host's memory: Important When setting locked , a hard_limit must be set in the <memtune> element to the maximum memory configured for the guest, plus any memory consumed by the process itself. Setting nosharepages prevents the host from merging the same memory used among guests. To instruct the hypervisor to disable shared pages for a guest, add the following to the guest's XML: 8.2.3. Huge Pages and Transparent Huge Pages AMD64 and Intel 64 CPUs usually address memory in 4kB pages, but they are capable of using larger 2MB or 1GB pages known as huge pages . KVM guests can be deployed with huge page memory support in order to improve performance by increasing CPU cache hits against the Translation Lookaside Buffer (TLB). A kernel feature enabled by default in Red Hat Enterprise Linux 7, huge pages can significantly increase performance, particularly for large memory and memory-intensive workloads. Red Hat Enterprise Linux 7 is able to manage large amounts of memory more effectively by increasing the page size through the use of huge pages. To increase the effectiveness and convenience of managing huge pages, Red Hat Enterprise Linux 7 uses Transparent Huge Pages (THP) by default. For more information on huge pages and THP, see the Performance Tuning Guide . Red Hat Enterprise Linux 7 systems support 2MB and 1GB huge pages, which can be allocated at boot or at runtime. See Section 8.2.3.3, "Enabling 1 GB huge pages for guests at boot or runtime" for instructions on enabling multiple huge page sizes. 8.2.3.1.
Configuring Transparent Huge Pages Transparent huge pages (THP) are an abstraction layer that automates most aspects of creating, managing, and using huge pages. By default, they automatically optimize system settings for performance. Note Using KSM can reduce the occurrence of transparent huge pages, so it is recommended to disable KSM before enabling THP. For more information, see Section 8.3.4, "Deactivating KSM" . Transparent huge pages are enabled by default. To check the current status, run: To enable transparent huge pages to be used by default, run: This will set /sys/kernel/mm/transparent_hugepage/enabled to always . To disable transparent huge pages: Transparent Huge Page support does not prevent the use of static huge pages. However, when static huge pages are not used, KVM will use transparent huge pages instead of the regular 4kB page size. 8.2.3.2. Configuring Static Huge Pages In some cases, greater control of huge pages is preferable. To use static huge pages on guests, add the following to the guest XML configuration using virsh edit : This instructs the host to allocate memory to the guest using huge pages, instead of using the default page size. View the current huge pages value by running the following command: Procedure 8.1. Setting huge pages The following example procedure shows the commands to set huge pages. View the current huge pages value: Huge pages are set in increments of 2MB. To set the number of huge pages to 25000, use the following command: Note To make the setting persistent, add the following lines to the /etc/sysctl.conf file on the guest machine, with X being the intended number of huge pages: Afterwards, add transparent_hugepage=never to the kernel boot parameters by appending it to the end of the /kernel line in the /etc/grub2.cfg file on the guest. Mount the huge pages: Add the following lines to the memoryBacking section in the virtual machine's XML configuration: Restart libvirtd : Start the VM: Restart the VM if it is already running: Verify the changes in /proc/meminfo : Huge pages can benefit not only the host but also guests, however, their total huge pages value must be less than what is available in the host. 8.2.3.3. Enabling 1 GB huge pages for guests at boot or runtime Red Hat Enterprise Linux 7 systems support 2MB and 1GB huge pages, which can be allocated at boot or at runtime. Procedure 8.2. Allocating 1GB huge pages at boot time To allocate different sizes of huge pages at boot time, use the following command, specifying the number of huge pages. This example allocates four 1GB huge pages and 1024 2MB huge pages: Change this command line to specify a different number of huge pages to be allocated at boot. Note The two steps must also be completed the first time you allocate 1GB huge pages at boot time. Mount the 2MB and 1GB huge pages on the host: Add the following lines to the memoryBacking section in the virtual machine's XML configuration: Restart libvirtd to enable the use of 1GB huge pages on guests: Procedure 8.3. Allocating 1GB huge pages at runtime 1GB huge pages can also be allocated at runtime. Runtime allocation allows the system administrator to choose which NUMA node to allocate those pages from. However, runtime page allocation can be more prone to allocation failure than boot time allocation due to memory fragmentation. 
To allocate different sizes of huge pages at runtime, use the following command, replacing values for the number of huge pages, the NUMA node to allocate them from, and the huge page size: This example command allocates four 1GB huge pages from node1 and 1024 2MB huge pages from node3 . These huge page settings can be changed at any time with the above command, depending on the amount of free memory on the host system. Note The following two steps must also be completed the first time you allocate 1GB huge pages at runtime. Mount the 2MB and 1GB huge pages on the host: Add the following lines to the memoryBacking section in the virtual machine's XML configuration: Restart libvirtd to enable the use of 1GB huge pages on guests:
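As a concrete illustration of the virsh memtune syntax described in Section 8.2.2, the following commands apply and then display memory limits for a hypothetical guest named rhel7-guest; the sizes are example values in kibibytes, not recommendations:
# Set hard, soft, and swap limits (4 GiB, 3 GiB, and 8 GiB respectively, in kibibytes)
virsh memtune rhel7-guest --hard-limit 4194304 --soft-limit 3145728 --swap-hard-limit 8388608
# Display the current memory tuning parameters for the guest
virsh memtune rhel7-guest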
[ "virsh memtune virtual_machine --parameter size", "<memoryBacking> <locked/> </memoryBacking>", "<memoryBacking> <nosharepages/> </memoryBacking>", "cat /sys/kernel/mm/transparent_hugepage/enabled", "echo always > /sys/kernel/mm/transparent_hugepage/enabled", "echo never > /sys/kernel/mm/transparent_hugepage/enabled", "<memoryBacking> <hugepages/> </memoryBacking>", "cat /proc/sys/vm/nr_hugepages", "cat /proc/meminfo | grep Huge AnonHugePages: 2048 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB", "echo 25000 > /proc/sys/vm/nr_hugepages", "echo 'vm.nr_hugepages = X' >> /etc/sysctl.conf sysctl -p", "mount -t hugetlbfs hugetlbfs /dev/hugepages", "<hugepages> <page size='1' unit='GiB'/> </hugepages>", "systemctl restart libvirtd", "virsh start virtual_machine", "virsh reset virtual_machine", "cat /proc/meminfo | grep Huge AnonHugePages: 0 kB HugePages_Total: 25000 HugePages_Free: 23425 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB", "'default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024'", "mkdir /dev/hugepages1G mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G mkdir /dev/hugepages2M mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M", "<hugepages> <page size='1' unit='GiB'/> </hugepages>", "systemctl restart libvirtd", "echo 4 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages", "mkdir /dev/hugepages1G mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G mkdir /dev/hugepages2M mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M", "<hugepages> <page size='1' unit='GiB'/> </hugepages>", "systemctl restart libvirtd" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-Memory-Tuning
21.5. Guest Virtual Machine Fails to Shutdown
21.5. Guest Virtual Machine Fails to Shutdown Traditionally, executing a virsh shutdown command causes a power button ACPI event to be sent, thus mimicking the action of someone pressing a power button on a physical machine. On every physical machine, it is up to the OS to handle this event. In the past, operating systems would simply shut down silently. Today, the most common action is to display a dialog box asking what should be done. Some operating systems even ignore this event completely, especially when no users are logged in. When such operating systems are installed on a guest virtual machine, running virsh shutdown does not work as expected (the event is either ignored or a dialog is shown on a virtual display). However, if a qemu-guest-agent channel is added to a guest virtual machine and this agent is running inside the guest virtual machine's OS, the virsh shutdown command will ask the agent to shut down the guest OS instead of sending the ACPI event. The agent calls for a shutdown from inside the guest virtual machine OS and everything works as expected. Procedure 21.2. Configuring the guest agent channel in a guest virtual machine Stop the guest virtual machine. Open the Domain XML for the guest virtual machine and add the following snippet: <channel type='unix'> <source mode='bind'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel> Figure 21.1. Configuring the guest agent channel Start the guest virtual machine by running virsh start [domain] . Install qemu-guest-agent on the guest virtual machine ( yum install qemu-guest-agent ) and make it run automatically at every boot as a service (qemu-guest-agent.service). Refer to Chapter 10, QEMU-img and QEMU Guest Agent for more information.
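Once the channel is configured and the agent is running, the shutdown request can be directed explicitly at the agent. The following is a sketch for a hypothetical guest named guest_name; the --mode agent option and the guest-ping check depend on the libvirt version installed on the host:
# Confirm that the guest agent is responding
virsh qemu-agent-command guest_name '{"execute":"guest-ping"}'
# Shut down the guest through the agent rather than through an ACPI event
virsh shutdown --mode agent guest_name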
[ "<channel type='unix'> <source mode='bind'/> <target type='virtio' name='org.qemu.guest_agent.0'/> </channel>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-qemu-agent-vish-shutdown
3.3.2. Converting a remote KVM virtual machine
3.3.2. Converting a remote KVM virtual machine KVM virtual machines can be converted remotely using SSH. Ensure that the host running the virtual machine is accessible using SSH. To convert the virtual machine, run: Where vmhost.example.com is the host running the virtual machine, pool is the local storage pool to hold the image, bridge_name is the name of a local network bridge to connect the converted virtual machine's network to, and guest_name is the name of the KVM virtual machine. You may also use the --network parameter to connect to a locally managed network if your virtual machine only has a single network interface. If your virtual machine has multiple network interfaces, edit /etc/virt-v2v.conf to specify the network mapping for all interfaces. If your virtual machine is Red Hat Enterprise Linux 4 or Red Hat Enterprise Linux 5 and uses a kernel which does not support the KVM VirtIO drivers, virt-v2v will attempt to install a new kernel during the conversion process. You can avoid this requirement by updating the kernel to a recent version of Red Hat Enterprise Linux 6 which supports VirtIO prior to conversion. Note When converting from KVM, virt-v2v requires that the image of the source virtual machine exists within a storage pool. If the image is not currently in a storage pool, you must create one.
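For illustration, a complete invocation of the command shown above with hypothetical values might look like the following, converting a guest named rhel5vm from the KVM host kvmhost.example.com into the local storage pool default and attaching its interface to the local bridge br0; the host name, pool, bridge, and guest name are examples only:
# Convert a remote KVM guest over SSH into the local "default" pool
virt-v2v -ic qemu+ssh://[email protected]/system -op default --bridge br0 rhel5vm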
[ "virt-v2v -ic qemu+ssh://[email protected]/system -op pool --bridge bridge_name guest_name" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/sub-sect-convert-remote-kvm-virtual-machine
5.320. system-config-date-docs
5.320. system-config-date-docs 5.320.1. RHBA-2012:0934 - system-config-date-docs bug fix update Updated system-config-date-docs packages that fix one bug are now available for Red Hat Enterprise Linux 6. The system-config-date-docs packages contain the online documentation for system-config-date, with which you can configure date, time and the use of time servers on your system. Bug Fix BZ# 691572 Prior to this update, the help documentation contained out-of-date screenshots and the text did not correctly reflect the user interface elements. This update contains updated screenshots and documents the user interface correctly. All users are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/system-config-date-docs
Appendix A. Reference Material
Appendix A. Reference Material A.1. Datasource Statistics Table A.1. Core Pool Statistics Name Description ActiveCount The number of active connections. Each of the connections is either in use by an application or available in the pool. AvailableCount The number of available connections in the pool. AverageBlockingTime The average time spent blocking on obtaining an exclusive lock on the pool. This value is in milliseconds. AverageCreationTime The average time spent creating a connection. This value is in milliseconds. AverageGetTime The average time spent obtaining a connection. This value is in milliseconds. AveragePoolTime The average time that a connection spent in the pool.This value is in milliseconds. AverageUsageTime The average time spent using a connection. This value is in milliseconds. BlockingFailureCount The number of failures trying to obtain a connection. CreatedCount The number of connections created. DestroyedCount The number of connections destroyed. IdleCount The number of connections that are currently idle. InUseCount The number of connections currently in use. MaxCreationTime The maximum time it took to create a connection. This value is in milliseconds. MaxGetTime The maximum time for obtaining a connection. This value is in milliseconds. MaxPoolTime The maximum time for a connection in the pool. This value is in milliseconds. MaxUsageTime The maximum time using a connection. This value is in milliseconds. MaxUsedCount The maximum number of connections used. MaxWaitCount The maximum number of requests waiting for a connection at the same time. MaxWaitTime The maximum time spent waiting for an exclusive lock on the pool. This value is in milliseconds. TimedOut The number of timed out connections. TotalBlockingTime The total time spent waiting for an exclusive lock on the pool. This value is in milliseconds. TotalCreationTime The total time spent creating connections. This value is in milliseconds. TotalGetTime The total time spent obtaining connections. This value is in milliseconds. TotalPoolTime The total time spent by connections in the pool. This value is in milliseconds. TotalUsageTime The total time spent using connections. This value is in milliseconds. WaitCount The number of requests that had to wait to obtain a connection. XACommitAverageTime The average time for an XAResource commit invocation. This value is in milliseconds. XACommitCount The number of XAResource commit invocations. XACommitMaxTime The maximum time for an XAResource commit invocation. This value is in milliseconds. XACommitTotalTime The total time for all XAResource commit invocations. This value is in milliseconds. XAEndAverageTime The average time for an XAResource end invocation. This value is in milliseconds. XAEndCount The number of XAResource end invocations. XAEndMaxTime The maximum time for an XAResource end invocation. This value is in milliseconds. XAEndTotalTime The total time for all XAResource end invocations. This value is in milliseconds. XAForgetAverageTime The average time for an XAResource forget invocation. This value is in milliseconds. XAForgetCount The number of XAResource forget invocations. XAForgetMaxTime The maximum time for an XAResource forget invocation. This value is in milliseconds. XAForgetTotalTime The total time for all XAResource forget invocations. This value is in milliseconds. XAPrepareAverageTime The average time for an XAResource prepare invocation. This value is in milliseconds. XAPrepareCount The number of XAResource prepare invocations. 
XAPrepareMaxTime The maximum time for an XAResource prepare invocation. This value is in milliseconds. XAPrepareTotalTime The total time for all XAResource prepare invocations. This value is in milliseconds. XARecoverAverageTime The average time for an XAResource recover invocation. This value is in milliseconds. XARecoverCount The number of XAResource recover invocations. XARecoverMaxTime The maximum time for an XAResource recover invocation. This value is in milliseconds. XARecoverTotalTime The total time for all XAResource recover invocations. This value is in milliseconds. XARollbackAverageTime The average time for an XAResource rollback invocation. This value is in milliseconds. XARollbackCount The number of XAResource rollback invocations. XARollbackMaxTime The maximum time for an XAResource rollback invocation. This value is in milliseconds. XARollbackTotalTime The total time for all XAResource rollback invocations. This value is in milliseconds. XAStartAverageTime The average time for an XAResource start invocation. This value is in milliseconds. XAStartCount The number of XAResource start invocations. XAStartMaxTime The maximum time for an XAResource start invocation. This value is in milliseconds. XAStartTotalTime The total time for all XAResource start invocations. This value is in milliseconds. Table A.2. JDBC Statistics Name Description PreparedStatementCacheAccessCount The number of times that the statement cache was accessed. PreparedStatementCacheAddCount The number of statements added to the statement cache. PreparedStatementCacheCurrentSize The number of prepared and callable statements currently cached in the statement cache. PreparedStatementCacheDeleteCount The number of statements discarded from the cache. PreparedStatementCacheHitCount The number of times that statements from the cache were used. PreparedStatementCacheMissCount The number of times that a statement request could not be satisfied with a statement from the cache. A.2. Resource Adapter Statistics Table A.3. Resource Adapter Statistics Name Description ActiveCount The number of active connections. Each of the connections is either in use by an application or available in the pool AvailableCount The number of available connections in the pool. AverageBlockingTime The average time spent blocking on obtaining an exclusive lock on the pool. The value is in milliseconds. AverageCreationTime The average time spent creating a connection. The value is in milliseconds. CreatedCount The number of connections created. DestroyedCount The number of connections destroyed. InUseCount The number of connections currently in use. MaxCreationTime The maximum time it took to create a connection. The value is in milliseconds. MaxUsedCount The maximum number of connections used. MaxWaitCount The maximum number of requests waiting for a connection at the same time. MaxWaitTime The maximum time spent waiting for an exclusive lock on the pool. TimedOut The number of timed out connections. TotalBlockingTime The total time spent waiting for an exclusive lock on the pool. The value is in milliseconds. TotalCreationTime The total time spent creating connections. The value is in milliseconds. WaitCount The number of requests that had to wait for a connection. A.3. IO Subsystem Attributes Note Attribute names in these tables are listed as they appear in the management model, for example, when using the management CLI. 
See the schema definition file located at EAP_HOME /docs/schema/wildfly-io_3_0.xsd to view the elements as they appear in the XML, as there may be differences from the management model. Table A.4. worker Attributes Attribute Default Description io-threads The number of I/O threads to create for the worker. If not specified, the number of threads is set to the number of CPUs x 2. stack-size 0 The stack size, in bytes, to attempt to use for worker threads. task-keepalive 60000 The number of milliseconds to keep non-core task threads alive. task-core-threads 2 The number of threads for the core task thread pool. task-max-threads The maximum number of threads for the worker task thread pool. If not specified, the maximum number of threads is set to the number of CPUs x 16, taking the MaxFileDescriptorCount Jakarta Management property, if set, into account. Table A.5. buffer-pool Attributes Attribute Default Description Note IO buffer pools are deprecated, but they are still set as the default in the current release. For more information about configuring Undertow byte buffer pools, see the Configuring Byte Buffer Pools section of the Configuration Guide for JBoss EAP. Additionally, see Byte Buffer Pool Attributes in the JBoss EAP Configuration Guide for the byte buffer pool attribute list. buffer-size The size, in bytes, of each buffer slice. If not specified, the size is set based on the available RAM of your system: 512 bytes for less than 64 MB RAM 1024 bytes (1 KB) for 64 MB - 128 MB RAM 16384 bytes (16 KB) for more than 128 MB RAM For performance tuning advice on this attribute, see Configuring Buffer Pools . buffers-per-slice How many slices, or sections, to divide the larger buffer into. This can be more memory efficient than allocating many separate buffers. If not specified, the number of slices is set based on the available RAM of your system: 10 for less than 128 MB RAM 20 for more than 128 MB RAM direct-buffers Whether the buffer pool uses direct buffers, which are faster in many cases with NIO. Note that some platforms do not support direct buffers. Revised on 2024-01-17 05:25:53 UTC
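The datasource and resource adapter statistics listed in this appendix can be read at runtime through the management CLI. The sketch below assumes a standalone server and a datasource named ExampleDS (substitute your own datasource name), and enables statistics collection first because it is disabled by default:
# Run from EAP_HOME/bin against a running server
./jboss-cli.sh --connect --command='/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)'
./jboss-cli.sh --connect --command='/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true)'
./jboss-cli.sh --connect --command='/subsystem=datasources/data-source=ExampleDS/statistics=jdbc:read-resource(include-runtime=true)'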
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/performance_tuning_guide/reference_material
4.192. NetworkManager
4.192. NetworkManager 4.192.1. RHBA-2012:1112 - NetworkManager bug fix update Updated NetworkManager packages that fix a bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. NetworkManager is a system network service that manages network devices and connections, attempting to keep active network connectivity when available. It manages Ethernet, wireless, mobile broadband (WWAN), and PPPoE (Point-to-Point Protocol over Ethernet) devices, and provides VPN integration with a variety of different VPN services. Bug Fix BZ# 822271 When an existing DHCP lease was renewed, NetworkManager did not recognize it as a change in DHCP state and failed to run the dispatcher scripts. Consequently, hostnames were purged from DHCP records. With this update the code has been improved and NetworkManager now handles same-state transitions correctly. Now, hostnames are not purged from the DHCP server when a lease is renewed. Users of NetworkManager are advised to upgrade to these updated packages, which fix this bug. 4.192.2. RHBA-2011:1632 - NetworkManager bug fix and enhancement update Updated NetworkManager packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. NetworkManager is a system network service that manages network devices and connections, attempting to keep active network connectivity when available. It manages Ethernet, wireless, mobile broadband (WWAN), and PPPoE devices, and provides VPN integration with a variety of different VPN services. Bug Fixes BZ# 660666 NetworkManager did not recognize IBM CTC (Channel-to-Channel) devices, which made it impossible to install Red Hat Enterprise Linux on IBM S/390 machines which used CTC devices. NetworkManager now detects these devices properly, with the result that Red Hat Enterprise Linux can be installed on such machines. BZ# 696585 When connecting to a WLAN, pressing the Enter key in NetworkManager's dialog box had no effect and the dialog box remained open. However, the WLAN connection could be established by clicking the Connect button with the mouse. This happened because the Connect button was not defined as default action on confirmation in the code. With this update, the Connect button was marked as default and NetworkManager now launches the WLAN connection under these circumstances. BZ# 696916 Due to a memory access error, the connection profile configured in NetworkManager was not stored if an IPv6 address and an IPv6 gateway were specified. The code has been modified to prevent this issue and connection profiles are now stored correctly. BZ# 706338 Due to a timing issue in the libnm-glib library, NetworkManager produced a D-Bus error when a network driver was unloaded from the kernel. This error message was only for informational purposes and therefore did not need to appear in syslog messages. The message has been suppressed in the libnm-glib code, and the error message no longer occurs in any of the system logs. BZ# 747066 NetworkManager did not specify the initial frequency of an ad hoc wireless network when the frequency was not set by the user. If the network frequency was not set when authenticating with wpa_supplicant using the nl80211 supplicant driver, the connection attempt failed. NetworkManager has been modified to set a frequency that is supported by used network device if it is not specified by the user. Users can now connect to ad hoc wireless networks without problems in the scenario described. 
BZ# 659685 The RHSA-2010-0616 security advisory for the dbus-glib library introduced changes restricting access to D-Bus properties. Therefore, under certain circumstances, NetworkManager failed to display the login banner when a user connected to a VPN. NetworkManager has been modified to respect the dbus-glib limitations, and the login banner is now displayed correctly. BZ# 743555 The implementation of the wpa_supplicant application has recently been changed to use the nl80211 supplicant driver instead of the WEXT wireless extension. The two methods use different approaches to report the level of a wireless network signal. This difference was not reflected in NetworkManager's code, and the signal level was therefore shown incorrectly. NetworkManager has been modified to handle this feature correctly when using nl80211, and the signal level is now displayed correctly. Enhancements BZ# 590096 NetworkManager did not send the system hostname to a DHCP server unless it was explicitly configured with a configuration file. NetworkManager now sends the hostname to the DHCP server by default. BZ# 713283 Roaming in RSA token-enabled enterprise Wi-Fi networks did not work properly, which resulted in the wpa_supplicant component being upgraded to version 0.7.3. This update required new features to be implemented in NetworkManager. NetworkManager now includes the background scanning feature for the wpa_supplicant component and uses the nl80211 supplicant driver when adding a supplicant interface. All users of NetworkManager are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/networkmanager
Chapter 1. Backing up the undercloud node
Chapter 1. Backing up the undercloud node To back up the undercloud node, you configure the backup node, install the Relax-and-Recover tool on the undercloud node, and create the backup image. You can create backups as a part of your regular environment maintenance. In addition, you must back up the undercloud node before performing updates or upgrades. You can use the backups to restore the undercloud node to its previous state if an error occurs during an update or upgrade. 1.1. Supported backup formats and protocols The undercloud and control plane backup and restore process uses the open-source tool Relax-and-Recover (ReaR) to create and restore bootable backup images. ReaR is written in Bash and supports multiple image formats and multiple transport protocols. The following list shows the backup formats and protocols that Red Hat OpenStack Platform supports when you use ReaR to back up and restore the undercloud and control plane. Bootable media formats ISO File transport protocols SFTP NFS 1.2. Configuring the backup storage location Before you create a backup of the control plane nodes, configure the backup storage location in the bar-vars.yaml environment file. This file stores the key-value parameters that you want to pass to the backup execution. Procedure In the bar-vars.yaml file, configure the backup storage location. Follow the appropriate steps for your NFS server or SFTP server. If you use an NFS server, add the following parameters to the bar-vars.yaml file: tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_server_dir_path> tripleo_backup_and_restore_output_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}" tripleo_backup_and_restore_backup_url: "nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}" Replace <ip_address> and <backup_server_dir_path> with the values that apply to your environment. The default value of the tripleo_backup_and_restore_server parameter is 192.168.24.1 . If you use an SFTP server, add the tripleo_backup_and_restore_output_url parameter and set the values of the URL and credentials of the SFTP server: tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/ Replace <user> , <password> , and <backup_node> with the backup node URL and credentials. 1.3. Installing and configuring an NFS server on the backup node You can install and configure a new NFS server to store the backup file. To install and configure an NFS server on the backup node, create an inventory file, create an SSH key, and run the openstack undercloud backup command with the NFS server options. Important If you previously installed and configured an NFS or SFTP server, you do not need to complete this procedure. You enter the server information when you set up ReaR on the node that you want to back up. By default, the Relax and Recover (ReaR) IP address parameter for the NFS server is 192.168.24.1 . You must add the parameter tripleo_backup_and_restore_server to set the IP address value that matches your environment. Procedure On the undercloud node, source the undercloud credentials: On the undercloud node, create an inventory file for the backup node: (undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF Replace <ip_address> and <user> with the values that apply to your environment.
Copy the public SSH key from the undercloud node to the backup node. (undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node> Replace <backup_node> with the path and name of the backup node. Configure the NFS server on the backup node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml 1.4. Installing ReaR on the undercloud node Before you create a backup of the undercloud node, install and configure Relax and Recover (ReaR) on the undercloud. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.3, "Installing and configuring an NFS server on the backup node" . Procedure On the undercloud node, source the undercloud credentials: [stack@undercloud ~]USD source stackrc If you use a custom stack name, add the --stack <stack_name> option to the tripleo-ansible-inventory command. If you have not done so before, create an inventory file and use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes: (undercloud) [stack@undercloud ~]USD tripleo-ansible-inventory \ --ansible_ssh_user heat-admin \ --static-yaml-inventory /home/stack/tripleo-inventory.yaml Install ReaR on the undercloud node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml If your system uses the UEFI boot loader, perform the following steps on the undercloud node: Install the following tools: USD sudo dnf install dosfstools efibootmgr Enable UEFI backup in the ReaR configuration file located in /etc/rear/local.conf by replacing the USING_UEFI_BOOTLOADER parameter value 0 with the value 1 . 1.5. Creating a standalone database backup of the undercloud nodes If you are upgrading your Red Hat OpenStack Platform environment from 13 to 16.2, you must create a standalone database backup after you perform the undercloud upgrade and before you perform the Leapp upgrade process on the undercloud nodes. You can optionally include standalone undercloud database backups in your routine backup schedule to provide additional data security. A full backup of an undercloud node includes a database backup of the undercloud node. But if a full undercloud restoration fails, you might lose access to the database portion of the full undercloud backup. In this case, you can recover the database from a standalone undercloud database backup. Procedure Create a database backup of the undercloud nodes: openstack undercloud backup --db-only The db backup file is stored in /home/stack with the name openstack-backup-mysql-<timestamp>.sql . Additional resources Framework for Upgrades (13 to 16.2) Section 1.7, "Creating a backup of the undercloud node" Section 3.5, "Restoring the undercloud node database manually" 1.6. Configuring Open vSwitch (OVS) interfaces for backup If you use an Open vSwitch (OVS) bridge in your environment, you must manually configure the OVS interfaces before you create a backup of the undercloud or control plane nodes. The restoration process uses this information to restore the network interfaces. Procedure In the /etc/rear/local.conf file, add the NETWORKING_PREPARATION_COMMANDS parameter in the following format: Replace <command_1> and <command_2> with commands that configure the network interfaces. 
For example, you can add the ip link add br-ctlplane type bridge command to create the control plane bridge or add the ip link set eth0 up command to change the state of eth0 to up. You can add more commands to the parameter based on your network configuration. For example, if your undercloud has the following configuration: The NETWORKING_PREPARATION_COMMANDS parameter is formatted as follows: 1.7. Creating a backup of the undercloud node To create a backup of the undercloud node, use the openstack undercloud backup command. You can then use the backup to restore the undercloud node to its state in case the node becomes corrupted or inaccessible. The backup of the undercloud node includes the backup of the database that runs on the undercloud node. If you are upgrading your Red Hat OpenStack Platform environment from 13 to 16.2, you must create a separate database backup after you perform the undercloud upgrade and before you perform the Leapp upgrade process on the overcloud nodes. For more information, see Section 1.5, "Creating a standalone database backup of the undercloud nodes" . Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.3, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud node. For more information, see Section 1.4, "Installing ReaR on the undercloud node" . If you use an OVS bridge for your network interfaces, you have configured the OVS interfaces. For more information, see Section 1.6, "Configuring Open vSwitch (OVS) interfaces for backup" . Procedure Log in to the undercloud as the stack user. Retrieve the MySQL root password: [stack@undercloud ~]USD PASSWORD=USD(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password) Create a database backup of the undercloud node: [stack@undercloud ~]USD sudo podman exec mysql bash -c "mysqldump -uroot -pUSDPASSWORD --opt --all-databases" | sudo tee /root/undercloud-all-databases.sql Source the undercloud credentials: [stack@undercloud ~]USD source stackrc If you have not done so before, create an inventory file and use the tripleo-ansible-inventory command to generate a static inventory file that contains hosts and variables for all the overcloud nodes: (undercloud) [stack@undercloud ~]USD tripleo-ansible-inventory \ --ansible_ssh_user heat-admin \ --static-yaml-inventory /home/stack/tripleo-inventory.yaml Create a backup of the undercloud node: (undercloud) [stack@undercloud ~]USD openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml 1.8. Scheduling undercloud node backups with cron You can schedule backups of the undercloud nodes with ReaR by using the Ansible backup-and-restore role. You can view the logs in the /var/log/rear-cron directory. Prerequisites You have an NFS or SFTP server installed and configured on the backup node. For more information about creating a new NFS server, see Section 1.3, "Installing and configuring an NFS server on the backup node" . You have installed ReaR on the undercloud and control plane nodes. For more information, see Section 2.3, "Installing ReaR on the control plane nodes" . You have sufficient available disk space at your backup location to store the backup. Procedure To schedule a backup of your control plane nodes, run the following command. 
The default schedule is Sundays at midnight: openstack undercloud backup --cron Optional: Customize the scheduled backup according to your deployment: To change the default backup schedule, pass a different cron schedule in the tripleo_backup_and_restore_cron parameter: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron": "0 0 * * 0"}' To define additional parameters that are added to the backup command when cron runs the scheduled backup, pass the tripleo_backup_and_restore_cron_extra parameter to the backup command, as shown in the following example: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_extra":"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml"}' To change the default user that executes the backup, pass the tripleo_backup_and_restore_cron_user parameter to the backup command, as shown in the following example: openstack undercloud backup --cron --extra-vars '{"tripleo_backup_and_restore_cron_user": "root"}'
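After the schedule is in place, you can optionally confirm that the job was registered and that scheduled runs complete. The following checks are a minimal sketch that assumes the cron entry belongs to the default stack user; adjust the user if you changed tripleo_backup_and_restore_cron_user. List the scheduled entry: sudo crontab -l -u stack Review the logs of previous scheduled runs in the directory mentioned above: sudo ls -l /var/log/rear-cron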
[ "tripleo_backup_and_restore_server: <ip_address> tripleo_backup_and_restore_shared_storage_folder: <backup_server_dir_path> tripleo_backup_and_restore_output_url: \"nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}\" tripleo_backup_and_restore_backup_url: \"nfs://{{ tripleo_backup_and_restore_server }}{{ tripleo_backup_and_restore_shared_storage_folder }}\"", "tripleo_backup_and_restore_output_url: sftp://<user>:<password>@<backup_node>/ tripleo_backup_and_restore_backup_url: iso:///backup/", "[stack@undercloud ~]USD source stackrc (undercloud) [stack@undercloud ~]USD", "(undercloud) [stack@undercloud ~]USD cat <<'EOF'> ~/nfs-inventory.yaml [BackupNode] <backup_node> ansible_host=<ip_address> ansible_user=<user> EOF", "(undercloud) [stack@undercloud ~]USD ssh-copy-id -i ~/.ssh/id_rsa.pub <backup_node>", "(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-nfs --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/nfs-inventory.yaml", "[stack@undercloud ~]USD source stackrc", "(undercloud) [stack@undercloud ~]USD tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory /home/stack/tripleo-inventory.yaml", "(undercloud) [stack@undercloud ~]USD openstack undercloud backup --setup-rear --extra-vars /home/stack/bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml", "sudo dnf install dosfstools efibootmgr", "openstack undercloud backup --db-only", "NETWORKING_PREPARATION_COMMANDS=('<command_1>' '<command_2>' ...')", "ip -4 addr ls br-ctlplane 8: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 inet 172.16.9.1/24 brd 172.16.9.255 scope global br-ctlplane valid_lft forever preferred_lft forever sudo ovs-vsctl show Bridge br-ctlplane Controller \"tcp:127.0.0.1:6633\" is_connected: true fail_mode: secure datapath_type: system Port eth0 Interface eth0 Port br-ctlplane Interface br-ctlplane type: internal Port phy-br-ctlplane Interface phy-br-ctlplane type: patch options: {peer=int-br-ctlplane}", "NETWORKING_PREPARATION_COMMANDS=('ip link add br-ctlplane type bridge' 'ip link set br-ctlplane up' 'ip link set eth0 up' 'ip link set eth0 master br-ctlplane' 'ip addr add 172.16.9.1/24 dev br-ctlplane')", "[stack@undercloud ~]USD PASSWORD=USD(sudo /bin/hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)", "[stack@undercloud ~]USD sudo podman exec mysql bash -c \"mysqldump -uroot -pUSDPASSWORD --opt --all-databases\" | sudo tee /root/undercloud-all-databases.sql", "[stack@undercloud ~]USD source stackrc", "(undercloud) [stack@undercloud ~]USD tripleo-ansible-inventory --ansible_ssh_user heat-admin --static-yaml-inventory /home/stack/tripleo-inventory.yaml", "(undercloud) [stack@undercloud ~]USD openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml", "openstack undercloud backup --cron", "openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron\": \"0 0 * * 0\"}'", "openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_extra\":\"--extra-vars bar-vars.yaml --inventory /home/stack/tripleo-inventory.yaml\"}'", "openstack undercloud backup --cron --extra-vars '{\"tripleo_backup_and_restore_cron_user\": \"root\"}" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/backing_up_and_restoring_the_undercloud_and_control_plane_nodes/assembly_backing-up-the-undercloud-node_br-undercloud-ctlplane
Chapter 3. Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure
Chapter 3. Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure In OpenShift Container Platform version 4.16, you can install a cluster on Microsoft Azure Stack Hub with an installer-provisioned infrastructure. However, you must manually configure the install-config.yaml file to specify values that are specific to Azure Stack Hub. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 3.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . 
Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.6. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure Stack Hub 3.6.1. 
Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{"auths": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 10 12 14 17 18 20 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 13 The name of the resource group that contains the DNS zone for your base domain. 15 The name of your Azure Stack Hub local region. 16 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 21 The pull secret required to authenticate your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 
23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 3.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 
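The values under the data field of the Secret object must be base64 encoded. As a minimal sketch, assuming a Linux shell with GNU coreutils and using a placeholder value, you can encode each credential before pasting it into the manifest: echo -n '<azure_subscription_id>' | base64 -w0 Repeat the command for the client ID, client secret, tenant ID, resource prefix, resource group, and region, and place each encoded string in the corresponding data key.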
Additional resources Updating cloud provider resources with manually maintained credentials 3.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 3.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.12. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console 3.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 3.14. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
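Before moving on to the next steps, you can run an informal spot check with the OpenShift CLI; this does not replace the linked validation procedure: oc get nodes oc get clusteroperators All nodes should report a Ready status, and each cluster Operator should report Available as True and Degraded as False.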
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')", "curl -O -L USD{COMPRESSED_VHD_URL}", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure_stack_hub/installing-azure-stack-hub-default
Chapter 15. Using Ansible to manage the replication topology in IdM
Chapter 15. Using Ansible to manage the replication topology in IdM You can maintain multiple Identity Management (IdM) servers and let them replicate each other for redundancy purposes to mitigate or prevent server loss. For example, if one server fails, the other servers keep providing services to the domain. You can also recover the lost server by creating a new replica based on one of the remaining servers. Data stored on an IdM server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. The data that is replicated is stored in the topology suffixes . When two replicas have a replication agreement between their suffixes, the suffixes form a topology segment . This chapter describes how to use Ansible to manage IdM replication agreements, topology segments, and topology suffixes. 15.1. Using Ansible to ensure a replication agreement exists in IdM Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to use an Ansible playbook to ensure that a replication agreement of the domain type exists between server.idm.example.com and replica.idm.example.com . Prerequisites Ensure that you understand the recommendations for designing your IdM topology listed in Guidelines for connecting IdM replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the add-topologysegment.yml Ansible playbook file provided by the ansible-freeipa package: Open the add-topologysegment-copy.yml file for editing. Adapt the file by setting the following variables in the ipatopologysegment task section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to either domain or ca , depending on what type of segment you want to add. Set the left variable to the name of the IdM server that you want to be the left node of the replication agreement. Set the right variable to the name of the IdM server that you want to be the right node of the replication agreement. Ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 15.2. 
Using Ansible to ensure replication agreements exist between multiple IdM replicas Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to ensure replication agreements exist between multiple pairs of replicas in IdM. Prerequisites Ensure that you understand the recommendations for designing your IdM topology listed in Connecting the replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the add-topologysegments.yml Ansible playbook file provided by the ansible-freeipa package: Open the add-topologysegments-copy.yml file for editing. Adapt the file by setting the following variables in the vars section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. For every topology segment, add a line in the ipatopology_segments section and set the following variables: Set the suffix variable to either domain or ca , depending on what type of segment you want to add. Set the left variable to the name of the IdM server that you want to be the left node of the replication agreement. Set the right variable to the name of the IdM server that you want to be the right node of the replication agreement. In the tasks section of the add-topologysegments-copy.yml file, ensure that the state variable is set to present . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 15.3. Using Ansible to check if a replication agreement exists between two replicas Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to verify that replication agreements exist between multiple pairs of replicas in IdM. In contrast to Using Ansible to ensure a replication agreement exists in IdM , this procedure does not modify the existing configuration. Prerequisites Ensure that you understand the recommendations for designing your Identity Management (IdM) topology listed in Connecting the replicas in a topology . 
You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the check-topologysegments.yml Ansible playbook file provided by the ansible-freeipa package: Open the check-topologysegments-copy.yml file for editing. Adapt the file by setting the following variables in the vars section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. For every topology segment, add a line in the ipatopology_segments section and set the following variables: Set the suffix variable to either domain or ca , depending on the type of segment you are checking. Set the left variable to the name of the IdM server that you want to be the left node of the replication agreement. Set the right variable to the name of the IdM server that you want to be the right node of the replication agreement. In the tasks section of the check-topologysegments-copy.yml file, ensure that the state variable is set to checked . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 15.4. Using Ansible to verify that a topology suffix exists in IdM In the context of replication agreements in Identity Management (IdM), topology suffixes store the data that is replicated. IdM supports two types of topology suffixes: domain and ca . Each suffix represents a separate back end, a separate replication topology. When a replication agreement is configured, it joins two topology suffixes of the same type on two different servers. The domain suffix contains all domain-related data, such as data about users, groups, and policies. The ca suffix contains data for the Certificate System component. It is only present on servers with a certificate authority (CA) installed. Follow this procedure to use an Ansible playbook to ensure that a topology suffix exists in IdM. The example describes how to ensure that the domain suffix exists in IdM. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. 
The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the verify-topologysuffix.yml Ansible playbook file provided by the ansible-freeipa package: Open the verify-topologysuffix-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipatopologysuffix section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to domain . If you are verifying the presence of the ca suffix, set the variable to ca . Ensure that the state variable is set to verified . No other option is possible. This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 15.5. Using Ansible to reinitialize an IdM replica If a replica has been offline for a long period of time or its database has been corrupted, you can reinitialize it. Reinitialization refreshes the replica with an updated set of data. Reinitialization can, for example, be used if an authoritative restore from backup is required. Note In contrast to replication updates, during which replicas only send changed entries to each other, reinitialization refreshes the whole database. The local host on which you run the command is the reinitialized replica. To specify the replica from which the data is obtained, use the direction option. Follow this procedure to use an Ansible playbook to reinitialize the domain data on replica.idm.example.com from server.idm.example.com . Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the reinitialize-topologysegment.yml Ansible playbook file provided by the ansible-freeipa package: Open the reinitialize-topologysegment-copy.yml file for editing. Adapt the file by setting the following variables in the ipatopologysegment section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to domain . If you are reinitializing the ca data, set the variable to ca . Set the left variable to the left node of the replication agreement. Set the right variable to the right node of the replication agreement. Set the direction variable to the direction of the reinitializing data. The left-to-right direction means that data flows from the left node to the right node. Ensure that the state variable is set to reinitialized . 
This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 15.6. Using Ansible to ensure a replication agreement is absent in IdM Data stored on an Identity Management (IdM) server is replicated based on replication agreements: when two servers have a replication agreement configured, they share their data. Replication agreements are always bilateral: the data is replicated from the first replica to the other one as well as from the other replica to the first one. Follow this procedure to ensure a replication agreement between two replicas does not exist in IdM. The example describes how to ensure a replication agreement of the domain type does not exist between the replica01.idm.example.com and replica02.idm.example.com IdM servers. Prerequisites You understand the recommendations for designing your IdM topology listed in Connecting the replicas in a topology . You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password and that you have access to a file that stores the password protecting the secret.yml file. The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Copy the delete-topologysegment.yml Ansible playbook file provided by the ansible-freeipa package: Open the delete-topologysegment-copy.yml file for editing. Adapt the file by setting the following variables in the ipatopologysegment task section: Indicate that the value of the ipaadmin_password variable is defined in the secret.yml Ansible vault file. Set the suffix variable to domain . Alternatively, if you are ensuring that the ca data are not replicated between the left and right nodes, set the variable to ca . Set the left variable to the name of the IdM server that is the left node of the replication agreement. Set the right variable to the name of the IdM server that is the right node of the replication agreement. Ensure that the state variable is set to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Explaining Replication Agreements, Topology Suffixes, and Topology Segments /usr/share/doc/ansible-freeipa/README-topology.md Sample playbooks in /usr/share/doc/ansible-freeipa/playbooks/topology 15.7. Additional resources Planning the replica topology . Installing an IdM replica .
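Outside of Ansible, you can also cross-check the resulting topology directly on an IdM server. The following commands are a minimal sketch and assume that you have a valid administrator Kerberos ticket: kinit admin ipa topologysegment-find domain Replace domain with ca to list the segments of the ca suffix.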
[ "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/topology/add-topologysegment.yml add-topologysegment-copy.yml", "--- - name: Playbook to handle topologysegment hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain left: server.idm.example.com right: replica.idm.example.com state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory add-topologysegment-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/topology/add-topologysegments.yml add-topologysegments-copy.yml", "--- - name: Add topology segments hosts: ipaserver gather_facts: false vars: ipaadmin_password: \"{{ ipaadmin_password }}\" ipatopology_segments: - {suffix: domain, left: replica1.idm.example.com , right: replica2.idm.example.com } - {suffix: domain, left: replica2.idm.example.com , right: replica3.idm.example.com } - {suffix: domain, left: replica3.idm.example.com , right: replica4.idm.example.com } - {suffix: domain+ca, left: replica4.idm.example.com , right: replica1.idm.example.com } vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: \"{{ item.suffix }}\" name: \"{{ item.name | default(omit) }}\" left: \"{{ item.left }}\" right: \"{{ item.right }}\" state: present loop: \"{{ ipatopology_segments | default([]) }}\"", "ansible-playbook --vault-password-file=password_file -v -i inventory add-topologysegments-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/topology/check-topologysegments.yml check-topologysegments-copy.yml", "--- - name: Add topology segments hosts: ipaserver gather_facts: false vars: ipaadmin_password: \"{{ ipaadmin_password }}\" ipatopology_segments: - {suffix: domain, left: replica1.idm.example.com, right: replica2.idm.example.com } - {suffix: domain, left: replica2.idm.example.com , right: replica3.idm.example.com } - {suffix: domain, left: replica3.idm.example.com , right: replica4.idm.example.com } - {suffix: domain+ca, left: replica4.idm.example.com , right: replica1.idm.example.com } vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Check topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: \"{{ item.suffix }}\" name: \"{{ item.name | default(omit) }}\" left: \"{{ item.left }}\" right: \"{{ item.right }}\" state: checked loop: \"{{ ipatopology_segments | default([]) }}\"", "ansible-playbook --vault-password-file=password_file -v -i inventory check-topologysegments-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/topology/ verify-topologysuffix.yml verify-topologysuffix-copy.yml", "--- - name: Playbook to handle topologysuffix hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Verify topology suffix ipatopologysuffix: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain state: verified", "ansible-playbook --vault-password-file=password_file -v -i inventory verify-topologysuffix-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/topology/reinitialize-topologysegment.yml reinitialize-topologysegment-copy.yml", "--- - name: Playbook to handle topologysegment hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Reinitialize topology segment ipatopologysegment: 
ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain left: server.idm.example.com right: replica.idm.example.com direction: left-to-right state: reinitialized", "ansible-playbook --vault-password-file=password_file -v -i inventory reinitialize-topologysegment-copy.yml", "cd ~/ MyPlaybooks /", "cp /usr/share/doc/ansible-freeipa/playbooks/topology/delete-topologysegment.yml delete-topologysegment-copy.yml", "--- - name: Playbook to handle topologysegment hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete topology segment ipatopologysegment: ipaadmin_password: \"{{ ipaadmin_password }}\" suffix: domain left: replica01.idm.example.com right: replica02.idm.example.com: state: absent", "ansible-playbook --vault-password-file=password_file -v -i inventory delete-topologysegment-copy.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/using-ansible-to-manage-the-replication-topology-in-idm_using-ansible-to-install-and-manage-idm
Chapter 10. Using the User Operator to manage Kafka users
Chapter 10. Using the User Operator to manage Kafka users When you create, modify or delete a user using the KafkaUser resource, the User Operator ensures that these changes are reflected in the Kafka cluster. For more information on the KafkaUser resource, see the KafkaUser schema reference . 10.1. Configuring Kafka users Use the properties of the KafkaUser resource to configure Kafka users. You can use oc apply to create or modify users, and oc delete to delete existing users. For example: oc apply -f <user_config_file> oc delete KafkaUser <user_name> Users represent Kafka clients. When you configure Kafka users, you enable the user authentication and authorization mechanisms required by clients to access Kafka. The mechanism used must match the equivalent Kafka configuration. For more information on using Kafka and KafkaUser resources to secure access to Kafka brokers, see Securing access to Kafka brokers . Prerequisites A running Kafka cluster configured with a Kafka broker listener using mTLS authentication and TLS encryption. A running User Operator (typically deployed with the Entity Operator). Procedure Configure the KafkaUser resource. This example specifies mTLS authentication and simple authorization using ACLs. Example Kafka user configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: "*" - resource: type: group name: my-group patternType: literal operations: - Read host: "*" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: "*" Create the KafkaUser resource in OpenShift. oc apply -f <user_config_file> Wait for the ready status of the user to change to True : oc get kafkausers -o wide -w -n <namespace> Kafka user status NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True User creation is successful when the READY output shows True . If the READY column stays blank, get more details on the status from the resource YAML or User Operator logs. Messages provide details on the reason for the current status. oc get kafkausers my-user-2 -o yaml Details on a user with a NotReady status # ... status: conditions: - lastTransitionTime: "2022-06-10T10:07:37.238065Z" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: "True" type: NotReady In this example, the reason the user is not ready is because simple authorization is not enabled in the Kafka configuration. Kafka configuration for simple authorization apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... authorization: type: simple After updating the Kafka configuration, the status shows the user is ready. oc get kafkausers my-user-2 -o wide -w -n <namespace> Status update of the user NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True Fetching the details shows no messages. oc get kafkausers my-user-2 -o yaml Details on a user with a READY status # ... status: conditions: - lastTransitionTime: "2022-06-10T10:33:40.166846Z" status: "True" type: Ready
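As an illustrative follow-up that is not part of the procedure above: once the user is Ready, the User Operator typically creates a Secret with the same name as the KafkaUser containing the generated client credentials. The following sketch assumes the default secret naming and key names; adjust the names for your cluster and namespace.
# Sketch only: pull the mTLS client credentials generated for my-user-1.
oc get secret my-user-1 -o jsonpath='{.data.user\.crt}' | base64 -d > user.crt
oc get secret my-user-1 -o jsonpath='{.data.user\.key}' | base64 -d > user.key
# The cluster CA certificate secret name is assumed to follow the <cluster>-cluster-ca-cert pattern.
oc get secret my-cluster-cluster-ca-cert -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt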
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user-1 labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls authorization: type: simple acls: # Example consumer Acls for topic my-topic using consumer group my-group - resource: type: topic name: my-topic patternType: literal operations: - Describe - Read host: \"*\" - resource: type: group name: my-group patternType: literal operations: - Read host: \"*\" # Example Producer Acls for topic my-topic - resource: type: topic name: my-topic patternType: literal operations: - Create - Describe - Write host: \"*\"", "apply -f <user_config_file>", "get kafkausers -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-1 my-cluster tls simple True my-user-2 my-cluster tls simple my-user-3 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:07:37.238065Z\" message: Simple authorization ACL rules are configured but not supported in the Kafka cluster configuration. reason: InvalidResourceException status: \"True\" type: NotReady", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: simple", "get kafkausers my-user-2 -o wide -w -n <namespace>", "NAME CLUSTER AUTHENTICATION AUTHORIZATION READY my-user-2 my-cluster tls simple True", "get kafkausers my-user-2 -o yaml", "status: conditions: - lastTransitionTime: \"2022-06-10T10:33:40.166846Z\" status: \"True\" type: Ready" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-using-the-user-operator-str
Jenkins
Jenkins OpenShift Container Platform 4.16 Jenkins Red Hat OpenShift Documentation Team
[ "podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8", "oc describe serviceaccount jenkins", "Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp", "oc describe secret <secret name from above>", "Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA", "pluginId:pluginVersion", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> 
<resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>", "oc new-app jenkins-persistent", "oc new-app jenkins-ephemeral", "oc describe jenkins-ephemeral", "kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange", "def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }", "docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>", "docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>", "podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }", "pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: 
workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source", "apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)", "steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh", "steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")", "#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }", "apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: 
shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json", "oc import-image jenkins-agent-nodejs -n openshift", "oc import-image jenkins-agent-maven -n openshift", "oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'" ]
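As a hedged illustration of how the maven-pipeline shown above might be started, the following PipelineRun sketch binds its required workspaces and parameters. The revision, context path, and workspace sizing are assumptions, not part of the original example; create it with oc create -f because it uses generateName.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: maven-pipeline-run-
spec:
  pipelineRef:
    name: maven-pipeline
  params:
    - name: repo-url
      value: https://github.com/openshift/openshift-jee-sample.git
    - name: revision
      value: main            # assumed branch
    - name: context-path
      value: helloworld      # assumed subdirectory, matching the original Jenkinsfile
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:   # a throwaway PVC for the cloned sources
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
    - name: maven-settings
      emptyDir: {}           # default Maven settings are sufficient for this sketch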
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/jenkins/index
function::sprint_ustack
function::sprint_ustack Name function::sprint_ustack - Return stack for the current task from string. Synopsis Arguments stk String containing a list of hexadecimal addresses for the current task. Description Performs a symbolic lookup of the addresses in the given string, which is assumed to be the result of a prior call to ubacktrace for the current task. Returns a simple backtrace from the given hex string, one line per address. Each line includes the symbol name (or the hex address if the symbol could not be resolved) and the module name (if found). The offset from the start of the function is included if found; otherwise the offset is added to the module (if found, between brackets). Returns the backtrace as a string, with each line terminated by a newline character. Note that the returned stack is truncated to MAXSTRINGLEN; to print fuller and richer stacks, use print_ustack. Note: it is recommended to use sprint_usyms instead of this function.
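For illustration only (not taken from the tapset reference), the following one-liner shows how sprint_ustack might be combined with ubacktrace. The target binary and probe point are arbitrary examples, debuginfo for the target is assumed to be available, and -d/--ldd are added so user-space symbols can be resolved.
# Sketch: print a symbolic user-space backtrace when main() in /bin/ls is hit.
stap --ldd -d /bin/ls -e 'probe process("/bin/ls").function("main") { print(sprint_ustack(ubacktrace())); exit() }' -c /bin/ls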
[ "sprint_ustack:string(stk:string)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sprint-ustack
Chapter 2. Upgrading Red Hat Satellite
Chapter 2. Upgrading Red Hat Satellite Use the following procedures to upgrade your existing Red Hat Satellite to Red Hat Satellite 6.15: Review Section 1.1, "Prerequisites" . Section 2.1, "Satellite Server upgrade considerations" Section 2.3, "Synchronizing the new repositories" Section 2.5, "Upgrading Capsule Servers" 2.1. Satellite Server upgrade considerations This section describes how to upgrade Satellite Server from 6.14 to 6.15. You can upgrade from any minor version of Satellite Server 6.14. Before you begin Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Review and update your firewall configuration prior to upgrading your Satellite Server. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . Ensure that you do not delete the manifest from the Customer Portal or in the Satellite web UI because this removes all the entitlements of your content hosts. If you have edited any of the default templates, back up the files either by cloning or exporting them. Cloning is the recommended method because that prevents them being overwritten in future updates or upgrades. To confirm if a template has been edited, you can view its History before you upgrade or view the changes in the audit log after an upgrade. In the Satellite web UI, navigate to Monitor > Audits and search for the template to see a record of changes made. If you use the export method, restore your changes by comparing the exported template and the default template, manually applying your changes. Capsule considerations If you use content views to control updates to a Capsule Server's base operating system, or for Capsule Server repository, you must publish updated versions of those content views. Note that Satellite Server upgraded from 6.14 to 6.15 can use Capsule Servers still at 6.14. Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrade scenarios You cannot upgrade a self-registered Satellite. You must migrate a self-registered Satellite to the Red Hat Content Delivery Network (CDN) and then perform the upgrade. FIPS mode You cannot upgrade Satellite Server from a RHEL base system that is not operating in FIPS mode to a RHEL base system that is operating in FIPS mode. To run Satellite Server on a Red Hat Enterprise Linux base system operating in FIPS mode, you must install Satellite on a freshly provisioned RHEL base system operating in FIPS mode. For more information, see Preparing your environment for installation in Installing Satellite Server in a connected network environment . 2.2. Upgrading a connected Satellite Server Use this procedure for a Satellite Server with access to the public internet Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the maintenance script runs during upgrading or updating. You can use the --noop option with the satellite-installer to test for changes. 
For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. Upgrade Satellite Server Stop all Satellite services: Take a snapshot or create a backup: On a virtual machine, take a snapshot. On a physical machine, create a backup. Start all Satellite services: Optional: If you made manual edits to DNS or DHCP configuration in the /etc/zones.conf or /etc/dhcp/dhcpd.conf files, back up the configuration files because the installer only supports one domain or subnet, and therefore restoring changes from these backups might be required. Optional: If you made manual edits to DNS or DHCP configuration files and do not want to overwrite the changes, enter the following command: In the Satellite web UI, navigate to Hosts > Discovered hosts . On the Discovered Hosts page, power off and then delete the discovered hosts. From the Select an Organization menu, select each organization in turn and repeat the process to power off and delete the discovered hosts. Make a note to reboot these hosts when the upgrade is complete. Ensure that the Satellite Maintenance repository is enabled: Enable the maintenance module: Check the available versions to confirm the version you want is listed: Use the health check option to determine if the system is ready for upgrade. When prompted, enter the hammer admin user credentials to configure satellite-maintain with hammer credentials. These changes are applied to the /etc/foreman-maintain/foreman-maintain-hammer.yml file. Review the results and address any highlighted error conditions before performing the upgrade. Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running you can see the logged messages in the /var/log/foreman-installer/satellite.log file to check if the process completed successfully. Perform the upgrade: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: 2.3. Synchronizing the new repositories You must enable and synchronize the new 6.15 repositories before you can upgrade Capsule Servers and Satellite clients. Procedure In the Satellite web UI, navigate to Content > Red Hat Repositories . Toggle the Recommended Repositories switch to the On position. From the list of results, expand the following repositories and click the Enable icon to enable the repositories: To upgrade Satellite clients, enable the Red Hat Satellite Client 6 repositories for all Red Hat Enterprise Linux versions that clients use. If you have Capsule Servers, to upgrade them, enable the following repositories too: Red Hat Satellite Capsule 6.15 (for RHEL 8 x86_64) (RPMs) Red Hat Satellite Maintenance 6.15 (for RHEL 8 x86_64) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - BaseOS) (RPMs) Red Hat Enterprise Linux 8 (for x86_64 - AppStream) (RPMs) Note If the 6.15 repositories are not available, refresh the Red Hat Subscription Manifest. In the Satellite web UI, navigate to Content > Subscriptions , click Manage Manifest , then click Refresh . In the Satellite web UI, navigate to Content > Sync Status . Click the arrow to the product to view the available repositories. Select the repositories for 6.15. Note that Red Hat Satellite Client 6 does not have a 6.15 version. Choose Red Hat Satellite Client 6 instead. 
Click Synchronize Now . Important If an error occurs when you try to synchronize a repository, refresh the manifest. If the problem persists, raise a support request. Do not delete the manifest from the Customer Portal or in the Satellite web UI; this removes all the entitlements of your content hosts. If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . 2.4. Performing post-upgrade tasks Optional: If the default provisioning templates have been changed during the upgrade, recreate any templates cloned from the default templates. If the custom code is executed before and/or after the provisioning process, use custom provisioning snippets to avoid recreating cloned templates. For more information about configuring custom provisioning snippets, see Creating Custom Provisioning Snippets in Provisioning hosts . 2.5. Upgrading Capsule Servers This section describes how to upgrade Capsule Servers from 6.14 to 6.15. Before you begin You must upgrade Satellite Server before you can upgrade any Capsule Servers. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . Ensure the Red Hat Satellite Capsule 6.15 repository is enabled in Satellite Server and synchronized. Ensure that you synchronize the required repositories on Satellite Server. For more information, see Section 2.3, "Synchronizing the new repositories" . If you use content views to control updates to the base operating system of Capsule Server, update those content views with new repositories, publish, and promote their updated versions. For more information, see Managing content views in Managing content . Ensure the Capsule's base system is registered to the newly upgraded Satellite Server. Ensure the Capsule has the correct organization and location settings in the newly upgraded Satellite Server. Review and update your firewall configuration prior to upgrading your Capsule Server. For more information, see Preparing Your Environment for Capsule Installation in Installing Capsule Server . Warning If you implemented custom certificates, you must retain the content of both the /root/ssl-build directory and the directory in which you created any source files associated with your custom certificates. Failure to retain these files during an upgrade causes the upgrade to fail. If these files have been deleted, they must be restored from a backup in order for the upgrade to proceed. Upgrading Capsule Servers Create a backup. On a virtual machine, take a snapshot. On a physical machine, create a backup. For information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Clean yum cache: Synchronize the satellite-capsule-6.15-for-rhel-8-x86_64-rpms repository in the Satellite Server. Publish and promote a new version of the content view with which the Capsule is registered. The rubygem-foreman_maintain is installed from the Satellite Maintenance repository or upgraded from the Satellite Maintenance repository if currently installed. 
Ensure Capsule has access to satellite-maintenance-6.15-for-rhel-8-x86_64-rpms and execute: On Capsule Server, verify that the foreman_url setting points to the Satellite FQDN: Check the available versions to confirm the version you want is listed: Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. If you lose connection to the command shell where the upgrade command is running you can see the logged messages in the /var/log/foreman-installer/capsule.log file to check if the process completed successfully. Use the health check option to determine if the system is ready for upgrade: Review the results and address any highlighted error conditions before performing the upgrade. Perform the upgrade: Determine if the system needs a reboot: If the command told you to reboot, then reboot the system: Optional: If you made manual edits to DNS or DHCP configuration files, check and restore any changes required to the DNS and DHCP configuration files using the backups made earlier. Optional: If you use custom repositories, ensure that you enable these custom repositories after the upgrade completes. Upgrading Capsule Servers using remote execution Create a backup or take a snapshot. For more information on backups, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . In the Satellite web UI, navigate to Monitor > Jobs . Click Run Job . From the Job category list, select Maintenance Operations . From the Job template list, select Capsule Upgrade Playbook . In the Search Query field, enter the host name of the Capsule. Ensure that Apply to 1 host is displayed in the Resolves to field. In the target_version field, enter the target version of the Capsule. In the whitelist_options field, enter the options. Select the schedule for the job execution in Schedule . In the Type of query section, click Static Query . 2.6. Upgrading the external database You can upgrade an external database from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 while upgrading Satellite from 6.14 to 6.15. Prerequisites Create a new Red Hat Enterprise Linux 8 based host for PostgreSQL server that follows the external database on Red Hat Enterprise Linux 8 documentation. For more information, see Using External Databases with Satellite . Procedure Create a backup. Restore the backup on the new server. If Satellite reaches the new database server via the old name, no further changes are required. Otherwise reconfigure Satellite to use the new name:
[ "satellite-maintain service stop", "satellite-maintain service start", "satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dhcp-managed=false", "subscription-manager repos --enable satellite-maintenance-6.15-for-rhel-8-x86_64-rpms", "dnf module enable satellite-maintenance:el8", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15", "satellite-maintain upgrade run --target-version 6.15", "dnf needs-restarting --reboothint", "reboot", "yum clean metadata", "satellite-maintain self-upgrade", "grep foreman_url /etc/foreman-proxy/settings.yml", "satellite-maintain upgrade list-versions", "satellite-maintain upgrade check --target-version 6.15", "satellite-maintain upgrade run --target-version 6.15", "dnf needs-restarting --reboothint", "reboot", "satellite-installer --foreman-db-host newpostgres.example.com --katello-candlepin-db-host newpostgres.example.com --foreman-proxy-content-pulpcore-postgresql-host newpostgres.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/upgrading_connected_red_hat_satellite_to_6.15/Upgrading_satellite_upgrading-connected
Chapter 8. Updating a cluster using the web console
Chapter 8. Updating a cluster using the web console You can perform minor version and patch updates on an OpenShift Container Platform cluster by using the web console. Note Use the web console or oc adm upgrade channel <channel> to change the update channel. You can follow the steps in Updating a cluster using the CLI to complete the update after you change to a 4.12 channel. 8.1. Prerequisites Have access to the cluster as a user with admin privileges. See Using RBAC to define and apply permissions . Have a recent etcd backup in case your update fails and you must restore your cluster to a state . Support for RHEL7 workers is removed in OpenShift Container Platform 4.12. You must replace RHEL7 workers with RHEL8 or RHCOS workers before upgrading to OpenShift Container Platform 4.12. Red Hat does not support in-place RHEL7 to RHEL8 updates for RHEL workers; those hosts must be replaced with a clean operating system install. Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in their latest channel. Updating the Operators ensures they have a valid update path when the default OperatorHub catalogs switch from the current minor version to the during a cluster update. See Updating installed Operators for more information. Ensure that all machine config pools (MCPs) are running and not paused. Nodes associated with a paused MCP are skipped during the update process. You can pause the MCPs if you are performing a canary rollout update strategy. To accommodate the time it takes to update, you are able to do a partial update by updating the worker or custom pool nodes. You can pause and resume within the progress bar of each pool. If your cluster uses manually maintained credentials, update the cloud provider resources for the new release. For more information, including how to determine if this is a requirement for your cluster, see Preparing to update a cluster with manually maintained credentials . Review the list of APIs that were removed in Kubernetes 1.25, migrate any affected components to use the new API version, and provide the administrator acknowledgment. For more information, see Preparing to update to OpenShift Container Platform 4.12 . If you run an Operator or you have configured any application with the pod disruption budget, you might experience an interruption during the upgrade process. If minAvailable is set to 1 in PodDisruptionBudget , the nodes are drained to apply pending machine configs which might block the eviction process. If several nodes are rebooted, all the pods might run on only one node, and the PodDisruptionBudget field can prevent the node drain. Important When an update is failing to complete, the Cluster Version Operator (CVO) reports the status of any blocking components while attempting to reconcile the update. Rolling your cluster back to a version is not supported. If your update is failing to complete, contact Red Hat support. Using the unsupportedConfigOverrides section to modify the configuration of an Operator is unsupported and might block cluster updates. You must remove this setting before you can update your cluster. Additional resources Support policy for unmanaged Operators 8.2. Performing a canary rollout update In some specific use cases, you might want a more controlled update process where you do not want specific nodes updated concurrently with the rest of the cluster. 
These use cases include, but are not limited to: You have mission-critical applications that you do not want unavailable during the update. You can slowly test the applications on your nodes in small batches after the update. You have a small maintenance window that does not allow the time for all nodes to be updated, or you have multiple maintenance windows. The rolling update process is not a typical update workflow. With larger clusters, it can be a time-consuming process that requires you execute multiple commands. This complexity can result in errors that can affect the entire cluster. It is recommended that you carefully consider whether your organization wants to use a rolling update and carefully plan the implementation of the process before you start. The rolling update process described in this topic involves: Creating one or more custom machine config pools (MCPs). Labeling each node that you do not want to update immediately to move those nodes to the custom MCPs. Pausing those custom MCPs, which prevents updates to those nodes. Performing the cluster update. Unpausing one custom MCP, which triggers the update on those nodes. Testing the applications on those nodes to make sure the applications work as expected on those newly-updated nodes. Optionally removing the custom labels from the remaining nodes in small batches and testing the applications on those nodes. Note Pausing an MCP prevents the Machine Config Operator from applying any configuration changes on the associated nodes. Pausing an MCP also prevents any automatically rotated certificates from being pushed to the associated nodes, including the automatic CA rotation of the kube-apiserver-to-kubelet-signer CA certificate. If the MCP is paused when the kube-apiserver-to-kubelet-signer CA certificate expires and the MCO attempts to automatically renew the certificate, the new certificate is created but not applied across the nodes in the respective machine config pool. This causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. Pausing an MCP should be done with careful consideration about the kube-apiserver-to-kubelet-signer CA certificate expiration and for short periods of time only. If you want to use the canary rollout update process, see Performing a canary rollout update . 8.3. Updating cloud provider resources with manually maintained credentials Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. 
If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on AWS 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6 1 The Machine API Operator CR is required. 2 The Cloud Credential Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Network Operator CR is required. 6 The Storage Operator CR is an optional component and might be disabled in your cluster. Example credrequests directory contents for OpenShift Container Platform 4.12 on GCP 0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for GCP 8.4. Pausing a MachineHealthCheck resource by using the web console During the upgrade process, nodes in the cluster might become temporarily unavailable. In the case of worker nodes, the machine health check might identify such nodes as unhealthy and reboot them. To avoid rebooting such nodes, pause all the MachineHealthCheck resources before updating the cluster. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Navigate to Compute MachineHealthChecks . To pause the machine health checks, add the cluster.x-k8s.io/paused="" annotation to each MachineHealthCheck resource. For example, to add the annotation to the machine-api-termination-handler resource, complete the following steps: Click the Options menu to the machine-api-termination-handler and click Edit annotations . In the Edit annotations dialog, click Add more . In the Key and Value fields, add cluster.x-k8s.io/paused and "" values, respectively, and click Save . 8.5. About updating single node OpenShift Container Platform You can update, or upgrade, a single-node OpenShift Container Platform cluster by using either the console or CLI. However, note the following limitations: The prerequisite to pause the MachineHealthCheck resources is not required because there is no other node to perform the health check. 
Restoring a single-node OpenShift Container Platform cluster using an etcd backup is not officially supported. However, it is good practice to perform the etcd backup in case your upgrade fails. If your control plane is healthy, you might be able to restore your cluster to a state by using the backup. Updating a single-node OpenShift Container Platform cluster requires downtime and can include an automatic reboot. The amount of downtime depends on the update payload, as described in the following scenarios: If the update payload contains an operating system update, which requires a reboot, the downtime is significant and impacts cluster management and user workloads. If the update contains machine configuration changes that do not require a reboot, the downtime is less, and the impact on the cluster management and user workloads is lessened. In this case, the node draining step is skipped with single-node OpenShift Container Platform because there is no other node in the cluster to reschedule the workloads to. If the update payload does not contain an operating system update or machine configuration changes, a short API outage occurs and resolves quickly. Important There are conditions, such as bugs in an updated package, that can cause the single node to not restart after a reboot. In this case, the update does not rollback automatically. Additional resources For information on which machine configuration changes require a reboot, see the note in Understanding the Machine Config Operator . 8.6. Updating a cluster by using the web console If updates are available, you can update your cluster from the web console. You can find information about available OpenShift Container Platform advisories and updates in the errata section of the Customer Portal. Prerequisites Have access to the web console as a user with admin privileges. Pause all MachineHealthCheck resources. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. For production clusters, ensure that the Channel is set to the correct channel for the version that you want to update to, such as stable-4.12 . Important For production clusters, you must subscribe to a stable-* , eus-* or fast-* channel. Note When you are ready to move to the minor version, choose the channel that corresponds to that minor version. The sooner the update channel is declared, the more effectively the cluster can recommend update paths to your target version. The cluster might take some time to evaluate all the possible updates that are available and offer the best update recommendations to choose from. Update recommendations can change over time, as they are based on what update options are available at the time. If you cannot see an update path to your target minor version, keep updating your cluster to the latest patch release for your current version until the minor version is available in the path. If the Update status is not Updates available , you cannot update your cluster. Select channel indicates the cluster version that your cluster is running or is updating to. Select a version to update to, and click Save . The Input channel Update status changes to Update to <product-version> in progress , and you can review the progress of the cluster update by watching the progress bars for the Operators and nodes. 
Note If you are upgrading your cluster to the next minor version, such as from version 4.y to 4.(y+1), it is recommended to confirm your nodes are updated before deploying workloads that rely on a new feature. Any pools with worker nodes that are not yet updated are displayed on the Cluster Settings page. After the update completes and the Cluster Version Operator refreshes the available updates, check if more updates are available in your current channel. If updates are available, continue to perform updates in the current channel until you can no longer update. If no updates are available, change the Channel to the stable-* , eus-* or fast-* channel for the next minor version, and update to the version that you want in that channel. You might need to perform several intermediate updates until you reach the version that you want. 8.7. Changing the update server by using the web console Changing the update server is optional. If you have an OpenShift Update Service (OSUS) installed and configured locally, you must set the URL for the server as the upstream to use the local server during updates. Procedure Navigate to Administration Cluster Settings , click version . Click the YAML tab and then edit the upstream parameter value: Example output ... spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1 ... 1 The <update-server-url> variable specifies the URL for the update server. The default upstream is https://api.openshift.com/api/upgrades_info/v1/graph . Click Save . Additional resources Understanding update channels and releases
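For reference, the following is a hedged sketch of CLI equivalents for some of the console steps above. These commands are not part of the web-console procedure, and the update-server URL is a placeholder for whatever local OpenShift Update Service endpoint you have configured.
# Pause a machine health check before updating (CLI alternative to the console annotation step).
oc -n openshift-machine-api annotate machinehealthcheck machine-api-termination-handler cluster.x-k8s.io/paused=""
# Switch the update channel from the CLI instead of the Details tab.
oc adm upgrade channel stable-4.12
# Point the cluster at a local OpenShift Update Service (placeholder URL).
oc patch clusterversion version --type merge -p '{"spec":{"upstream":"https://updateservice.example.com/api/upgrades_info/v1/graph"}}'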
[ "0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6", "0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7", "spec: clusterID: db93436d-7b05-42cc-b856-43e11ad2d31a upstream: '<update-server-url>' 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/updating-cluster-within-minor
Chapter 1. Managing applications
Chapter 1. Managing applications Review the following topics to learn more about creating, deploying, and managing your applications. This guide assumes familiarity with Kubernetes concepts and terminology. Key Kubernetes terms and components are not defined. For more information about Kubernetes concepts, see Kubernetes Documentation . The application management functions provide you with unified and simplified options for constructing and deploying applications and application updates. With these functions, your developers and DevOps personnel can create and manage applications across environments through channel and subscription-based automation. Important: An application name cannot exceed 37 characters. See the following topics: Application model and definitions Application console Subscription reports Managing application resources Managing apps with Git repositories Managing apps with Helm repositories Managing apps with Object storage repositories Application advanced configuration Subscribing Git resources Granting subscription admin privilege Creating an allow and deny list as subscription administrator Adding reconcile options Configuring application channel and subscription for a secure Git connection Setting up Ansible Automation Platform tasks Scheduling a deployment Configuring package overrides Channel samples Subscription samples Application samples 1.1. Application model and definitions The application model is based on subscribing to one or more Kubernetes resource repositories ( channel resources) that contains resources that are deployed on managed clusters. Both single and multicluster applications use the same Kubernetes specifications, but multicluster applications involve more automation of the deployment and application management lifecycle. See the following image to understand more about the application model: View the following application resource sections: Applications Subscriptions ApplicationSet Application documentation Best practice: Use the GitOps Operator or Argo CD integration instead of the Channel and Subscription model. Learn more from the GitOps overview . 1.1.1. Applications Applications ( application.app.k8s.io ) in Red Hat Advanced Cluster Management for Kubernetes are used for grouping Kubernetes resources that make up an application. All of the application component resources for Red Hat Advanced Cluster Management for Kubernetes applications are defined in YAML file specification sections. When you need to create or update an application component resource, you need to create or edit the appropriate section to include the labels for defining your resource. You can also work with Discovered applications, which are applications that are discovered by the OpenShift Container Platform GitOps or an Argo CD operator that is installed in your clusters. Applications that share the same repository are grouped together in this view. 1.1.2. Subscriptions Subscriptions ( subscription.apps.open-cluster-management.io ) allow clusters to subscribe to a source repository (channel) that can be the following types: Git repository, Helm release registry, or Object storage repository. Subscriptions can deploy application resources locally to the hub cluster if the hub cluster is self-managed. You can then view the local-cluster (the self-managed hub cluster) subscription in the topology. Resource requirements might adversely impact hub cluster performance. Subscriptions can point to a channel or storage location for identifying new or updated resource templates. 
The subscription operator can then download directly from the storage location and deploy to targeted managed clusters without checking the hub cluster first. With a subscription, the subscription operator can monitor the channel for new or updated resources instead of the hub cluster. See the following subscription architecture image: 1.1.2.1. Channels Channels ( channel.apps.open-cluster-management.io ) define the source repositories that a cluster can subscribe to with a subscription, and can be the following types: Git, Helm release, and Object storage repositories, and resource templates on the hub cluster. If you have applications that require Kubernetes resources or Helm charts from channels that require authorization, such as entitled Git repositories, you can use secrets to provide access to these channels. Your subscriptions can access Kubernetes resources and Helm charts for deployment from these channels, while maintaining data security. Channels use a namespace within the hub cluster and point to a physical place where resources are stored for deployment. Clusters can subscribe to channels for identifying the resources to deploy to each cluster. Notes: It is best practice to create each channel in a unique namespace. However, a Git channel can share a namespace with another type of channel, including Git, Helm, and Object storage. Resources within a channel can be accessed by only the clusters that subscribe to that channel. 1.1.2.1.1. Supported Git repository servers GitHub GitLab Bitbucket Gogs 1.1.3. ApplicationSet ApplicationSet is a sub-project of Argo CD that is supported by the GitOps Operator. ApplicationSet adds multicluster support for Argo CD applications. You can create an application set from the Red Hat Advanced Cluster Management console. Note: For more details on the prerequisites for deploying ApplicationSet , see Registering managed clusters to GitOps . OpenShift Container Platform GitOps uses Argo CD to maintain cluster resources. Argo CD is an open-source declarative tool for the continuous integration and continuous deployment (CI/CD) of applications. OpenShift Container Platform GitOps implements Argo CD as a controller (OpenShift Container Platform GitOps Operator) so that it continuously monitors application definitions and configurations defined in a Git repository. Then, Argo CD compares the specified state of these configurations with their live state on the cluster. The ApplicationSet controller is installed on the cluster through a GitOps operator instance and supplements it by adding additional features in support of cluster-administrator-focused scenarios. The ApplicationSet controller provides the following function: The ability to use a single Kubernetes manifest to target multiple Kubernetes clusters with the GitOps operator. The ability to use a single Kubernetes manifest to deploy multiple applications from one or multiple Git repositories with the GitOps operator. Improved support for monorepo, which is in the context of Argo CD, multiple Argo CD Application resources that are defined within a single Git repository. Within multitenant clusters, improved ability of individual cluster tenants to deploy applications using Argo CD without needing to involve privileged cluster administrators in enabling the destination clusters/namespaces. The ApplicationSet operator leverages the cluster decision generator to interface Kubernetes custom resources that use custom resource-specific logic to decide which managed clusters to deploy to. 
A cluster decision resource generates a list of managed clusters, which are then rendered into the template fields of the ApplicationSet resource. This is done using duck-typing, which does not require knowledge of the full shape of the referenced Kubernetes resource. See the following example of a generators.clusterDecisionResource value within an ApplicationSet : apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: sample-application-set namespace: sample-gitops-namespace spec: generators: - clusterDecisionResource: configMapRef: acm-placement labelSelector: matchLabels: cluster.open-cluster-management.io/placement: sample-application-placement requeueAfterSeconds: 180 template: metadata: name: sample-application-{{name}} spec: project: default sources: [ { repoURL: https://github.com/sampleapp/apprepo.git targetRevision: main path: sample-application } ] destination: namespace: sample-application server: "{{server}}" syncPolicy: syncOptions: - CreateNamespace=true - PruneLast=true - Replace=true - ApplyOutOfSyncOnly=true - Validate=false automated: prune: true allowEmpty: true selfHeal: true See the following Placement : apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: sample-application-placement namespace: sample-gitops-namespace spec: clusterSets: - sampleclusterset If you would like to learn more about ApplicationSets , see Cluster Decision Resource Generator . 1.1.4. Application documentation Learn more from the following documentation: Application console Managing application resources Managing apps with Git repositories Managing apps with Helm repositories Managing apps with Object storage repositories Application advanced configuration Subscribing Git resources Setting up Ansible Automation Platform tasks Channel samples Subscription samples Application samples 1.2. Application console The console includes a dashboard for managing the application lifecycle. You can use the console dashboard to create and manage applications and view the status of applications. Enhanced capabilities help your developers and operations personnel create, deploy, update, manage, and visualize applications across your clusters. See some of the console capability in the following list and see the console for guided information about terms, actions, and how to read the Topology: Important: Available actions are based on your assigned role. Learn about access requirements from the Role-based access control documentation. Visualize deployed applications across your clusters, including any associated resource repositories, subscriptions, and placement configurations. Create and edit applications, and subscribe resources. From the Actions menu, you can search, edit, or delete. Ensure you select YAML:On to view and edit the YAML as you update the fields. From the main Overview tab, you can click an application name to view details and application resources, including resource repositories, subscriptions, placements, and deployed resources such as any optional predeployment and postdeployment hooks that are using Ansible Automation Platform tasks (for Git repositories). You can also create an application from the overview. Create and view applications, such as ApplicationSet , Subscription , OpenShift , Flux , and Argo CD types. An ApplicationSet represents Argo applications that are generated from the controller. For an Argo CD ApplicationSet to be created, you need to enable Automatically sync when cluster state changes from the Sync policy . 
For Flux with the kustomization controller, find Kubernetes resources with the label kustomize.toolkit.fluxcd.io/name=<app_name> . For Flux with the helm controller, find Kubernetes resources with the label helm.toolkit.fluxcd.io/name=<app_name> . From the main Overview , when you click on an application name in the table to view a single application overview, you can see the following information: Cluster details, such as resource status Resource topology Subscription details Access to the Editor tab to edit Click the Topology tab for a visual representation of all the applications and resources in your project. For Helm subscriptions, see Configuring package overrides to define the appropriate packageName and the packageAlias to get an accurate topology display. Click the Advanced configuration tab to view terminology and tables of resources for all applications. You can find resources and you can filter subscriptions, placement, and channels. If you have access, you can also click multiple Actions , such as Edit, Search, and Delete. View a successful Ansible Automation Platform deployment if you are using Ansible tasks as a prehook or posthook for the deployed application. Click Launch resource in Search to search for related resources. Use Search to find application resources by the component kind for each resource. To search for resources, use the resource kind as the search parameter: Subscription , Channel , Secret , Placement , or Application . You can also search by other fields, including name, namespace, cluster, label, and more. For more information about using search, see Searching in the console . 1.3. Subscription reports Subscription reports are collections of application statuses from all the managed clusters in your fleet. Specifically, the parent application resource can hold reports from a scalable number of managed clusters. Detailed application status is available on the managed clusters, while the subscriptionReports on the hub cluster are lightweight and more scalable. See the following three types of subscription status reports: Package-level SubscriptionStatus : This is the application package status on the managed cluster with detailed status for all the resources that are deployed by the application in the appsub namespace. Cluster-level SubscriptionReport : This is the overall status report on all the applications that are deployed to a particular cluster. Application-level SubscriptionReport : This is the overall status report on all the managed clusters to which a particular application is deployed. SubscriptionStatus package-level SubscriptionReport cluster-level SubscriptionReport application-level managedClusterView CLI application-level status CLI Last Update Time 1.3.1. SubscriptionStatus package-level The package-level managed cluster status is located in the <your-appsub-namespace> namespace on the managed cluster and contains detailed status for all the resources that are deployed by the application. For every appsub that is deployed to a managed cluster, there is a SubscriptionStatus CR created in the appsub namespace on the managed cluster. Every resource is reported with detailed errors if errors exist. The package status only indicates the status of an individual package. You can view the overall subscription status by referencing the field, .status.subscription .
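To inspect this package-level status directly, a standard query against the managed cluster can be used. The following commands are a minimal sketch only; the plural resource name subscriptionstatuses is an assumption derived from the SubscriptionStatus kind and is not confirmed by this documentation:

# List the SubscriptionStatus resources in the appsub namespace on the managed cluster
oc get subscriptionstatuses.apps.open-cluster-management.io -n <your-appsub-namespace>

# Review the detailed package status for one appsub
oc get subscriptionstatuses.apps.open-cluster-management.io <your-appsub-name> -n <your-appsub-namespace> -o yaml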
See the following SubscriptionStatus sample YAML file: apiVersion: apps.open-cluster-management.io/v1alpha1 kind: SubscriptionStatus metadata: labels: apps.open-cluster-management.io/cluster: <your-managed-cluster> apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> statuses: packages: - apiVersion: v1 kind: Service lastUpdateTime: "2021-09-13T20:12:34Z" Message: <detailed error. visible only if the package fails> name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: "2021-09-13T20:12:34Z" name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: "2021-09-13T20:12:34Z" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: "2021-09-13T20:12:34Z" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: "2021-09-13T20:12:34Z" name: redis-slave namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: "2021-09-13T20:12:34Z" name: redis-slave namespace: test-ns-2 phase: Deployed subscription: lastUpdateTime: "2021-09-13T20:12:34Z" phase: Deployed 1.3.2. SubscriptionReport cluster-level The cluster-level status is located in the <your-managed-cluster-1> namespace on the hub cluster and only contains the overall status of each application on that managed cluster. The subscriptionReport in each cluster namespace on the hub cluster reports one of the following statuses: Deployed Failed propagationFailed See the following SubscriptionReport sample YAML file: apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/cluster: "true" name: <your-managed-cluster-1> namespace: <your-managed-cluster-1> reportType: Cluster results: - result: deployed source: appsub-1-ns/appsub-1 // appsub 1 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: failed source: appsub-2-ns/appsub-2 // appsub 2 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: propagationFailed source: appsub-3-ns/appsub-3 // appsub 3 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 1.3.3. SubscriptionReport application-level One application-level subscriptionReport for each application is located in the <your-appsub-namespace> appsub namespace on the hub cluster and contains the following information: The overall status of the application for each managed cluster A list of all resources for the application A report summary with the total number of clusters A report summary with the total number of clusters where the application is in the status: deployed , failed , propagationFailed , and inProgress . Note: The inProgress count is the total number of clusters minus deployed , minus failed , and minus propagationFailed .
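To list these reports from the hub cluster, queries such as the following can be used. This is a sketch only; the plural resource name subscriptionreports is an assumption derived from the SubscriptionReport kind and is not confirmed by this documentation:

# Cluster-level reports are in the managed cluster namespace on the hub cluster
oc get subscriptionreports.apps.open-cluster-management.io -n <your-managed-cluster-1>

# Application-level reports are in the appsub namespace on the hub cluster
oc get subscriptionreports.apps.open-cluster-management.io -n <your-appsub-namespace>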
See the following SubscriptionReport sample YAML file: apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> reportType: Application resources: - apiVersion: v1 kind: Service name: redis-master2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-master2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: redis-slave2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-slave2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: frontend2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: frontend2 namespace: playback-ns-2 results: - result: deployed source: cluster-1 //cluster 1 status timestamp: nanos: 0 seconds: 0 - result: failed source: cluster-3 //cluster 2 status timestamp: nanos: 0 seconds: 0 - result: propagationFailed source: cluster-4 //cluster 3 status timestamp: nanos: 0 seconds: 0 summary: deployed: 8 failed: 1 inProgress: 0 propagationFailed: 1 clusters: 10 1.3.4. ManagedClusterView A ManagedClusterView CR is reported on the first failed cluster. If an application is deployed on multiple clusters with resource deployment failures, only one managedClusterView CR is created for the first failed cluster namespace on the hub cluster. The managedClusterView CR retrieves the detailed subscription status from the failed cluster so that the application owner does not need to access the failed remote cluster. See the following command that you can run to get the status: 1.3.5. CLI application-level status If you cannot access the managed clusters to get a subscription status, you can use the CLI. The cluster-level or the application-level subscription report provides the overall status, but not the detailed error messages for an application. Download the CLI from multicloud-operators-subscription . Run the following command to create a managedClusterView resource to see the managed cluster application SubscriptionStatus so that you can identify the error: 1.3.6. CLI Last Update Time You can also get the Last Update Time of an AppSub on a given managed cluster when it is not practical to log in to each managed cluster to retrieve this information. A utility script simplifies the retrieval of the Last Update Time of an AppSub on a managed cluster. The script is designed to run on the hub cluster. It creates a managedClusterView resource to get the AppSub from the managed cluster, and parses the data to get the Last Update Time. Download the CLI from multicloud-operators-subscription . Run the following command to retrieve the Last Update Time of an AppSub on a managed cluster: 1.4. Managing application resources From the console, you can create applications by using Git repositories, Helm repositories, and Object storage repositories. Important: Git Channels can share a namespace with all other channel types: Helm, Object storage, and other Git namespaces. See the following topics to start managing apps: Managing apps with Git repositories Managing apps with Helm repositories Managing apps with Object storage repositories 1.4.1.
Managing apps with Git repositories When you deploy Kubernetes resources using an application, the resources are located in specific repositories. Learn how to deploy resources from Git repositories in the following procedure. Learn more about the application model at Application model and definitions . User required access: A user role that can create applications. You can only perform actions that your role is assigned. Learn about access requirements from the Role-based access control documentation. From the console navigation menu, click Applications to see listed applications and to create new applications. Optional: After you choose the kind of application you want to create, you can select YAML: On to view the YAML in the console as you create and edit your application. See the YAML samples later in the topic. Choose Git from the list of repositories that you can use and enter the values in the correct fields. Follow the guidance in the console and see the YAML editor change values based on your input. Notes: If you select an existing Git repository path, you do not need to specify connection information if this is a private repository. The connection information is pre-set and you do not need to view these values. If you enter a new Git repository path, you can optionally enter Git connection information if this is a private Git repository. Notice the reconcile option. The merge option is the default selection, which means that new fields are added and existing fields are updated in the resource. You can choose to replace . With the replace option, the existing resource is replaced with the Git source. When the subscription reconcile rate is set to low , it can take up to one hour for the subscribed application resources to reconcile. On the card on the single application view, click Sync to reconcile manually. If set to off , it never reconciles. Set any optional pre-deployment and post-deployment tasks. Set the Ansible Automation Platform secret if you have Ansible Automation Platform jobs that you want to run before or after the subscription deploys the application resources. The Ansible Automation Platform tasks that define jobs must be placed within prehook and posthook folders in this repository. You can click Add credential if you need to add a credential using the console. Follow the directions in the console. See more information at Managing credentials overview . Click Create . You are redirected to the Overview page where you can view the details and topology. 1.4.1.1. More examples For an example of root-subscription/ , see application-subscribe-all . For examples of subscriptions that point to other folders in the same repository, see subscribe-all . See an example of the common-managed folder with application artifacts in the nginx-apps repository. See policy examples in Policy collection . 1.4.1.2. Keeping deployed resources after deleting subscription with Git When creating subscriptions using a Git repository, you can add a do-not-delete annotation to keep specific deployed resources after you delete the subscription. The do-not-delete annotation only works with top-level deployment resources. To add the do-not-delete annotation, complete the following steps: Create a subscription that deploys at least one resource. 
Add the following annotation to the resource or resources that you want to keep, even after you delete the subscription: apps.open-cluster-management.io/do-not-delete: 'true' See the following example: apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: "yes" After deleting the subscription, the resources with the do-not-delete annotation still exist, while other resources are deleted. Note: The resources that remain deployed by using the do-not-delete annotation bind to the namespace. As a result, you cannot delete the namespace until you remove the remaining resources. 1.4.2. Managing apps with Helm repositories When you deploy Kubernetes resources using an application, the resources are located in specific repositories. Learn how to deploy resources from Helm repositories in the following procedure. Learn more about the application model at Application model and definitions . User required access: A user role that can create applications. You can only perform actions that your role is assigned. Learn about access requirements from the Role-based access control documentation. From the console navigation menu, click Applications to see listed applications and to create new applications. Optional: After you choose the kind of application you want to create, you can select YAML: On to view the YAML in the console as you create and edit your application. See the YAML samples later in the topic. Choose Helm from the list of repositories that you can use and enter the values in the correct fields. Follow the guidance in the console and see the YAML editor change values based on your input. Click Create . You are redirected to the Overview page where you can view the details and topology. 1.4.2.1. Sample YAML The following example channel definition abstracts a Helm repository as a channel: Note: For Helm, all Kubernetes resources contained within the Helm chart must have the label release: {{ .Release.Name }} for the application topology to be displayed properly. apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm namespace: hub-repo spec: pathname: [https://kubernetes-charts.storage.googleapis.com/] # URL references a valid chart URL. type: HelmRepo The following channel definition shows another example of a Helm repository channel: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/ Note: To see REST APIs, use the APIs . 1.4.2.2. Keeping deployed resources after deleting subscription with Helm Helm provides an annotation to keep specific deployed resources after you delete a subscription. See Tell Helm Not To Uninstall a Resource for more information. Note: The annotation must be in the Helm chart. 1.4.3. Managing apps with Object storage repositories When you deploy Kubernetes resources using an application, the resources are located in specific repositories. Learn more about the application model at Application model and definitions : User required access: A user role that can create applications. You can only perform actions that your role is assigned.
Learn about access requirements from the Role-based access control documentation. From the console navigation menu, click Applications to see listed applications and to create new applications. Optional: After you choose the kind of application you want to create, you can select YAML: On to view the YAML in the console as you create and edit your application. See the YAML samples later in the topic. Choose Object store from the list of repositories that you can use and enter the values in the correct fields. Follow the guidance in the console and see the YAML editor change values based on your input. Click Create . You are redirected to the Overview page where you can view the details and topology. 1.4.3.1. Sample YAML The following example channel definition abstracts an object storage as a channel: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: Object storage pathname: [http://sample-ip:#####/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true Note: To see REST API, use the APIs . 1.4.3.2. Creating your Amazon Web Services (AWS) S3 object storage bucket You can set up subscriptions to subscribe resources that are defined in the Amazon Simple Storage Service (Amazon S3) object storage service. See the following procedure: Log in to the AWS console with your AWS account, user name, and password. Navigate to Amazon S3 > Buckets to the bucket home page. Click Create Bucket to create your bucket. Select the AWS region , which is essential for connecting your AWS S3 object bucket. Create the bucket access token. Navigate to your user name in the navigation bar, then from the drop-down menu, select My Security Credentials . Navigate to Access keys for CLI, SDK, & API access in the AWS IAM credentials tab and click on Create access key . Save your Access key ID , Secret access key . Upload your object YAML files to the bucket. 1.4.3.3. Subscribing to the object in the AWS bucket Create an object bucket type channel with a secret to specify the AccessKeyID , SecretAccessKey , and Region for connecting the AWS bucket. The three fields are created when the AWS bucket is created. Add the URL. The URL identifies the channel in a AWS S3 bucket if the URL contains s3:// or s3 and aws keywords. For example, see all of the following bucket URLs have AWS s3 bucket identifiers: Note: The AWS S3 object bucket URL is not necessary to connect the bucket with the AWS S3 API. 1.4.3.4. 
Sample AWS subscription See the following complete AWS S3 object bucket channel sample YAML file: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: object-dev namespace: ch-object-dev spec: type: ObjectBucket pathname: https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 secretRef: name: secret-dev --- apiVersion: v1 kind: Secret metadata: name: secret-dev namespace: ch-object-dev stringData: AccessKeyID: <your AWS bucket access key id> SecretAccessKey: <your AWS bucket secret access key> Region: <your AWS bucket region> type: Opaque Deprecated: You can continue to create other AWS subscription and placement rule objects, as you see in the following sample YAML with kind: PlacementRule and kind: Subscription added: apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: obj-sub-ns spec: clusterSelector: {} --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: obj-sub namespace: obj-sub-ns spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster You can also subscribe to objects within a specific subfolder in the object bucket. Add the subfolder annotation to the subscription, which forces the object bucket subscription to only apply all the resources in the subfolder path. See the annotation with subfolder-1 as the bucket-path : annotations: apps.open-cluster-management.io/bucket-path: <subfolder-1> See the following complete sample for a subfolder: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/bucket-path: subfolder1 name: obj-sub namespace: obj-sub-ns labels: name: obj-sub spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster 1.4.3.5. Keeping deployed resources after deleting subscription with Object storage When creating subscriptions using an Object storage repository, you can add a do-not-delete annotation to keep specific deployed resources after you delete the subscription. The do-not-delete annotation only works with top-level deployment resources. To add the do-not-delete annotation, complete the following steps: Create a subscription that deploys at least one resource. Add the following annotation to the resource or resources that you want to keep, even after you delete the subscription: apps.open-cluster-management.io/do-not-delete: 'true' See the following example: apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: "yes" After deleting the subscription, the resources with the do-not-delete annotation still exist, while other resources are deleted. Note: The resources that remain deployed by using the do-not-delete annotation bind to the namespace. As a result, you cannot delete the namespace until you remove the remaining resources. 1.5. Ansible Automation Platform integration and introduction Red Hat Advanced Cluster Management is integrated with Red Hat Ansible Automation Platform so that you can create prehook and posthook AnsibleJob instances for Git subscription application management. Learn about the components and how to configure Ansible Automation Platform. Required access: Cluster administrator 1.5.1. 
Integration and components You can integrate Ansible Automation Platform jobs into Git subscriptions. For instance, for a database front end and back end application, the database is required to be instantiated by using Ansible Automation Platform with an Ansible Automation Platform Job. The application is installed by a Git subscription. The database is instantiated before you deploy the front end and back end application with the subscription. The application subscription operator is enhanced to define two subfolders named prehook and posthook . Both folders are in the Git repository resource root path and contain all prehook and posthook Ansible Automation Platform jobs, respectively. When the Git subscription is created, all of the prehook and posthook AnsibleJob resources are parsed and stored in memory as an object. The application subscription controller decides when to create the prehook and posthook AnsibleJob instances. When you create a subscription custom resource, the Git branch and Git path references a Git repository root location. In the Git root location, the two subfolders prehook and posthook should contain at least one Kind:AnsibleJob resource. 1.5.1.1. Prehook The application subscription controller searches all the kind:AnsibleJob CRs in the prehook folder as the prehook AnsibleJob objects, then generates a new prehook AnsibleJob instance. The new instance name is the prehook AnsibleJob object name and a random suffix string. See the following example instance name: database-sync-1-2913063 . The application subscription controller queues the reconcile request again in a one minute loop, where it checks the prehook AnsibleJob status.AnsibleJobResult . When the prehook status is successful , the application subscription continues to deploy the main subscription. 1.5.1.2. Posthook When the application subscription status is updated, if the subscription status is subscribed or propagated to all target clusters in subscribed status, the application subscription controller searches all of the AnsibleJob kind custom resources in the posthook folder as the posthook AnsibleJob objects. Then, it generates new posthook AnsibleJob instances. The new instance name is the posthook AnsibleJob object name and a random suffix string. See the following example instance name: service-ticket-1-2913849 . See the following topics to enable Ansible Automation Platform: Setting up Ansible Automation Platform Configuring Ansible Automation Platform 1.5.2. Setting up Ansible Automation Platform With Ansible Automation Platform jobs, you can automate tasks and integrate with external services, such as Slack and PagerDuty services. Your Git repository resource root path will contain prehook and posthook directories for Ansible Automation Platform jobs that run as part of deploying the application, updating the application, or removing the application from a cluster. Required access: Cluster administrator Prerequisites Installing Ansible Automation Platform Resource Operator 1.5.2.1. Prerequisites Install a supported OpenShift Container Platform version. Install Ansible Automation Platform. See Red Hat Ansible Automation Platform documentation to install the latest supported version. Install the Ansible Automation Platform Resource Operator to connect Ansible Automation Platform jobs to the lifecycle of Git subscriptions. Best practice: The Ansible Automation Platform job template should be idempotent. Check PROMPT ON LAUNCH on the template for both INVENTORY and EXTRA VARIABLES . 
See Job templates for more information. 1.5.2.2. Installing Ansible Automation Platform Resource Operator Log in to your OpenShift Container Platform cluster console. Click OperatorHub in the console navigation. Search for and install the Ansible Automation Platform Resource Operator . Note: To submit prehook and posthook AnsibleJobs , install Red Hat Ansible Automation Platform Resource Operator with corresponding version available on the following OpenShift Container Platform versions: OpenShift Container Platform 4.8 needs (AAP) Resource Operator early-access, stable-2.1, stable-2.2 OpenShift Container Platform 4.9 needs (AAP) Resource Operator early-access, stable-2.1, stable-2.2 OpenShift Container Platform 4.10 and later needs (AAP) Resource Operator stable-2.1, stable-2.2 You can then create the credential from the Credentials page in the console. Click Add credential , or access the page from the navigation. See Creating a credential for Ansible Automation Platform for credential information. 1.5.3. Configuring Ansible Automation Platform With Ansible Automation Platform jobs, you can automate tasks and integrate with external services, such as Slack and PagerDuty services. Your Git repository resource root path will contain prehook and posthook directories for Ansible Automation Platform jobs that run as part of deploying the application, updating the application, or removing the application from a cluster. Required access: Cluster administrator Setting up Ansible Automation Platform secrets Setting secret reconciliation Using Ansible Automation Platform sample YAML files Launching Workflow You can configure Ansible Automation Platform configurations with the following tasks: 1.5.3.1. Setting up Ansible Automation Platform secrets You must create an Ansible Automation Platform secret custom resources in the same subscription namespace. The Ansible Automation Platform secret is limited to the same subscription namespace. Create the secret from the console by filling in the Ansible Automation Platform secret name section. To create the secret using terminal, edit and apply the sample yaml file: Note: The namespace is the same namespace as the subscription namespace. The stringData:token and host are from the Ansible Automation Platform. apiVersion: v1 kind: Secret metadata: name: toweraccess namespace: same-as-subscription type: Opaque stringData: token: ansible-tower-api-token host: https://ansible-tower-host-url Run the following command to add your YAML file: When the app subscription controller creates prehook and posthook Ansible jobs, if the secret from subscription spec.hooksecretref is available, then it is sent to the AnsibleJob custom resources spec.tower_auth_secret and the AnsibleJob can access the Ansible Automation Platform. 1.5.3.2. Setting secret reconciliation For a main-sub subscription with prehook and posthook AnsibleJob , the main-sub subscription should be reconciled after all prehook and posthook AnsibleJob or main subscription are updated in the Git repository. Prehook AnsibleJob and the main subscription continuously reconcile and relaunch a new pre AnsibleJob instance. After the pre AnsibleJob is complete, re-run the main subscription. If there is any specification change in the main subscription, redeploy the subscription. The main subscription status should be updated to align with the redeployment procedure. Reset the hub cluster subscription status to nil . The subscription is refreshed along with the subscription deployment on target clusters. 
When the deployment is finished on the target cluster, the subscription status on the target cluster is updated to "subscribed" or "failed" , and is synced to the hub cluster subscription status. After the main subscription is complete, relaunch a new posthook AnsibleJob instance. Verify that the subscription is updated. See the following output: subscription.status == "subscribed" subscription.status == "propagated" with all of the target clusters "subscribed" When an AnsibleJob custom resource is created, a Kubernetes job custom resource is created to launch an Ansible Automation Platform job by communicating with the target Ansible Automation Platform. When the job is complete, the final status for the job is returned to the AnsibleJob status.AnsibleJobResult field. Notes: The AnsibleJob status.conditions is reserved by the Ansible Automation Platform Job operator for storing the result of the Kubernetes job creation. The status.conditions does not reflect the actual Ansible Automation Platform job status. The subscription controller checks the Ansible Automation Platform job status by the AnsibleJob.status.AnsibleJobResult instead of AnsibleJob.status.conditions . As previously mentioned in the prehook and posthook AnsibleJob workflow, when the main subscription is updated in the Git repository, a new prehook and posthook AnsibleJob instance is created. As a result, one main subscription can link to multiple AnsibleJob instances. Four fields are defined in subscription.status.ansiblejobs : lastPrehookJobs : The most recent prehook Ansible jobs prehookJobsHistory : All the prehook Ansible jobs history lastPosthookJobs : The most recent posthook Ansible jobs posthookJobsHistory : All the posthook Ansible jobs history 1.5.3.3. Using Ansible Automation Platform sample YAML files See the following sample of an AnsibleJob YAML file in a Git prehook and posthook folder: apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess job_template_name: Demo Job Template extra_vars: cost: 6.88 ghosts: ["inky","pinky","clyde","sue"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45 job_tags: "provision,install,configuration" skip_tags: "configuration,restart" 1.5.3.4. Launching Workflow To launch an Ansible Automation Platform Workflow by using the AnsibleJob custom resource, replace the job_template_name field with the workflow_template_name , which is displayed in the following example. 1.5.3.5. Using Ansible Automation Platform sample YAML Workflow See the following sample of a Workflow AnsibleJob YAML file in a Git prehook and Git posthook folder: apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess workflow_template_name: Demo Workflow Template extra_vars: cost: 6.88 ghosts: ["inky","pinky","clyde","sue"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45 See Workflows to learn more about Ansible Workflow. 1.6. Application advanced configuration Within Red Hat Advanced Cluster Management for Kubernetes, applications are composed of multiple application resources. You can use channels, subscriptions, and placements to help you deploy, update, and manage your overall applications.
Both single and multicluster applications use the same Kubernetes specifications, but multicluster applications involve more automation of the deployment and application management lifecycle. All of the application component resources for Red Hat Advanced Cluster Management for Kubernetes applications are defined in YAML file specification sections. When you need to create or update an application component resource, you need to create or edit the appropriate section to include the labels for defining your resource. View the following application advanced configuration topics: Subscribing Git resources Granting subscription admin privilege Creating an allow and deny list as subscription administrator Adding reconcile options Configuring leader election Configuring application channel and subscription for a secure Git connection Setting up Ansible Automation Platform tasks Configuring Helm to watch namespace resources Configuring package overrides Channel samples overview Subscription samples overview Application samples overview 1.6.1. Subscribing Git resources By default, when a subscription deploys subscribed applications to target clusters, the applications are deployed to that subscription namespace, even if the application resources are associated with other namespaces. A subscription administrator can change default behavior, as described in Granting subscription admin privilege . Additionally, if an application resource exists in the cluster and was not created by the subscription, the subscription cannot apply a new resource on that existing resource. See the following processes to change default settings as the subscription administrator: Required access: Cluster administrator Creating application resources in Git Subscribing specific Git elements Application namespace example Resource overwrite example 1.6.1.1. Creating application resources in Git You need to specify the full group and version for apiVersion in resource YAML when you subscribe. For example, if you subscribe to apiVersion: v1 , the subscription controller fails to validate the subscription and you receive an error: Resource /v1, Kind=ImageStream is not supported . If the apiVersion is changed to image.openshift.io/v1 , as in the following sample, it passes the validation in the subscription controller and the resource is applied successfully. apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: default namespace: default spec: lookupPolicy: local: true tags: - name: 'latest' from: kind: DockerImage name: 'quay.io/repository/open-cluster-management/multicluster-operators-subscription:community-latest' In the following sections, see more useful examples of how a subscription administrator can change default behavior. 1.6.1.2. Application namespace example In the following examples, you are logged in as a subscription administrator. 1.6.1.2.1. Application to different namespaces Create a subscription to subscribe the sample resource YAML file from a Git repository. The example file contains resources that are located within the following different namespaces: Applicable channel types: Git ConfigMap test-configmap-1 gets created in the multins namespace. ConfigMap test-configmap-2 gets created in the default namespace. ConfigMap test-configmap-3 gets created in the subscription namespace.
--- apiVersion: v1 kind: Namespace metadata: name: multins --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: multins data: path: resource1 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-2 namespace: default data: path: resource2 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-3 data: path: resource3 If the subscription was created by other users, all the ConfigMaps get created in the same namespace as the subscription. 1.6.1.2.2. Application to same namespace As a subscription administrator, you might want to deploy all application resources into the same namespace. You can deploy all application resources into the subscription namespace by Creating an allow and deny list as subscription administrator . Add apps.open-cluster-management.io/current-namespace-scoped: true annotation to the subscription YAML. For example, when a subscription administrator creates the following subscription, all three ConfigMaps in the example are created in subscription-ns namespace. apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: subscription-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge apps.open-cluster-management.io/current-namespace-scoped: "true" spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters 1.6.1.3. Resource overwrite example Applicable channel types: Git, ObjectBucket (Object storage in the console) Note: The resource overwrite option is not applicable to helm charts from the Git repository because the helm chart resources are managed by Helm. In this example, the following ConfigMap already exists in the target cluster. apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 19 Subscribe the following sample resource YAML file from a Git repository and replace the existing ConfigMap. See the change in the data specification: apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20 1.6.1.3.1. Default merge option See the following sample resource YAML file from a Git repository with the default apps.open-cluster-management.io/reconcile-option: merge annotation. See the following example: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters When this subscription is created by a subscription administrator and subscribes the ConfigMap resource, the existing ConfigMap is merged, as you can see in the following example: apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 20 When the merge option is used, entries from subscribed resource are either created or updated in the existing resource. No entry is removed from the existing resource. Important: If the existing resource you want to overwrite with a subscription is automatically reconciled by another operator or controller, the resource configuration is updated by both subscription and the controller or operator. Do not use this method in this case. 1.6.1.3.2. mergeAndOwn option With mergeAndOwn , entries from subscribed resource are either created or updated in the existing resource. 
Log in as a subscription administrator and create a subscription with the apps.open-cluster-management.io/reconcile-option: mergeAndOwn annotation. See the following example: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: mergeAndOwn spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters When this subscription is created by a subscription administrator and subscribes to the ConfigMap resource, the existing ConfigMap is merged, as you can see in the following example: apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns annotations: apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example data: name: user1 age: 20 As previously mentioned, when the mergeAndOwn option is used, entries from the subscribed resource are either created or updated in the existing resource. No entry is removed from the existing resource. It also adds the apps.open-cluster-management.io/hosting-subscription annotation to indicate that the resource is now owned by the subscription. Deleting the subscription deletes the ConfigMap. 1.6.1.3.3. Replace option Log in as a subscription administrator and create a subscription with the apps.open-cluster-management.io/reconcile-option: replace annotation. See the following example: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: replace spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters When this subscription is created by a subscription administrator and subscribes to the ConfigMap resource, the existing ConfigMap is replaced by the following: apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20 1.6.1.4. Subscribing specific Git elements You can subscribe to a specific Git branch, commit, or tag. 1.6.1.4.1. Subscribing to a specific branch The subscription operator that is included in the multicloud-operators-subscription repository subscribes to the default branch of a Git repository. If you want to subscribe to a different branch, you need to specify the branch name annotation in the subscription. In the following example, the YAML file displays how to specify a different branch with apps.open-cluster-management.io/git-branch: <branch1> : apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-branch: <branch1> 1.6.1.4.2. Subscribing to a specific commit The subscription operator that is included in the multicloud-operators-subscription repository subscribes to the latest commit of the specified branch of a Git repository by default. If you want to subscribe to a specific commit, you need to specify the desired commit annotation with the commit hash in the subscription.
In the following example, the YAML file displays how to specify a different commit with apps.open-cluster-management.io/git-desired-commit: <full commit number> : apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-desired-commit: <full commit number> apps.open-cluster-management.io/git-clone-depth: 100 The git-clone-depth annotation is optional and set to 20 by default, which means the subscription controller retrieves the history of the last 20 commits from the Git repository. If you specify a much older git-desired-commit , you need to specify git-clone-depth accordingly for the desired commit. 1.6.1.4.3. Subscribing to a specific tag The subscription operator that is included in the multicloud-operators-subscription repository subscribes to the latest commit of the specified branch of a Git repository by default. If you want to subscribe to a specific tag, you need to specify the tag annotation in the subscription. In the following example, the YAML file displays how to specify a different tag with apps.open-cluster-management.io/git-tag: <v1.0> : apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-tag: <v1.0> apps.open-cluster-management.io/git-clone-depth: 100 Note: If both Git desired commit and tag annotations are specified, the tag is ignored. The git-clone-depth annotation is optional and set to 20 by default, which means the subscription controller retrieves the history of the last 20 commits from the Git repository. If you specify a much older git-tag , you need to specify git-clone-depth accordingly for the desired commit of the tag. 1.6.2. Granting subscription administrator privilege Learn how to grant subscription administrator access. A subscription administrator can change default behavior. Learn more in the following process: From the console, log in to your Red Hat OpenShift Container Platform cluster. Create one or more users. See Preparing for users for information about creating users. You can also prepare groups or service accounts. Users that you create are administrators for the app.open-cluster-management.io/subscription application. With OpenShift Container Platform, a subscription administrator can change default behavior. You can group these users to represent a subscription administrative group, which is demonstrated in later examples. From the terminal, log in to your Red Hat Advanced Cluster Management cluster. If the open-cluster-management:subscription-admin ClusterRoleBinding does not exist, you need to create it. See the following example: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: open-cluster-management:subscription-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: open-cluster-management:subscription-admin Add the following subjects into the open-cluster-management:subscription-admin ClusterRoleBinding with the following command: Note: Initially, the open-cluster-management:subscription-admin ClusterRoleBinding has no subject.
Your subjects might display as the following example: subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: example-name - apiGroup: rbac.authorization.k8s.io kind: Group name: example-group-name - kind: ServiceAccount name: my-service-account namespace: my-service-account-namespace - apiGroup: rbac.authorization.k8s.io kind: User name: 'system:serviceaccount:my-service-account-namespace:my-service-account' A service account can be used as a user subject. 1.6.3. Creating an allow and deny list as subscription administrator As a subscription administrator, you can create an application from a Git repository application subscription that contains an allow list to allow deployment of only specified Kubernetes kind resources. You can also create a deny list in the application subscription to deny deployment of specific Kubernetes kind resources. By default, policy.open-cluster-management.io/v1 resources are not deployed by an application subscription. To avoid this default behavior, the application subscription needs to be deployed by a subscription administrator. See the following example of allow and deny specifications: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: sub2 name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel allow: - apiVersion: policy.open-cluster-management.io/v1 kinds: - Policy - apiVersion: v1 kinds: - Deployment deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: local: true With an allow list, you can, for example, deploy only v1/Deployment resources from the myapplication directory of the source repository, even if there are other resources in the source repository. The following example application subscription YAML specifies deployments of all valid resources except v1/Service and v1/ConfigMap resources: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: myapplication name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: placementRef: name: demo-placement kind: Placement Instead of listing individual resource kinds within an API group, you can add "*" to allow or deny all resource kinds in the API group. 1.6.4. Adding reconcile options You can use the apps.open-cluster-management.io/reconcile-option annotation in individual resources to override the subscription-level reconcile option. For example, if you add the apps.open-cluster-management.io/reconcile-option: replace annotation in the subscription and add the apps.open-cluster-management.io/reconcile-option: merge annotation in a resource YAML in the subscribed Git repository, the resource is merged on the target cluster while other resources are replaced. 1.6.4.1. Reconcile frequency Git channel You can select reconcile frequency options: high , medium , low , and off in the channel configuration to avoid unnecessary resource reconciliations and therefore prevent overload on the subscription operator. Required access: Administrator and cluster administrator See the following definitions of the settings: Off : The deployed resources are not automatically reconciled. A change in the Subscription custom resource initiates a reconciliation. You can add or update a label or annotation.
Low : The deployed resources are automatically reconciled every hour, even if there is no change in the source Git repository. Medium : This is the default setting. The subscription operator compares the currently deployed commit ID to the latest commit ID of the source repository every 3 minutes, and applies changes to target clusters. Every 15 minutes, all resources are reapplied from the source Git repository to the target clusters, even if there is no change in the repository. High : The deployed resources are automatically reconciled every two minutes, even if there is no change in the source Git repository. You can set this by using the apps.open-cluster-management.io/reconcile-rate annotation in the channel custom resource that is referenced by subscription. See the following name: git-channel example: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: <value from the list> spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: <application1> apps.open-cluster-management.io/git-branch: <branch1> spec: channel: sample/git-channel placement: local: true In the example, all subscriptions that use sample/git-channel are assigned low reconciliation frequency. When the subscription reconcile rate is set to low , it can take up to one hour for the subscribed application resources to reconcile. On the card on the single application view, click Sync to reconcile manually. If set to off , it never reconciles. Regardless of the reconcile-rate setting in the channel, a subscription can turn the auto-reconciliation off by specifying apps.open-cluster-management.io/reconcile-rate: off annotation in the Subscription custom resource. See the following git-channel example: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: application1 apps.open-cluster-management.io/git-branch: branch1 apps.open-cluster-management.io/reconcile-rate: "off" spec: channel: sample/git-channel placement: local: true See that the resources deployed by git-subscription are never automatically reconciled even if the reconcile-rate is set to high in the channel. 1.6.4.2. Reconcile frequency Helm channel Every 15 minutes, the subscription operator compares currently deployed hash of your Helm chart to the hash from the source repository. Changes are applied to target clusters. The frequency of resource reconciliation impacts the performance of other application deployments and updates. For example, if there are hundreds of application subscriptions and you want to reconcile all subscriptions more frequently, the response time of reconciliation is slower. Depending on the Kubernetes resources of the application, appropriate reconciliation frequency can improve performance. Off : The deployed resources are not automatically reconciled. A change in the Subscription custom resource initiates a reconciliation. You can add or update a label or annotation. 
Low : The subscription operator compares the currently deployed hash to the hash of the source repository every hour and applies changes to the target clusters when there is a change. Medium : This is the default setting. The subscription operator compares the currently deployed hash to the hash of the source repository every 15 minutes and applies changes to the target clusters when there is a change. High : The subscription operator compares the currently deployed hash to the hash of the source repository every 2 minutes and applies changes to the target clusters when there is a change. You can set this by using the apps.open-cluster-management.io/reconcile-rate annotation in the Channel custom resource that is referenced by the subscription. See the following helm-channel example: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: low spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true In this example, all subscriptions that use sample/helm-channel are assigned a low reconciliation frequency. Regardless of the reconcile-rate setting in the channel, a subscription can turn the auto-reconciliation off by specifying the apps.open-cluster-management.io/reconcile-rate: off annotation in the Subscription custom resource, as displayed in the following example: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription annotations: apps.open-cluster-management.io/reconcile-rate: "off" spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true In this example, the resources deployed by helm-subscription are never automatically reconciled, even if the reconcile-rate is set to high in the channel. 1.6.5. Configuring leader election With LeaderElection , you can change how the controllers make requests to choose a new leader in case of a failure, which ensures only one leader instance handles the reconciliation at a time. You can increase or decrease the amount of time a controller takes to acquire LeaderElection . With decreased time, a new leader is chosen more quickly during a failure. Note: Changes to the default values for the controllers might impact system performance during that task. You can reduce your etcd load by changing the default values for leaseDuration , renewDeadline , or retryPeriod of controllers. Required access: Cluster administrator 1.6.5.1.
Editing the controller flag To configure LeaderElection , you change the following default values: leader-election-lease-duration: 137 seconds renew-deadline: 107 seconds retry-period: 26 seconds See the following steps to change the multicluster-operators-application , multicluster-operators-channel , multicluster-operators-standalone-subscription , or multicluster-operators-hub-subscription controllers: Run the following command to pause your multiclusterhub : Edit the deployment file by adding the controller name to the oc edit command. See the following example command: Locate the controller command flags by searching for - command . From the containers section in the controller, insert a - command flag. For instance, insert RetryPeriod . Save the file. The controller automatically restarts to apply the flag. Repeat this procedure for each controller that you want to change. Run the following command to resume your multiclusterhub : See the following example output of a successful edit to the -command , where the retryPeriod flag doubles the previously mentioned default time to 52 , which is allotted to retry acquiring leaderElection : 1.6.6. Configuring application channel and subscription for a secure Git connection Git channels and subscriptions connect to the specified Git repository through HTTPS or SSH. The following application channel configurations can be used for secure Git connections: Connecting to a private repo with user and access token Making an insecure HTTPS connection to a Git server Using custom CA certificates for a secure HTTPS connection Making an SSH connection to a Git server Updating certificates and SSH keys 1.6.6.1. Connecting to a private repo with user and access token You can connect to a Git server using channel and subscription. See the following procedures for connecting to a private repository with a user and access token: Create a secret in the same namespace as the channel. Set the user field to a Git user ID and the accessToken field to a Git personal access token. The values should be base64 encoded. See the following sample with user and accessToken populated: apiVersion: v1 kind: Secret metadata: name: my-git-secret namespace: channel-ns data: user: dXNlcgo= accessToken: cGFzc3dvcmQK Configure the channel with a secret. See the following sample with the secretRef populated: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: sample-channel namespace: channel-ns spec: type: Git pathname: <Git HTTPS URL> secretRef: name: my-git-secret 1.6.6.2. Making an insecure HTTPS connection to a Git server You can use the following connection method in a development environment to connect to a privately-hosted Git server with SSL certificates that are signed by custom or self-signed certificate authority. However, this procedure is not recommended for production: Specify insecureSkipVerify: true in the channel specification. Otherwise, the connection to the Git server fails with an error similar to the following: See the following sample with the channel specification addition for this method: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: labels: name: sample-channel namespace: sample spec: type: GitHub pathname: <Git HTTPS URL> insecureSkipVerify: true 1.6.6.3. Using custom CA certificates for a secure HTTPS connection You can use this connection method to securely connect to a privately-hosted Git server with SSL certificates that are signed by custom or self-signed certificate authority. 
Create a ConfigMap to contain the Git server root and intermediate CA certificates in PEM format. The ConfigMap must be in the same namespace as the channel CR. The field name must be caCerts and use | . From the following sample, notice that caCerts can contain multiple certificates, such as root and intermediate CAs: Configure the channel with this ConfigMap. See the following sample with the git-ca name from the step: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: configMapRef: name: git-ca pathname: <Git HTTPS URL> type: Git 1.6.6.4. Making an SSH connection to a Git server Create a secret to contain your private SSH key in sshKey field in data . If the key is passphrase-protected, specify the password in passphrase field. This secret must be in the same namespace as the channel CR. Create this secret using a oc command to create a secret generic git-ssh-key --from-file=sshKey=./.ssh/id_rsa , then add base64 encoded passphrase . See the following sample: apiVersion: v1 kind: Secret metadata: name: git-ssh-key namespace: channel-ns data: sshKey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQ21GbGN6STFOaTFqZEhJQUFBQUdZbU55ZVhCMEFBQUFHQUFBQUJDK3YySHhWSIwCm8zejh1endzV3NWODMvSFVkOEtGeVBmWk5OeE5TQUgcFA3Yk1yR2tlRFFPd3J6MGIKOUlRM0tKVXQzWEE0Zmd6NVlrVFVhcTJsZWxxVk1HcXI2WHF2UVJ5Mkc0NkRlRVlYUGpabVZMcGVuaGtRYU5HYmpaMmZOdQpWUGpiOVhZRmd4bTNnYUpJU3BNeTFLWjQ5MzJvOFByaDZEdzRYVUF1a28wZGdBaDdndVpPaE53b0pVYnNmYlZRc0xMS1RrCnQwblZ1anRvd2NEVGx4TlpIUjcwbGVUSHdGQTYwekM0elpMNkRPc3RMYjV2LzZhMjFHRlMwVmVXQ3YvMlpMOE1sbjVUZWwKSytoUWtxRnJBL3BUc1ozVXNjSG1GUi9PV25FPQotLS0tLUVORCBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0K passphrase: cGFzc3cwcmQK type: Opaque Configure the channel with the secret. See the following sample: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git The subscription controller does an ssh-keyscan with the provided Git hostname to build the known_hosts list to prevent an Man-in-the-middle (MITM) attack in the SSH connection. If you want to skip this and make insecure connection, use insecureSkipVerify: true in the channel configuration. This is not best practice, especially in production environments. apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git insecureSkipVerify: true 1.6.6.5. Updating certificates and SSH keys If a Git channel connection configuration requires an update, such as CA certificates, credentials, or SSH key, you need to create a new secret and ConfigMap in the same namespace and update the channel to reference that new secret and ConfigMap. For more information, see Using custom CA certificates for a secure HTTPS connection . 1.6.7. Configuring Helm to watch namespace resources By default, when a subscription deploys subscribed Helm resources to target clusters, the application resources are watched. You can configure the Helm channel type to watch namespace-scoped resources. When enabled, manual changes to those watched namespace-scoped resources are reverted. 1.6.7.1. Configuring Required access: Cluster administrator To configure the Helm application to watch namespace scoped resources, set the value for the watchHelmNamespaceScopedResources field in your subscription definition to true . See the following sample. 
apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 spec: watchHelmNamespaceScopedResources: true channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: "1.36.x" 1.6.8. Scheduling a deployment If you need to deploy new or changed Helm charts or other resources during only specific times, you can define subscriptions for those resources to begin deployments during only those specific times. Alternatively, you can restrict deployments. For instance, you can define time windows between 10:00 PM and 11:00 PM each Friday to serve as scheduled maintenance windows for applying patches or other application updates to your clusters. You can restrict or block deployments from beginning during specific time windows, such as to avoid unexpected deployments during peak business hours. For instance, to avoid peak hours, you can define a time window for a subscription to avoid beginning deployments between 8:00 AM and 8:00 PM. By defining time windows for your subscriptions, you can coordinate updates for all of your applications and clusters. For instance, you can define subscriptions to deploy only new application resources between 6:01 PM and 11:59 PM and define other subscriptions to deploy only updated versions of existing resources between 12:00 AM and 7:59 AM. When a time window is defined for a subscription, the time ranges when a subscription is active change. As part of defining a time window, you can define the subscription to be active or blocked during that window. The deployment of new or changed resources begins only when the subscription is active. Regardless of whether a subscription is active or blocked, the subscription continues to monitor for any new or changed resource. The active and blocked setting affects only deployments. When a new or changed resource is detected, the time window definition determines the action for the subscription. For subscriptions to HelmRepo , ObjectBucket , and Git type channels: If the resource is detected during the time range when the subscription is active , the resource deployment begins. If the resource is detected outside the time range when the subscription is blocked from running deployments, the request to deploy the resource is cached. When the time range that the subscription is active occurs, the cached requests are applied and any related deployments begin. When a time window is blocked , all resources that were previously deployed by the application subscription remain. Any new update is blocked until the time window is active again. A blocked time window does not remove the previously deployed resources; they stay in place, and only new updates are deferred until the time window is active again. If a deployment begins during a defined time window and is running when the defined end of the time window elapses, the deployment continues to run to completion. To define a time window for a subscription, you need to add the required fields and values to the subscription resource definition YAML. As part of defining a time window, you can define the days and hours for the time window. You can also define the time window type, which determines whether the time window when deployments can begin occurs during, or outside, the defined time frame. If the time window type is active , deployments can begin only during the defined time frame. You can use this setting when you want deployments to occur within only specific maintenance windows. 
If the time window type is block , deployments cannot begin during the defined time frame, but can begin at any other time. You can use this setting when you have critical updates that are required, but still need to avoid deployments during specific time ranges. For instance, you can use this type to define a time window to allow security-related updates to be applied at any time except between 10:00 AM and 2:00 PM. You can define multiple time windows for a subscription, such as to define a time window every Monday and Wednesday. 1.6.9. Configuring package overrides Configure package overrides for a subscription override value for the Helm chart or Kubernetes resource that is subscribed to by the subscription. To configure a package override, specify the field within the Kubernetes resource spec to override as the value for the path field. Specify the replacement value as the value for the value field. For example, if you need to override the values field within the spec for a Helm release for a subscribed Helm chart, you need to set the value for the path field in your subscription definition to spec . packageOverrides: - packageName: nginx-ingress packageOverrides: - path: spec value: my-override-values 1 1 The contents for the value field are used to override the values within the spec field of the Helm spec. For a Helm release, override values for the spec field are merged into the Helm release values.yaml file to override the existing values. This file is used to retrieve the configurable variables for the Helm release. If you need to override the release name for a Helm release, include the packageOverride section within your definition. Define the packageAlias for the Helm release by including the following fields: packageName to identify the Helm chart. packageAlias to indicate that you are overriding the release name. By default, if no Helm release name is specified, the Helm chart name is used to identify the release. In some cases, such as when there are multiple releases subscribed to the same chart, conflicts can occur. The release name must be unique among the subscriptions within a namespace. If the release name for a subscription that you are creating is not unique, an error occurs. You must set a different release name for your subscription by defining a packageOverride . If you want to change the name within an existing subscription, you must first delete that subscription and then recreate the subscription with the preferred release name. packageOverrides: - packageName: nginx-ingress packageAlias: my-helm-release-name 1.6.10. Channel samples overview View samples and YAML definitions that you can use to build your files. Channels ( channel.apps.open-cluster-management.io ) provide you with improved continuous integration and continuous delivery capabilities for creating and managing your Red Hat Advanced Cluster Management for Kubernetes applications. To use the OpenShift CLI tool, see the following procedure: Compose and save your application YAML file with your preferred editing tool. Run the following command to apply your file to an API server. Replace filename with the name of your file: Verify that your application resource is created by running the following command: Channel YAML structure Channel YAML table Object storage bucket (ObjectBucket) channel Helm repository ( HelmRepo ) channel Git ( Git ) repository channel 1.6.10.1. Channel YAML structure For application samples that you can deploy, see the stolostron repository. 
The following YAML structures show the required fields for a channel and some of the common optional fields. Your YAML structure needs to include some required fields and values. Depending on your application management requirements, you might need to include other optional fields and values. You can compose your own YAML content with any tool and in the product console. apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: namespace: # Each channel needs a unique namespace, except Git channel. spec: sourceNamespaces: type: pathname: secretRef: name: gates: annotations: labels: 1.6.10.2. Channel YAML table Field Optional or required Description apiVersion Required Set the value to apps.open-cluster-management.io/v1 . kind Required Set the value to Channel to indicate that the resource is a channel. metadata.name Required The name of the channel. metadata.namespace Required The namespace for the channel; Each channel needs a unique namespace, except the Git channel. spec.sourceNamespaces Optional Identifies the namespace that the channel controller monitors for new or updated deployables to retrieve and promote to the channel. spec.type Required The channel type. The supported types are: HelmRepo , Git , and ObjectBucket (Object storage in the console) spec.pathname Required for HelmRepo , Git , ObjectBucket channels For a HelmRepo channel, set the value to be the URL for the Helm repository. For an ObjectBucket channel, set the value to be the URL for the Object storage. For a Git channel, set the value to be the HTTPS URL for the Git repository. spec.secretRef.name Optional Identifies a Kubernetes Secret resource to use for authentication, such as for accessing a repository or chart. You can use a secret for authentication with only HelmRepo , ObjectBucket , and Git type channels. spec.gates Optional Defines requirements for promoting a deployable within the channel. If no requirements are set, any deployable that is added to the channel namespace or source is promoted to the channel. The gates value is only for ObjectBucket channel types and does not apply to HelmRepo and Git channel types, . spec.gates.annotations Optional The annotations for the channel. Deployables must have matching annotations to be included in the channel. metadata.labels Optional The labels for the channel. spec.insecureSkipVerify Optional Default value is false , if set true , the channel connection is built by skipping the authentication The definition structure for a channel can resemble the following YAML content: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/ 1.6.10.3. Object storage bucket (ObjectBucket) channel The following example channel definition abstracts an Object storage bucket as a channel: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: ObjectBucket pathname: [http://9.28.236.243:xxxx/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true 1.6.10.4. Helm repository ( HelmRepo ) channel The following example channel definition abstracts a Helm repository as a channel: Deprecation notice: For 2.11, specifying insecureSkipVerify: "true" in channel ConfigMap reference to skip Helm repo SSL certificate is deprecated. 
See the replacement in the following current sample, with spec.insecureSkipVerify: true that is used in the channel instead: apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: Helm namespace: hub-repo spec: pathname: [https://9.21.107.150:8443/helm-repo/charts] # URL references a valid chart URL. insecureSkipVerify: true type: HelmRepo The following channel definition shows another example of a Helm repository channel: Note: For Helm, all Kubernetes resources contained within the Helm chart must have the label release {{ .Release.Name }} for the application topology to display properly. apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/ 1.6.10.5. Git ( Git ) repository channel The following example channel definition displays an example of a channel for the Git Repository. In the following example, secretRef refers to the user identity that is used to access the Git repo that is specified in the pathname . If you have a public repo, you do not need the secretRef label and value: apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: hive-cluster-gitrepo namespace: gitops-cluster-lifecycle spec: type: Git pathname: https://github.com/open-cluster-management/gitops-clusters.git secretRef: name: github-gitops-clusters --- apiVersion: v1 kind: Secret metadata: name: github-gitops-clusters namespace: gitops-cluster-lifecycle data: user: dXNlcgo= # Value of user and accessToken is Base 64 coded. accessToken: cGFzc3dvcmQ 1.6.11. Subscription samples overview View samples and YAML definitions that you can use to build your files. As with channels, subscriptions ( subscription.apps.open-cluster-management.io ) provide you with improved continuous integration and continuous delivery capabilities for application management. To use the OpenShift CLI tool, see the following procedure: Compose and save your application YAML file with your preferred editing tool. Run the following command to apply your file to an api server. Replace filename with the name of your file: oc apply -f filename.yaml Verify that your application resource is created by running the following command: oc get application.app Subscription YAML structure Subscription YAML table Subscription file samples Subscription time window example Subscription with overrides example Helm repository subscription example Git repository subscription example 1.6.11.1. Subscription YAML structure The following YAML structure shows the required fields for a subscription and some of the common optional fields. Your YAML structure needs to include certain required fields and values. Depending on your application management requirements, you might need to include other optional fields and values. You can compose your own YAML content with any tool: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: namespace: labels: spec: sourceNamespace: source: channel: name: packageFilter: version: labelSelector: matchLabels: package: component: annotations: packageOverrides: - packageName: packageAlias: - path: value: placement: local: clusters: name: clusterSelector: placementRef: name: kind: Placement overrides: clusterName: clusterOverrides: path: value: 1.6.11.2. 
Subscription YAML table Field Required or Optional Description apiVersion Required Set the value to apps.open-cluster-management.io/v1 . kind Required Set the value to Subscription to indicate that the resource is a subscription. metadata.name Required The name for identifying the subscription. metadata.namespace Required The namespace resource to use for the subscription. metadata.labels Optional The labels for the subscription. spec.channel Optional The namespace name ("Namespace/Name") that defines the channel for the subscription. Define either the channel , or the source , or the sourceNamespace field. In general, use the channel field to point to the channel instead of using the source or sourceNamespace fields. If more than one field is defined, the first field that is defined is used. spec.sourceNamespace Optional The source namespace where deployables are stored on the hub cluster. Use this field only for namespace channels. Define either the channel , or the source , or the sourceNamespace field. In general, use the channel field to point to the channel instead of using the source or sourceNamespace fields. spec.source Optional The path name ("URL") to the Helm repository where deployables are stored. Use this field for only Helm repository channels. Define either the channel , or the source , or the sourceNamespace field. In general, use the channel field to point to the channel instead of using the source or sourceNamespace fields. spec.name Required for HelmRepo type channels, optional for ObjectBucket type channels The specific name for the target Helm chart or deployable within the channel. If neither the name or packageFilter are defined for channel types where the field is optional, all deployables are found and the latest version of each deployable is retrieved. spec.packageFilter Optional Defines the parameters to use to find target deployables or a subset of a deployables. If multiple filter conditions are defined, a deployable must meet all filter conditions. spec.packageFilter.version Optional The version or versions for the deployable. You can use a range of versions in the form >1.0 , or <3.0 . By default, the version with the most recent "creationTimestamp" value is used. spec.packageFilter.annotations Optional The annotations for the deployable. spec.packageOverrides Optional Section for defining overrides for the Kubernetes resource that is subscribed to by the subscription, such as a Helm chart, deployable, or other Kubernetes resource within a channel. spec.packageOverrides.packageName Optional, but required for setting override Identifies the Kubernetes resource that is being overwritten. spec.packageOverrides.packageAlias Optional Gives an alias to the Kubernetes resource that is being overwritten. spec.packageOverrides.packageOverrides Optional The configuration of parameters and replacement values to use to override the Kubernetes resource. spec.placement Required Identifies the subscribing clusters where deployables need to be placed, or the placement rule that defines the clusters. Use the placement configuration to define values for multicluster deployments. spec.placement.local Optional, but required for a stand-alone cluster or cluster that you want to manage directly Defines whether the subscription must be deployed locally. Set the value to true to have the subscription synchronize with the specified channel. Set the value to false to prevent the subscription from subscribing to any resources from the specified channel. 
Use this field when your cluster is a stand-alone cluster or you are managing this cluster directly. If your cluster is part of a multicluster and you do not want to manage the cluster directly, use only one of clusters , clusterSelector , or placementRef to define where your subscription is to be placed. If your cluster is the Hub of a multicluster and you want to manage the cluster directly, you must register the Hub as a managed cluster before the subscription operator can subscribe to resources locally. spec.placement.clusters Optional Defines the clusters where the subscription is to be placed. Only one of clusters , clusterSelector , or placementRef is used to define where your subscription is to be placed for a multicluster. If your cluster is a stand-alone cluster that is not your hub cluster, you can also use local cluster . spec.placement.clusters.name Optional, but required for defining the subscribing clusters The name or names of the subscribing clusters. spec.placement.clusterSelector Optional Defines the label selector to use to identify the clusters where the subscription is to be placed. Use only one of clusters , clusterSelector , or placementRef to define where your subscription is to be placed for a multicluster. If your cluster is a stand-alone cluster that is not your hub cluster, you can also use local cluster . spec.placement.placementRef Optional Defines the placement rule to use for the subscription. Use only one of clusters , clusterSelector , or placementRef to define where your subscription is to be placed for a multicluster. If your cluster is a stand-alone cluster that is not your Hub cluster, you can also use local cluster . spec.placement.placementRef.name Optional, but required for using a placement rule The name of the placement rule for the subscription. spec.placement.placementRef.kind Optional, but required for using a placement rule. Set the value to Placement to indicate that a placement rule is used for deployments with the subscription. spec.overrides Optional Any parameters and values that need to be overridden, such as cluster-specific settings. spec.overrides.clusterName Optional The name of the cluster or clusters where parameters and values are being overridden. spec.overrides.clusterOverrides Optional The configuration of parameters and values to override. spec.timeWindow Optional Defines the settings for configuring a time window when the subscription is active or blocked. spec.timeWindow.type Optional, but required for configuring a time window Indicates whether the subscription is active or blocked during the configured time window. Deployments for the subscription occur only when the subscription is active. spec.timeWindow.location Optional, but required for configuring a time window The time zone of the configured time range for the time window. All time zones must use the Time Zone (tz) database name format. For more information, see Time Zone Database . spec.timeWindow.daysofweek Optional, but required for configuring a time window Indicates the days of the week when the time range is applied to create a time window. The list of days must be defined as an array, such as daysofweek: ["Monday", "Wednesday", "Friday"] . spec.timeWindow.hours Optional, but required for configuring a time window Defined the time range for the time window. A start time and end time for the hour range must be defined for each time window. You can define multiple time window ranges for a subscription. 
spec.timeWindow.hours.start Optional, but required for configuring a time window The timestamp that defines the beginning of the time window. The timestamp must use the Go programming language Kitchen format "hh:mmpm" . For more information, see Constants . spec.timeWindow.hours.end Optional, but required for configuring a time window The timestamp that defines the ending of the time window. The timestamp must use the Go programming language Kitchen format "hh:mmpm" . For more information, see Constants . Notes: When you are defining your YAML, a subscription can use packageFilters to point to multiple Helm charts, deployables, or other Kubernetes resources. The subscription, however, only deploys the latest version of one chart, or deployable, or other resource. For time windows, when you are defining the time range for a window, the start time must be set to occur before the end time. If you are defining multiple time windows for a subscription, the time ranges for the windows cannot overlap. The actual time ranges are based on the subscription-controller container time, which can be set to a different time and location than the time and location that you are working within. Within your subscription specification, you can also define the placement of a Helm release as part of the subscription definition. Each subscription can reference an existing placement rule, or define a placement rule directly within the subscription definition. When you are defining where to place your subscription in the spec.placement section, use only one of clusters , clusterSelector , or placementRef for a multicluster environment. If you include more than one placement setting, one setting is used and others are ignored. The following priority is used to determine which setting the subscription operator uses: placementRef clusters clusterSelector Your subscription can resemble the following YAML content: apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: "1.36.x" placement: placementRef: kind: Placement name: towhichcluster overrides: - clusterName: "/" clusterOverrides: - path: "metadata.namespace" value: default 1.6.11.3. Subscription file samples For application samples that you can deploy, see the stolostron repository. apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress 1.6.11.4. Secondary channel sample If there is a mirrored channel (application source repository), you can specify a secondaryChannel in the subscription YAML. When an application subscription fails to connect to the repository server using the primary channel, it connects to the repository server using the secondary channel. Ensure that the application manifests stored in the secondary channel are in sync with the primary channel. See the following sample subscription YAML with the secondaryChannel . apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch secondaryChannel: ns-ch-2/predev-ch-2 name: nginx-ingress 1.6.11.4.1. Subscription time window example The following example subscription includes multiple configured time windows. A time window occurs between 10:20 AM and 10:30 AM every Monday, Wednesday, and Friday. 
A time window also occurs between 12:40 PM and 1:40 PM every Monday, Wednesday, and Friday. The subscription is active only during these six weekly time windows for deployments to begin. apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: "1.36.x" placement: placementRef: kind: Placement name: towhichcluster timewindow: windowtype: "active" location: "America/Los_Angeles" daysofweek: ["Monday", "Wednesday", "Friday"] hours: - start: "10:20AM" end: "10:30AM" - start: "12:40PM" end: "1:40PM" For timewindow , enter active or blocked , depending on the purpose of the type. 1.6.11.4.2. Subscription with overrides example The following example includes package overrides to define a different release name of the Helm release for Helm chart. A package override setting is used to set the name my-nginx-ingress-releaseName as the different release name for the nginx-ingress Helm release. apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: simple namespace: default spec: channel: ns-ch/predev-ch name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: my-nginx-ingress-releaseName packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: false 1.6.11.4.3. Helm repository subscription example The following subscription automatically pulls the latest nginx Helm release for the version 1.36.x . The Helm release deployable is placed on the my-development-cluster-1 cluster when a new version is available in the source Helm repository. The spec.packageOverrides section shows optional parameters for overriding values for the Helm release. The override values are merged into the Helm release values.yaml file, which is used to retrieve the configurable variables for the Helm release. apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: "1.36.x" placement: clusters: - name: my-development-cluster-1 packageOverrides: - packageName: my-server-integration-prod packageOverrides: - path: spec value: persistence: enabled: false useDynamicProvisioning: false license: accept tls: hostname: my-mcm-cluster.icp sso: registrationImage: pullSecret: hub-repo-docker-secret 1.6.11.4.4. Git repository subscription example 1.6.11.4.4.1. Subscribing specific branch and directory of Git repository apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sample-subscription namespace: default annotations: apps.open-cluster-management.io/git-path: sample_app_1/dir1 apps.open-cluster-management.io/git-branch: branch1 spec: channel: default/sample-channel placement: placementRef: kind: Placement name: dev-clusters In this example subscription, the annotation apps.open-cluster-management.io/git-path indicates that the subscription subscribes to all Helm charts and Kubernetes resources within the sample_app_1/dir1 directory of the Git repository that is specified in the channel. The subscription subscribes to master branch by default. In this example subscription, the annotation apps.open-cluster-management.io/git-branch: branch1 is specified to subscribe to branch1 branch of the repository. 
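The sample subscription above references the channel default/sample-channel but does not show it. A minimal Git channel for that reference might look like the following sketch; the repository URL is a placeholder.

apiVersion: apps.open-cluster-management.io/v1
kind: Channel
metadata:
  name: sample-channel      # matches the channel reference default/sample-channel in the subscription
  namespace: default
spec:
  type: Git
  pathname: <Git HTTPS URL>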
Note: When you are using a Git channel subscription that subscribes to Helm charts, the resource topology view might show an additional Helmrelease resource. This resource is an internal application management resource and can be safely ignored. 1.6.11.4.4.2. Adding a .kubernetesignore file You can include a .kubernetesignore file within your Git repository root directory, or within the apps.open-cluster-management.io/git-path directory that is specified in the subscription's annotations. You can use this .kubernetesignore file to specify patterns of files or subdirectories, or both, to ignore when the subscription deploys Kubernetes resources or Helm charts from the repository. You can also use the .kubernetesignore file for fine-grained filtering to selectively apply Kubernetes resources. The pattern format of the .kubernetesignore file is the same as a .gitignore file. If the apps.open-cluster-management.io/git-path annotation is not defined, the subscription looks for a .kubernetesignore file in the repository root directory. If the apps.open-cluster-management.io/git-path field is defined, the subscription looks for the .kubernetesignore file in the apps.open-cluster-management.io/git-path directory. Subscriptions do not search in any other directory for a .kubernetesignore file. 1.6.11.4.4.3. Applying Kustomize If there is a kustomization.yaml or kustomization.yml file in a subscribed Git folder, kustomize is applied. You can use spec.packageOverrides to override the kustomization at subscription deployment time. apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: example-subscription namespace: default spec: channel: some/channel packageOverrides: - packageName: kustomization packageOverrides: - value: | patchesStrategicMerge: - patch.yaml In order to override the kustomization.yaml file, packageName: kustomization is required in packageOverrides . The override either adds new entries or updates existing entries. It does not remove existing entries. 1.6.11.4.4.4. Enabling Git WebHook By default, a Git channel subscription clones the Git repository specified in the channel every minute and applies changes when the commit ID has changed. Alternatively, you can configure your subscription to apply changes only when the Git repository sends repo PUSH and PULL webhook event notifications. In order to configure a webhook in a Git repository, you need a target webhook payload URL and, optionally, a secret. 1.6.11.4.4.4.1. Payload URL Create a route (ingress) in the hub cluster to expose the subscription operator's webhook event listener service. oc create route passthrough --service=multicluster-operators-subscription -n open-cluster-management Then, use the oc get route multicluster-operators-subscription -n open-cluster-management command to find the externally-reachable hostname. The webhook payload URL is https://<externally-reachable hostname>/webhook . 1.6.11.4.4.4.2. Webhook secret The webhook secret is optional. Create a Kubernetes secret in the channel namespace. The secret must contain data.secret . See the following example: apiVersion: v1 kind: Secret metadata: name: my-github-webhook-secret data: secret: BASE64_ENCODED_SECRET The value of data.secret is the base64-encoded webhook secret that you are going to use. Best practice: Use a unique secret for each Git repository. 1.6.11.4.4.4.3. Configuring WebHook in Git repository Use the payload URL and webhook secret to configure WebHook in your Git repository. 1.6.11.4.4.4.4. 
Enable WebHook event notification in channel Annotate the subscription channel. See the following example: oc annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-enabled="true" If you used a secret to configure WebHook, annotate the channel with this as well where <the_secret_name> is the kubernetes secret name containing webhook secret. oc annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-secret="<the_secret_name>" No webhook specific configuration is needed in subscriptions. 1.6.12. Placement rule samples overview (Deprecated) Deprecated: PlacementRules is deprecated. Use Placement instead. Placement rules ( placementrule.apps.open-cluster-management.io ) define the target clusters where deployables can be deployed. Use placement rules to help you facilitate the multicluster deployment of your deployables. To use the OpenShift CLI tool, see the following procedure: Compose and save your application YAML file with your preferred editing tool. Run the following command to apply your file to an API server. Replace filename with the name of your file: oc apply -f filename.yaml Verify that your application resource is created by running the following command: oc get application.app Placement rule YAML structure Placement rule YAML values table Placement rule sample files 1.6.12.1. Placement rule YAML structure The following YAML structure shows the required fields for a placement rule and some of the common optional fields. Your YAML structure needs to include some required fields and values. Depending on your application management requirements, you might need to include other optional fields and values. You can compose your own YAML content with any tool and in the product console apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: namespace: resourceVersion: labels: app: chart: release: heritage: selfLink: uid: spec: clusterSelector: matchLabels: datacenter: environment: clusterReplicas: clusterConditions: ResourceHint: type: order: Policies: 1.6.12.2. Placement rule YAML values table Field Required or Optional Description apiVersion Required Set the value to apps.open-cluster-management.io/v1 . kind Required Set the value to PlacementRule to indicate that the resource is a placement rule. metadata.name Required The name for identifying the placement rule. metadata.namespace Required The namespace resource to use for the placement rule. metadata.resourceVersion Optional The version of the placement rule resource. metadata.labels Optional The labels for the placement rule. spec.clusterSelector Optional The labels for identifying the target clusters spec.clusterSelector.matchLabels Optional The labels that must exist for the target clusters. spec.clusterSelector.matchExpressions Optional The labels that must exist for the target clusters. status.decisions Optional Defines the target clusters where deployables are placed. status.decisions.clusterName Optional The name of a target cluster status.decisions.clusterNamespace Optional The namespace for a target cluster. spec.clusterReplicas Optional The number of replicas to create. spec.clusterConditions Optional Define any conditions for the cluster. spec.ResourceHint Optional If more than one cluster matches the labels and values that you provided in the fields, you can specify a resource specific criteria to select the clusters. For example, you can select the cluster with the most available CPU cores. 
spec.ResourceHint.type Optional Set the value to either cpu to select clusters based on available CPU cores or memory to select clusters based on available memory resources. spec.ResourceHint.order Optional Set the value to either asc for ascending order, or desc for descending order. spec.Policies Optional The policy filters for the placement rule. 1.6.12.3. Placement rule sample files For application samples that you can deploy, see the stolostron repository. Existing placement rules can include the following fields that indicate the status for the placement rule. This status section is appended after the spec section in the YAML structure for a rule. status: decisions: clusterName: clusterNamespace: Field Description status The status information for the placement rule. status.decisions Defines the target clusters where deployables are placed. status.decisions.clusterName The name of a target cluster status.decisions.clusterNamespace The namespace for a target cluster. Example 1 apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: gbapp-gbapp namespace: development labels: app: gbapp spec: clusterSelector: matchLabels: environment: Dev clusterReplicas: 1 status: decisions: - clusterName: local-cluster clusterNamespace: local-cluster Example 2 apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: ns-sub-1 labels: app: nginx-app-details spec: clusterReplicas: 1 clusterConditions: - type: ManagedClusterConditionAvailable status: "True" clusterSelector: matchExpressions: - key: environment operator: In values: - dev 1.6.13. Application samples View samples and YAML definitions that you can use to build your files. Applications ( Application.app.k8s.io ) in Red Hat Advanced Cluster Management for Kubernetes are used for viewing the application components. To use the OpenShift CLI tool, see the following procedure: Compose and save your application YAML file with your preferred editing tool. Run the following command to apply your file to an API server. Replace filename with the name of your file: oc apply -f filename.yaml Verify that your application resource is created by running the following command: oc get application.app Application YAML structure Application YAML table Application file samples 1.6.13.1. Application YAML structure To compose the application definition YAML content for creating or updating an application resource, your YAML structure needs to include some required fields and values. Depending on your application requirements or application management requirements, you might need to include other optional fields and values. The following YAML structure shows the required fields for an application and some of the common optional fields. apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: namespace: spec: selector: matchLabels: label_name: label_value 1.6.13.2. Application YAML table Field Value Description apiVersion app.k8s.io/v1beta1 Required kind Application Required metadata name : The name for identifying the application resource. Required namespace : The namespace resource to use for the application. spec selector.matchLabels key:value pair that are a Kubernetes label and value found on the subscription or subscriptions this application will be associated with. The label allows the application resource to find the related subscriptions by performing a label name and value match. 
Required The spec for defining these applications is based on the Application metadata descriptor custom resource definition that is provided by the Kubernetes Special Interest Group (SIG). Only the values shown in the table are required. You can use this definition to help you compose your own application YAML content. For more information about this definition, see Kubernetes SIG Application CRD community specification . 1.6.13.3. Application file samples For application samples that you can deploy, see the stolostron repository. The definition structure for an application can resemble the following example YAML content: apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: my-application namespace: my-namespace spec: selector: matchLabels: my-label: my-label-value
[ "apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: sample-application-set namespace: sample-gitops-namespace spec: generators: - clusterDecisionResource: configMapRef: acm-placement labelSelector: matchLabels: cluster.open-cluster-management.io/placement: sample-application-placement requeueAfterSeconds: 180 template: metadata: name: sample-application-{{name}} spec: project: default sources: [ { repoURL: https://github.com/sampleapp/apprepo.git targetRevision: main path: sample-application } ] destination: namespace: sample-application server: \"{{server}}\" syncPolicy: syncOptions: - CreateNamespace=true - PruneLast=true - Replace=true - ApplyOutOfSyncOnly=true - Validate=false automated: prune: true allowEmpty: true selfHeal: true", "apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: sample-application-placement namespace: sample-gitops-namespace spec: clusterSets: - sampleclusterset", "apiVersion: apps.open-cluster-management.io/v1alpha1 kind: SubscriptionStatus metadata: labels: apps.open-cluster-management.io/cluster: <your-managed-cluster> apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> statuses: packages: - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" Message: <detailed error. visible only if the package fails> name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-slave namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-slave namespace: test-ns-2 phase: Deployed subscription: lastUpdateTime: \"2021-09-13T20:12:34Z\" phase: Deployed", "apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/cluster: \"true\" name: <your-managed-cluster-1> namespace: <your-managed-cluster-1> reportType: Cluster results: - result: deployed source: appsub-1-ns/appsub-1 // appsub 1 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: failed source: appsub-2-ns/appsub-2 // appsub 2 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: propagationFailed source: appsub-3-ns/appsub-3 // appsub 3 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362", "apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> reportType: Application resources: - apiVersion: v1 kind: Service name: redis-master2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-master2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: redis-slave2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-slave2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: frontend2 namespace: playback-ns-2 - apiVersion: apps/v1 
kind: Deployment name: frontend2 namespace: playback-ns-2 results: - result: deployed source: cluster-1 //cluster 1 status timestamp: nanos: 0 seconds: 0 - result: failed source: cluster-3 //cluster 2 status timestamp: nanos: 0 seconds: 0 - result: propagationFailed source: cluster-4 //cluster 3 status timestamp: nanos: 0 seconds: 0 summary: deployed: 8 failed: 1 inProgress: 0 propagationFailed: 1 clusters: 10", "% oc get managedclusterview -n <failing-clusternamespace> \"<app-name>-<app name>\"", "% getAppSubStatus.sh -c <your-managed-cluster> -s <your-appsub-namespace> -n <your-appsub-name>", "% getLastUpdateTime.sh -c <your-managed-cluster> -s <your-appsub-namespace> -n <your-appsub-name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: \"yes\"", "apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm namespace: hub-repo spec: pathname: [https://kubernetes-charts.storage.googleapis.com/] # URL references a valid chart URL. type: HelmRepo", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: Object storage pathname: [http://sample-ip:#####/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true", "https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 s3://sample-bucket-1/ https://sample-bucket-1.s3.amazonaws.com/", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: object-dev namespace: ch-object-dev spec: type: ObjectBucket pathname: https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 secretRef: name: secret-dev --- apiVersion: v1 kind: Secret metadata: name: secret-dev namespace: ch-object-dev stringData: AccessKeyID: <your AWS bucket access key id> SecretAccessKey: <your AWS bucket secret access key> Region: <your AWS bucket region> type: Opaque", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: obj-sub-ns spec: clusterSelector: {} --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: obj-sub namespace: obj-sub-ns spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster", "annotations: apps.open-cluster-management.io/bucket-path: <subfolder-1>", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/bucket-path: subfolder1 name: obj-sub namespace: obj-sub-ns labels: name: obj-sub spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: \"yes\"", "apiVersion: v1 kind: Secret metadata: name: toweraccess namespace: 
same-as-subscription type: Opaque stringData: token: ansible-tower-api-token host: https://ansible-tower-host-url", "apply -f", "apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess job_template_name: Demo Job Template extra_vars: cost: 6.88 ghosts: [\"inky\",\"pinky\",\"clyde\",\"sue\"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45 job_tags: \"provision,install,configuration\" skip_tags: \"configuration,restart\"", "apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess workflow_template_name: Demo Workflow Template extra_vars: cost: 6.88 ghosts: [\"inky\",\"pinky\",\"clyde\",\"sue\"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45", "apiVersion: `image.openshift.io/v1` kind: ImageStream metadata: name: default namespace: default spec: lookupPolicy: local: true tags: - name: 'latest' from: kind: DockerImage name: 'quay.io/repository/open-cluster-management/multicluster-operators-subscription:community-latest'", "--- apiVersion: v1 kind: Namespace metadata: name: multins --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: multins data: path: resource1 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-2 namespace: default data: path: resource2 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-3 data: path: resource3", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: subscription-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge apps.open-cluster-management.io/current-namespace-scoped: \"true\" spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters", "apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 19", "apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters", "apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 20", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: mergeAndOwn spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters", "apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns annotations: apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example data: name: user1 age: 20", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: replace spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters", "apiVersion: v1 
kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-branch: <branch1>", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-desired-commit: <full commit number> apps.open-cluster-management.io/git-clone-depth: 100", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-tag: <v1.0> apps.open-cluster-management.io/git-clone-depth: 100", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: open-cluster-management:subscription-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: open-cluster-management:subscription-admin", "edit clusterrolebinding open-cluster-management:subscription-admin", "subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: example-name - apiGroup: rbac.authorization.k8s.io kind: Group name: example-group-name - kind: ServiceAccount name: my-service-account namespace: my-service-account-namespace - apiGroup: rbac.authorization.k8s.io kind: User name: 'system:serviceaccount:my-service-account-namespace:my-service-account'", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: sub2 name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel allow: - apiVersion: policy.open-cluster-management.io/v1 kinds: - Policy - apiVersion: v1 kinds: - Deployment deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: local: true", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: myapplication name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: placementRef: name: demo-placement kind: Placement", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: <value from the list> spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: <application1> apps.open-cluster-management.io/git-branch: <branch1> spec: channel: sample/git-channel placement: local: true", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: application1 apps.open-cluster-management.io/git-branch: branch1 apps.open-cluster-management.io/reconcile-rate: \"off\" spec: channel: sample/git-channel placement: local: true", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: 
apps.open-cluster-management.io/reconcile-rate: low spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription annotations: apps.open-cluster-management.io/reconcile-rate: \"off\" spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true", "annotate mch -n open-cluster-management multiclusterhub mch-pause=true --overwrite=true", "edit deployment -n open-cluster-management multicluster-operators-hub-subscription", "annotate mch -n open-cluster-management multiclusterhub mch-pause=false --overwrite=true", "command: - /usr/local/bin/multicluster-operators-subscription - --sync-interval=60 - --retry-period=52", "apiVersion: v1 kind: Secret metadata: name: my-git-secret namespace: channel-ns data: user: dXNlcgo= accessToken: cGFzc3dvcmQK", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: sample-channel namespace: channel-ns spec: type: Git pathname: <Git HTTPS URL> secretRef: name: my-git-secret", "x509: certificate is valid for localhost.com, not localhost", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: labels: name: sample-channel namespace: sample spec: type: GitHub pathname: <Git HTTPS URL> insecureSkipVerify: true", "apiVersion: v1 kind: ConfigMap metadata: name: git-ca namespace: channel-ns data: caCerts: | # Git server root CA -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv 
jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE----- # Git server intermediate CA 1 -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE----- # Git server intermediate CA 2 -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 
U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE-----", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: configMapRef: name: git-ca pathname: <Git HTTPS URL> type: Git", "apiVersion: v1 kind: Secret metadata: name: git-ssh-key namespace: channel-ns data: sshKey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQ21GbGN6STFOaTFqZEhJQUFBQUdZbU55ZVhCMEFBQUFHQUFBQUJDK3YySHhWSIwCm8zejh1endzV3NWODMvSFVkOEtGeVBmWk5OeE5TQUgcFA3Yk1yR2tlRFFPd3J6MGIKOUlRM0tKVXQzWEE0Zmd6NVlrVFVhcTJsZWxxVk1HcXI2WHF2UVJ5Mkc0NkRlRVlYUGpabVZMcGVuaGtRYU5HYmpaMmZOdQpWUGpiOVhZRmd4bTNnYUpJU3BNeTFLWjQ5MzJvOFByaDZEdzRYVUF1a28wZGdBaDdndVpPaE53b0pVYnNmYlZRc0xMS1RrCnQwblZ1anRvd2NEVGx4TlpIUjcwbGVUSHdGQTYwekM0elpMNkRPc3RMYjV2LzZhMjFHRlMwVmVXQ3YvMlpMOE1sbjVUZWwKSytoUWtxRnJBL3BUc1ozVXNjSG1GUi9PV25FPQotLS0tLUVORCBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0K passphrase: cGFzc3cwcmQK type: Opaque", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git insecureSkipVerify: true", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 spec: watchHelmNamespaceScopedResources: true channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\"", "packageOverrides: - packageName: nginx-ingress packageOverrides: - path: spec value: my-override-values 1", "packageOverrides: - packageName: nginx-ingress packageAlias: my-helm-release-name", "apply -f filename.yaml", "get application.app", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: namespace: # Each channel needs a unique namespace, except Git channel. 
spec: sourceNamespaces: type: pathname: secretRef: name: gates: annotations: labels:", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: ObjectBucket pathname: [http://9.28.236.243:xxxx/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true", "apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: Helm namespace: hub-repo spec: pathname: [https://9.21.107.150:8443/helm-repo/charts] # URL references a valid chart URL. insecureSkipVerify: true type: HelmRepo", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/", "apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: hive-cluster-gitrepo namespace: gitops-cluster-lifecycle spec: type: Git pathname: https://github.com/open-cluster-management/gitops-clusters.git secretRef: name: github-gitops-clusters --- apiVersion: v1 kind: Secret metadata: name: github-gitops-clusters namespace: gitops-cluster-lifecycle data: user: dXNlcgo= # Value of user and accessToken is Base 64 coded. accessToken: cGFzc3dvcmQ", "apply -f filename.yaml", "get application.app", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: namespace: labels: spec: sourceNamespace: source: channel: name: packageFilter: version: labelSelector: matchLabels: package: component: annotations: packageOverrides: - packageName: packageAlias: - path: value: placement: local: clusters: name: clusterSelector: placementRef: name: kind: Placement overrides: clusterName: clusterOverrides: path: value:", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: placementRef: kind: Placement name: towhichcluster overrides: - clusterName: \"/\" clusterOverrides: - path: \"metadata.namespace\" value: default", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch secondaryChannel: ns-ch-2/predev-ch-2 name: nginx-ingress", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: placementRef: kind: Placement name: towhichcluster timewindow: windowtype: \"active\" location: \"America/Los_Angeles\" daysofweek: [\"Monday\", \"Wednesday\", \"Friday\"] hours: - start: \"10:20AM\" end: \"10:30AM\" - start: \"12:40PM\" end: \"1:40PM\"", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: simple namespace: default spec: channel: ns-ch/predev-ch name: 
nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: my-nginx-ingress-releaseName packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: false", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: clusters: - name: my-development-cluster-1 packageOverrides: - packageName: my-server-integration-prod packageOverrides: - path: spec value: persistence: enabled: false useDynamicProvisioning: false license: accept tls: hostname: my-mcm-cluster.icp sso: registrationImage: pullSecret: hub-repo-docker-secret", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sample-subscription namespace: default annotations: apps.open-cluster-management.io/git-path: sample_app_1/dir1 apps.open-cluster-management.io/git-branch: branch1 spec: channel: default/sample-channel placement: placementRef: kind: Placement name: dev-clusters", "apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: example-subscription namespace: default spec: channel: some/channel packageOverrides: - packageName: kustomization packageOverrides: - value: | patchesStrategicMerge: - patch.yaml", "create route passthrough --service=multicluster-operators-subscription -n open-cluster-management", "apiVersion: v1 kind: Secret metadata: name: my-github-webhook-secret data: secret: BASE64_ENCODED_SECRET", "annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-enabled=\"true\"", "annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-secret=\"<the_secret_name>\"", "apply -f filename.yaml", "get application.app", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: namespace: resourceVersion: labels: app: chart: release: heritage: selfLink: uid: spec: clusterSelector: matchLabels: datacenter: environment: clusterReplicas: clusterConditions: ResourceHint: type: order: Policies:", "status: decisions: clusterName: clusterNamespace:", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: gbapp-gbapp namespace: development labels: app: gbapp spec: clusterSelector: matchLabels: environment: Dev clusterReplicas: 1 status: decisions: - clusterName: local-cluster clusterNamespace: local-cluster", "apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: ns-sub-1 labels: app: nginx-app-details spec: clusterReplicas: 1 clusterConditions: - type: ManagedClusterConditionAvailable status: \"True\" clusterSelector: matchExpressions: - key: environment operator: In values: - dev", "apply -f filename.yaml", "get application.app", "apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: namespace: spec: selector: matchLabels: label_name: label_value", "apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: my-application namespace: my-namespace spec: selector: matchLabels: my-label: my-label-value" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/applications/managing-applications
Chapter 3. Getting support
Chapter 3. Getting support Windows Container Support for Red Hat OpenShift is provided as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements . You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal . For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers . If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/windows_container_support_for_openshift/windows-containers-support
Chapter 8. Applying security policies
Chapter 8. Applying security policies During the in-place upgrade process, the SELinux policy must be switched to permissive mode. Furthermore, security profiles might contain changes between major releases. To restore system security, switch SELinux to enforcing mode again and verify the system-wide cryptographic policy. You may also want to remediate the system to be compliant with a specific security profile. Also, some security-related components require pre-update steps for a correct upgrade. 8.1. Changing SELinux mode to enforcing During the in-place upgrade process, the Leapp utility sets SELinux mode to permissive. When the system is successfully upgraded, you have to manually change SELinux mode to enforcing. Prerequisites The system has been upgraded and you have performed the Verification described in Verifying the post-upgrade state . Procedure Ensure that there are no SELinux denials, for example, by using the ausearch utility: Note that the step covers only the most common scenario. To check for all possible SELinux denials, see the Identifying SELinux denials section in the Using SELinux title, which provides a complete procedure. Open the /etc/selinux/config file in a text editor of your choice, for example: Configure the SELINUX=enforcing option: Save the change, and restart the system: Verification After the system restarts, confirm that the getenforce command returns Enforcing : Additional resources Troubleshooting problems related to SELinux Changing SELinux states and modes 8.2. System-wide cryptographic policies The system-wide cryptographic policy is a system component that configures the core cryptographic subsystems, covering the TLS, IPSec, SSH, DNSSec, and Kerberos protocols. The in-place upgrade process preserves the cryptographic policy you used in RHEL 8. For example, if you used the DEFAULT cryptographic policy in RHEL 8, your system upgraded to RHEL 9 also uses DEFAULT . Note that specific settings in predefined policies differ, and RHEL 9 cryptographic policies contain more strict and more secure default values. For example, the RHEL 9 DEFAULT cryptographic policy restricts SHA-1 usage for signatures and the LEGACY policy no longer allows DH and RSA keys shorter than 2048 bits. See the Using system-wide cryptographic policies section in the Security hardening document for more information. Custom cryptographic policies are preserved across the in-place upgrade. To view or change the current system-wide cryptographic policy, use the update-crypto-policies tool: For example, the following command switches the system-wide crypto policy level to FUTURE , which should withstand any near-term future attacks: If your scenario requires the use of SHA-1 for verifying existing or third-party cryptographic signatures, you can enable it by entering the following command: Alternatively, you can switch the system-wide crypto policies to the LEGACY policy. However, LEGACY also enables many other algorithms that are not secure. Warning Enabling the SHA1 subpolicy makes your system more vulnerable than the default RHEL 9 settings. Switching to the LEGACY policy is even less secure, and you should use it with caution. You can also customize system-wide cryptographic policies. For details, see the Customizing system-wide cryptographic policies with subpolicies and Creating and setting a custom system-wide cryptographic policy sections.
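Whether you stay on a predefined policy or customize one, it can help to see how applying and verifying a subpolicy fits together; the following is a minimal sketch that only combines the commands shown above (the final reboot is a general recommendation so that long-running services pick up the new back-end configuration, not a step specific to this subpolicy):
# Sketch: enable the SHA1 subpolicy on top of DEFAULT and confirm the result.
update-crypto-policies --set DEFAULT:SHA1
update-crypto-policies --show    # expected output: DEFAULT:SHA1
reboot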
If you use a custom cryptographic policy, consider reviewing and updating the policy to mitigate threats brought by advances in cryptography and computer hardware. Additional resources Using system-wide cryptographic policies update-crypto-policies(8) man page on your system 8.3. Upgrading a system hardened to a security baseline To get a fully hardened system after a successful upgrade to RHEL 9, you can use automated remediation provided by the OpenSCAP suite. OpenSCAP remediations align your system with security baselines, such as PCI-DSS, OSPP, or ACSC Essential Eight. The configuration compliance recommendations differ among major versions of RHEL due to the evolution of the security offering. When upgrading a hardened RHEL 8 system, the Leapp tool does not provide direct means to retain the full hardening. Depending on the changes in the component configuration, the system might diverge from the recommendations for RHEL 9 during the upgrade. Note You cannot use the same SCAP content for scanning RHEL 8 and RHEL 9. Update the management platforms if the compliance of the system is managed by tools such as Red Hat Satellite or Red Hat Insights. As an alternative to automated remediations, you can make the changes manually by following an OpenSCAP-generated report. For information about generating a compliance report, see Scanning the system for security compliance and vulnerabilities . Important Automated remediations support RHEL systems in the default configuration. Because the system configuration has been altered after the upgrade, running automated remediations might not make the system fully compliant with the required security profile. You might need to fix some requirements manually. The following example procedure hardens your system settings according to the PCI-DSS profile. Prerequisites The scap-security-guide package is installed on your RHEL 9 system. Procedure Find the appropriate security compliance data stream .xml file: See the Viewing compliance profiles section for more information. Remediate the system according to the selected profile from the appropriate data stream: You can replace the pci-dss value in the --profile argument with the ID of the profile according to which you want to harden your system. For a full list of profiles supported in RHEL 9, see SCAP security profiles supported in RHEL . Warning If not used carefully, running the system evaluation with the --remediate option enabled might render the system non-functional. Red Hat does not provide any automated method to revert changes made by security-hardening remediations. Remediations are supported on RHEL systems in the default configuration. If your system has been altered after the installation, running remediation might not make it compliant with the required security profile. Restart your system: Verification Verify that the system is compliant with the profile, and save the results in an HTML file: Additional resources scap-security-guide(8) and oscap(8) man pages on your system Scanning the system for security compliance and vulnerabilities Red Hat Insights Security Policy Red Hat Satellite Security Policy 8.4. Verifying USBGuard policies With the USBGuard software framework, you can protect your systems against intrusive USB devices by using lists of permitted and forbidden devices based on the USB device authorization feature in the kernel. Prerequisites You have created a rule set for USB devices that reflected the requirements of your scenario before the upgrade.
The usbguard service is installed and running on your RHEL 9 system. Procedure Back up your *.conf files stored in the /etc/usbguard/ directory. Use the usbguard generate-policy command to generate a new policy file. Note that the command generates rules for the currently present USB devices only. Compare the newly generated rules against the rules in the policy: If you identify differences in the rules for the devices that were present when you generated the new policy and the pre-upgrade rules for the same devices, modify the original rules correspondingly, also for devices that might be inserted later. If there are no differences between the newly generated and the pre-upgrade rules, you can use the policy files created in RHEL 8 without any modification. Additional resources Protecting systems against intrusive USB devices 8.5. Updating fapolicyd databases The fapolicyd software framework controls the execution of applications based on a user-defined policy. In rare cases, a problem with the fapolicyd trust database format can occur. To rebuild the database: Stop the service: Delete the database: Start the service: If you added custom trust files to the trust database, update them either individually by using the fapolicyd-cli -f update <FILE> command or altogether by using fapolicyd-cli -f update . To apply the changes, use either the fapolicyd-cli --update command or restart the fapolicyd service. Additionally, custom binaries might require a rebuild for the new RHEL version. Perform any such updates before you update the fapolicyd database. Additional resources Blocking and allowing applications using fapolicyd 8.6. Updating NSS databases from DBM to SQLite Many applications automatically convert the NSS database format from DBM to SQLite after you set the NSS_DEFAULT_DB_TYPE environment variable to the sql value on the system. You can ensure that all databases are converted by using the certutil tool. Note Convert your NSS databases stored in the DBM format before you upgrade to RHEL 9. In other words, perform the following steps on RHEL systems (6, 7, and 8) from which you want to upgrade to RHEL 9. Prerequisites The nss-tools package is installed on your system. Procedure Set NSS_DEFAULT_DB_TYPE to sql on the system: Use the conversion command in every directory [1] that contains NSS database files in the DBM format, for example: Note that you have to provide a password or a path to a password file as a value of the -f option if your database file is password-protected, for example: Additional resources certutil(1) man page on your system 8.7. Migrating Cyrus SASL databases from the Berkeley DB format to GDBM The RHEL 9 cyrus-sasl package is built without the libdb dependency, and the sasldb plugin uses the GDBM database format instead of Berkeley DB. Prerequisites The cyrus-sasl-lib package is installed on your system. Procedure To migrate your existing Simple Authentication and Security Layer (SASL) databases stored in the old Berkeley DB format, use the cyrusbdb2current tool with the following syntax: Additional resources cyrusbdb2current(1) man page on your system [1] RHEL contains a system-wide NSS database in the /etc/pki/nssdb directory. Other locations depend on applications you use. For example, Libreswan stores its database in the /etc/ipsec.d/ directory and Firefox uses the /home/<username>/.mozilla/firefox/ directory.
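The USBGuard comparison step in section 8.4 has no example commands in this chapter; the following is a minimal sketch, assuming the default rule set location /etc/usbguard/rules.conf (adjust the path if the RuleFile setting in your usbguard-daemon.conf points elsewhere):
# Sketch: back up the current rule set, generate a policy for the devices that
# are currently attached, and review the differences before merging them.
cp /etc/usbguard/rules.conf /etc/usbguard/rules.conf.pre-upgrade-check
usbguard generate-policy > /tmp/new-usbguard-rules.conf
diff -u /etc/usbguard/rules.conf /tmp/new-usbguard-rules.conf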
[ "ausearch -m AVC,USER_AVC -ts boot", "vi /etc/selinux/config", "This file controls the state of SELinux on the system. SELINUX= can take one of these three values: enforcing - SELinux security policy is enforced. permissive - SELinux prints warnings instead of enforcing. disabled - No SELinux policy is loaded. SELINUX= enforcing SELINUXTYPE= can take one of these two values: targeted - Targeted processes are protected, mls - Multi Level Security protection. SELINUXTYPE=targeted", "reboot", "getenforce Enforcing", "update-crypto-policies --show DEFAULT", "update-crypto-policies --set FUTURE Setting system policy to FUTURE", "update-crypto-policies --set DEFAULT:SHA1", "ls /usr/share/xml/scap/ssg/content/ ssg-rhel9-ds.xml", "oscap xccdf eval --profile pci-dss --remediate /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "reboot", "oscap xccdf eval --report pcidss_report.html --profile pci-dss /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "systemctl stop fapolicyd", "fapolicyd-cli --delete-db", "systemctl start fapolicyd", "export NSS_DEFAULT_DB_TYPE=sql", "certutil -K -X -d /etc/ipsec.d/", "certutil -K -X -f /etc/ipsec.d/nsspassword -d /etc/ipsec.d/", "cyrusbdb2current <sasldb_path> <new_path>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/upgrading_from_rhel_8_to_rhel_9/applying-security-policies_upgrading-from-rhel-8-to-rhel-9
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/making-open-source-more-inclusive_datagrid
4.4. RHEA-2012:0814 - new package: i2c-tools
4.4. RHEA-2012:0814 - new package: i2c-tools A new i2c-tools package is now available for Red Hat Enterprise Linux 6. The i2c-tools package contains a set of I2C tools for Linux: a bus probing tool, a chip dumper, register-level SMBus access helpers, EEPROM (Electrically Erasable Programmable Read-Only Memory) decoding scripts, EEPROM programming tools, and a python module for SMBus access. Note EEPROM decoding scripts can render your system unusable. Make sure to use these tools wisely. This enhancement update adds the i2c-tools package to Red Hat Enterprise Linux 6. (BZ# 773267 ) All users who require i2c-tools should install this new package.
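As an illustration of the bus probing tool, the following sketch lists the available I2C buses and scans one of them; the bus number 0 is an assumption that depends on your hardware, and probing can disturb some devices, so scan only buses you know to be safe:
# List the I2C buses known to the kernel, then probe bus 0 for devices.
i2cdetect -l
i2cdetect -y 0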
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/i2c-tools
A.15. taskset
A.15. taskset The taskset tool is provided by the util-linux package. It allows administrators to retrieve and set the processor affinity of a running process, or launch a process with a specified processor affinity. Important taskset does not guarantee local memory allocation. If you require the additional performance benefits of local memory allocation, Red Hat recommends using numactl instead of taskset. To set the CPU affinity of a running process, run the following command: Replace processors with a comma-delimited list of processors or ranges of processors (for example, 1,3,5-7 ). Replace pid with the process identifier of the process that you want to reconfigure. To launch a process with a specified affinity, run the following command: Replace processors with a comma-delimited list of processors or ranges of processors. Replace application with the command, options, and arguments of the application you want to run. For more information about taskset , see the man page:
[ "taskset -pc processors pid", "taskset -c processors -- application", "man taskset" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Tool_Reference-taskset
Chapter 1. Configure data sources
Chapter 1. Configure data sources Use a unified configuration model to define data sources for Java Database Connectivity (JDBC) and Reactive drivers in Quarkus. Applications use datasources to access relational databases. Quarkus provides a unified configuration model to define datasources for Java Database Connectivity (JDBC) and Reactive database drivers. Quarkus uses Agroal and Vert.x to provide high-performance, scalable data source connection pooling for JDBC and reactive drivers. The jdbc-* and reactive-* extensions provide build time optimizations and integrate configured data sources with Quarkus features like security, health checks, and metrics. For more information about consuming and using a reactive datasource, see the Quarkus Reactive SQL clients guide. Additionally, refer to the Quarkus Hibernate ORM guide for information on consuming and using a JDBC datasource. 1.1. Get started with configuring datasources in Quarkus For users familiar with the fundamentals, this section provides an overview and code samples to set up data sources quickly. For more advanced configuration with examples, see References . 1.1.1. Zero-config setup in development mode Quarkus simplifies database configuration by offering the Dev Services feature, enabling zero-config database setup for testing or running in development (dev) mode. In dev mode, the suggested approach is to use DevServices and let Quarkus handle the database for you, whereas for production mode, you provide explicit database configuration details pointing to a database managed outside of Quarkus. To use Dev Services, add the appropriate driver extension, such as jdbc-postgresql , for your desired database type to the pom.xml file. In dev mode, if you do not provide any explicit database connection details, Quarkus automatically handles the database setup and provides the wiring between the application and the database. If you provide user credentials, the underlying database will be configured to use them. This is useful if you want to connect to the database with an external tool. To use this feature, ensure a Docker or Podman container runtime is installed, depending on the database type. Certain databases, such as H2, operate in in-memory mode and do not require a container runtime. Tip Prefix the actual connection details for prod mode with %prod. to ensure they are not applied in dev mode. For more information, see the Profiles section of the "Configuration reference" guide. For more information about Dev Services, see Dev Services overview . For more details and optional configurations, see Dev Services for databases . 1.1.2. Configure a JDBC datasource Add the correct JDBC extension for the database of your choice. jdbc-db2 jdbc-derby jdbc-h2 jdbc-mariadb jdbc-mssql jdbc-mysql jdbc-oracle jdbc-postgresql Configure your JDBC datasource: quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test quarkus.datasource.jdbc.max-size=16 1 This configuration value is only required if there is more than one database extension on the classpath. If only one viable extension is available, Quarkus assumes this is the correct one. When you add a driver to the test scope, Quarkus automatically includes the specified driver in testing. 1.1.2.1. 
JDBC connection pool size adjustment To protect your database from overloading during load peaks, size the pool adequately to throttle the database load. The optimal pool size depends on many factors, such as the number of parallel application users or the nature of the workload. Be aware that setting the pool size too low might cause some requests to time out while waiting for a connection. For more information about pool size adjustment properties, see the JDBC configuration reference section. 1.1.3. Configure a reactive datasource Add the correct reactive extension for the database of your choice. reactive-db2-client reactive-mssql-client reactive-mysql-client reactive-oracle-client reactive-pg-client Configure your reactive datasource: quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20 1 This configuration value is only required if there is more than one Reactive driver extension on the classpath. 1.2. Configure datasources The following section describes the configuration for single or multiple datasources. For simplicity, we will reference a single datasource as the default (unnamed) datasource. 1.2.1. Configure a single datasource A datasource can be either a JDBC datasource, reactive, or both. This depends on the configuration and the selection of project extensions. Define a datasource with the following configuration property, where db-kind defines which database platform to connect to, for example, h2 : quarkus.datasource.db-kind=h2 Quarkus deduces the JDBC driver class it needs to use from the specified value of the db-kind database platform attribute. Note This step is required only if your application depends on multiple database drivers. If the application operates with a single driver, this driver is detected automatically. Quarkus currently includes the following built-in database kinds: DB2: db2 Derby: derby H2: h2 MariaDB: mariadb Microsoft SQL Server: mssql MySQL: mysql Oracle: oracle PostgreSQL: postgresql , pgsql or pg To use a database kind that is not built-in, use other and define the JDBC driver explicitly Note You can use any JDBC driver in a Quarkus app in JVM mode as described in Using other databases . However, using a non-built-in database kind is unlikely to work when compiling your application to a native executable. For native executable builds, it is recommended to either use the available JDBC Quarkus extensions or contribute a custom extension for your specific driver. Configure the following properties to define credentials: quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> You can also retrieve the password from Vault by using a credential provider for your datasource. Until now, the configuration has been the same regardless of whether you are using a JDBC or a reactive driver. When you have defined the database kind and the credentials, the rest depends on what type of driver you are using. It is possible to use JDBC and a reactive driver simultaneously. 1.2.1.1. JDBC datasource JDBC is the most common database connection pattern, typically needed when used in combination with non-reactive Hibernate ORM. 
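Before looking at the JDBC-specific setup, the pool-size guidance above can be made concrete with a short sketch; the property names are standard Agroal options, but the values are illustrative assumptions rather than recommendations for any particular workload:
# Pool tuning for the default JDBC datasource (values are examples only).
quarkus.datasource.jdbc.min-size=2
quarkus.datasource.jdbc.initial-size=2
quarkus.datasource.jdbc.max-size=16
# Time (in seconds) a request waits for a connection before timing out.
quarkus.datasource.jdbc.acquisition-timeout=5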
To use a JDBC datasource, start with adding the necessary dependencies: For use with a built-in JDBC driver, choose and add the Quarkus extension for your relational database driver from the list below: Derby - jdbc-derby H2 - jdbc-h2 Note H2 and Derby databases can be configured to run in "embedded mode"; however, the Derby extension does not support compiling the embedded database engine into native executables. Read Testing with in-memory databases for suggestions regarding integration testing. DB2 - jdbc-db2 MariaDB - jdbc-mariadb Microsoft SQL Server - jdbc-mssql MySQL - jdbc-mysql Oracle - jdbc-oracle PostgreSQL - jdbc-postgresql Other JDBC extensions, such as SQLite and its documentation , can be found in the Quarkiverse . For example, to add the PostgreSQL driver dependency: ./mvnw quarkus:add-extension -Dextensions="jdbc-postgresql" Note Using a built-in JDBC driver extension automatically includes the Agroal extension, which is the JDBC connection pool implementation applicable for custom and built-in JDBC drivers. However, for custom drivers, Agroal needs to be added explicitly. For use with a custom JDBC driver, add the quarkus-agroal dependency to your project alongside the extension for your relational database driver: ./mvnw quarkus:add-extension -Dextensions="agroal" To use a JDBC driver for another database, use a database with no built-in extension or with a different driver . Configure the JDBC connection by defining the JDBC URL property: quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test Note Note the jdbc prefix in the property name. All the configuration properties specific to JDBC have the jdbc prefix. For reactive datasources, the prefix is reactive . For more information about configuring JDBC, see JDBC URL format reference and Quarkus extensions and database drivers reference . 1.2.1.1.1. Custom databases and drivers If you need to connect to a database for which Quarkus does not provide an extension with the JDBC driver, you can use a custom driver instead. For example, if you are using the OpenTracing JDBC driver in your project. Without an extension, the driver will work correctly in any Quarkus app running in JVM mode. However, the driver is unlikely to work when compiling your application to a native executable. If you plan to make a native executable, use the existing JDBC Quarkus extensions, or contribute one for your driver. An example with the OpenTracing driver: quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver An example for defining access to a database with no built-in support in JVM mode: quarkus.datasource.db-kind=other quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver quarkus.datasource.jdbc.url=jdbc:oracle:thin:@192.168.1.12:1521/ORCL_SVC quarkus.datasource.username=scott quarkus.datasource.password=tiger For all the details about the JDBC configuration options and configuring other aspects, such as the connection pool size, refer to the JDBC configuration reference section. 1.2.1.1.2. Consuming the datasource With Hibernate ORM, the Hibernate layer automatically picks up the datasource and uses it. For the in-code access to the datasource, obtain it as any other bean as follows: @Inject AgroalDataSource defaultDataSource; In the above example, the type is AgroalDataSource , a javax.sql.DataSource subtype. Because of this, you can also use javax.sql.DataSource as the injected type. 1.2.1.2. 
Reactive datasource Quarkus offers several reactive clients for use with a reactive datasource. Add the corresponding extension to your application: DB2: quarkus-reactive-db2-client MariaDB/MySQL: quarkus-reactive-mysql-client Microsoft SQL Server: quarkus-reactive-mssql-client Oracle: quarkus-reactive-oracle-client PostgreSQL: quarkus-reactive-pg-client The installed extension must be consistent with the quarkus.datasource.db-kind you define in your datasource configuration. After adding the driver, configure the connection URL and define a proper size for your connection pool. quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20 1.2.1.2.1. Reactive connection pool size adjustment To protect your database from overloading during load peaks, size the pool adequately to throttle the database load. The proper size always depends on many factors, such as the number of parallel application users or the nature of the workload. Be aware that setting the pool size too low might cause some requests to time out while waiting for a connection. For more information about pool size adjustment properties, see the Reactive datasource configuration reference section. 1.2.1.3. JDBC and reactive datasources simultaneously When a JDBC extension - along with Agroal - and a reactive datasource extension handling the given database kind are included, they will both be created by default. To disable the JDBC datasource explicitly: quarkus.datasource.jdbc=false To disable the reactive datasource explicitly: quarkus.datasource.reactive=false Tip In most cases, the configuration above will be optional as either a JDBC driver or a reactive datasource extension will be present, not both. 1.2.2. Configure multiple datasources Note The Hibernate ORM extension supports defining persistence units by using configuration properties. For each persistence unit, point to the datasource of your choice. Defining multiple datasources works like defining a single datasource, with one important change - you have to specify a name (configuration key) for each datasource. The following example provides three different datasources: the default one a datasource named users a datasource named inventory Each with its configuration: quarkus.datasource.db-kind=h2 quarkus.datasource.username=username-default quarkus.datasource.jdbc.url=jdbc:h2:mem:default quarkus.datasource.jdbc.max-size=13 quarkus.datasource.users.db-kind=h2 quarkus.datasource.users.username=username1 quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users quarkus.datasource.users.jdbc.max-size=11 quarkus.datasource.inventory.db-kind=h2 quarkus.datasource.inventory.username=username2 quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory quarkus.datasource.inventory.jdbc.max-size=12 Notice there is an extra section in the configuration key. The syntax is as follows: quarkus.datasource.[optional name.][datasource property] . Note Even when only one database extension is installed, named databases need to specify at least one build-time property so that Quarkus can detect them. Generally, this is the db-kind property, but you can also specify Dev Services properties to create named datasources according to the Dev Services for Databases guide. 1.2.2.1. Named datasource injection When using multiple datasources, each DataSource also has the io.quarkus.agroal.DataSource qualifier with the name of the datasource as the value. 
By using the properties mentioned in the section to configure three different datasources, inject each one of them as follows: @Inject AgroalDataSource defaultDataSource; @Inject @DataSource("users") AgroalDataSource usersDataSource; @Inject @DataSource("inventory") AgroalDataSource inventoryDataSource; 1.3. Datasource integrations 1.3.1. Datasource health check If you use the quarkus-smallrye-health extension, the quarkus-agroal and reactive client extensions automatically add a readiness health check to validate the datasource. When you access your application's health readiness endpoint, /q/health/ready by default, you receive information about the datasource validation status. If you have multiple datasources, all datasources are checked, and if a single datasource validation failure occurs, the status changes to DOWN . This behavior can be disabled by using the quarkus.datasource.health.enabled property. To exclude only a particular datasource from the health check, use: quarkus.datasource."datasource-name".health-exclude=true 1.3.2. Datasource metrics If you are using the quarkus-micrometer or quarkus-smallrye-metrics extension, quarkus-agroal can contribute some datasource-related metrics to the metric registry. This can be activated by setting the quarkus.datasource.metrics.enabled property to true . For the exposed metrics to contain any actual values, a metric collection must be enabled internally by the Agroal mechanisms. By default, this metric collection mechanism is enabled for all datasources when a metrics extension is present, and metrics for the Agroal extension are enabled. To disable metrics for a particular data source, set quarkus.datasource.jdbc.enable-metrics to false , or apply quarkus.datasource.<datasource name>.jdbc.enable-metrics for a named datasource. This disables collecting the metrics and exposing them in the /q/metrics endpoint if the mechanism to collect them is disabled. Conversely, setting quarkus.datasource.jdbc.enable-metrics to true , or quarkus.datasource.<datasource name>.jdbc.enable-metrics for a named datasource explicitly enables metrics collection even if a metrics extension is not in use. This can be useful if you need to access the collected metrics programmatically. They are available after calling dataSource.getMetrics() on an injected AgroalDataSource instance. If the metrics collection for this datasource is disabled, all values result in zero. 1.3.3. Narayana transaction manager integration Integration is automatic if the Narayana JTA extension is also available. You can override this by setting the transactions configuration property: quarkus.datasource.jdbc.transactions for default unnamed datasource quarkus.datasource. <datasource-name> .jdbc.transactions for named datasource For more information, see the Configuration reference section below. To facilitate the storage of transaction logs in a database by using JDBC, see Configuring transaction logs to be stored in a datasource section of the Using transactions in Quarkus guide. 1.3.3.1. Named datasources When using Dev Services, the default datasource will always be created, but to specify a named datasource, you need to have at least one build time property so Quarkus can detect how to create the datasource. You will usually specify the db-kind property or explicitly enable Dev Services by setting quarkus.datasource."name".devservices.enabled=true . 1.3.4. 
Testing with in-memory databases Some databases like H2 and Derby are commonly used in the embedded mode as a facility to run integration tests quickly. The recommended approach is to use the real database you intend to use in production, especially when Dev Services provide a zero-config database for testing , and running tests against a container is relatively quick and produces expected results on an actual environment. However, it is also possible to use JVM-powered databases for scenarios when the ability to run simple integration tests is required. 1.3.4.1. Support and limitations Embedded databases (H2 and Derby) work in JVM mode. For native mode, the following limitations apply: Derby cannot be embedded into the application in native mode. However, the Quarkus Derby extension allows native compilation of the Derby JDBC client , supporting remote connections. Embedding H2 within your native image is not recommended. Consider using an alternative approach, for example, using a remote connection to a separate database instead. 1.3.4.2. Run an integration test Add a dependency on the artifacts providing the additional tools that are under the following Maven coordinates: io.quarkus:quarkus-test-h2 for H2 io.quarkus:quarkus-test-derby for Derby This will allow you to test your application even when it is compiled into a native executable while the database will run as a JVM process. Add the following specific annotation on any class in your integration tests for running integration tests in both JVM or native executables: @QuarkusTestResource(H2DatabaseTestResource.class) @QuarkusTestResource(DerbyDatabaseTestResource.class) This ensures that the test suite starts and terminates the managed database in a separate process as required for test execution. H2 example package my.app.integrationtests.db; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.h2.H2DatabaseTestResource; @QuarkusTestResource(H2DatabaseTestResource.class) public class TestResources { } Configure the connection to the managed database: quarkus.datasource.db-kind=h2 quarkus.datasource.jdbc.url=jdbc:h2:tcp://localhost/mem:test 1.4. References 1.4.1. Common datasource configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.db-kind The kind of database we will connect to (e.g. h2, postgresql... ). Environment variable: QUARKUS_DATASOURCE_DB_KIND string quarkus.datasource.db-version The version of the database we will connect to (e.g. '10.0'). Caution The version number set here should follow the same numbering scheme as the string returned by java.sql.DatabaseMetaData#getDatabaseProductVersion() for your database's JDBC driver. This numbering scheme may be different from the most popular one for your database; for example Microsoft SQL Server 2016 would be version 13 . As a rule, the version set here should be as high as possible, but must be lower than or equal to the version of any database your application will connect to. A high version will allow better performance and using more features (e.g. Hibernate ORM may generate more efficient SQL, avoid workarounds and take advantage of more database features), but if it is higher than the version of the database you want to connect to, it may lead to runtime exceptions (e.g. Hibernate ORM may generate invalid SQL that your database will reject). 
Some extensions (like the Hibernate ORM extension) will try to check this version against the actual database version on startup, leading to a startup failure when the actual version is lower or simply a warning in case the database cannot be reached. The default for this property is specific to each extension; the Hibernate ORM extension will default to the oldest version it supports. Environment variable: QUARKUS_DATASOURCE_DB_VERSION string quarkus.datasource.devservices.enabled If DevServices has been explicitly enabled or disabled. DevServices is generally enabled by default unless an existing configuration is present. When DevServices is enabled, Quarkus will attempt to automatically configure and start a database when running in Dev or Test mode. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_ENABLED boolean quarkus.datasource.devservices.image-name The container image name for container-based DevServices providers. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_IMAGE_NAME string quarkus.datasource.devservices.port Optional fixed port the dev service will listen to. If not defined, the port will be chosen randomly. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_PORT int quarkus.datasource.devservices.command The container start command to use for container-based DevServices providers. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_COMMAND string quarkus.datasource.devservices.db-name The database name to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_DB_NAME string quarkus.datasource.devservices.username The username to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_USERNAME string quarkus.datasource.devservices.password The password to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_PASSWORD string quarkus.datasource.devservices.init-script-path The path to a SQL script to be loaded from the classpath and applied to the Dev Service database. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_INIT_SCRIPT_PATH string quarkus.datasource.health-exclude Whether this particular data source should be excluded from the health check if the general health check for data sources is enabled. By default, the health check includes all configured data sources (if it is enabled). Environment variable: QUARKUS_DATASOURCE_HEALTH_EXCLUDE boolean false quarkus.datasource.health.enabled Whether or not a health check is published in case the smallrye-health extension is present. This is a global setting and is not specific to a datasource. Environment variable: QUARKUS_DATASOURCE_HEALTH_ENABLED boolean true quarkus.datasource.metrics.enabled Whether or not datasource metrics are published in case a metrics extension is present. This is a global setting and is not specific to a datasource. Note This is different from the "jdbc.enable-metrics" property that needs to be set on the JDBC datasource level to enable collection of metrics for that datasource. 
Environment variable: QUARKUS_DATASOURCE_METRICS_ENABLED boolean false quarkus.datasource.username The datasource username Environment variable: QUARKUS_DATASOURCE_USERNAME string quarkus.datasource.password The datasource password Environment variable: QUARKUS_DATASOURCE_PASSWORD string quarkus.datasource.credentials-provider The credentials provider name Environment variable: QUARKUS_DATASOURCE_CREDENTIALS_PROVIDER string quarkus.datasource.credentials-provider-name The credentials provider bean name. This is a bean name (as in @Named ) of a bean that implements CredentialsProvider . It is used to select the credentials provider bean when multiple exist. This is unnecessary when there is only one credentials provider available. For Vault, the credentials provider bean name is vault-credentials-provider . Environment variable: QUARKUS_DATASOURCE_CREDENTIALS_PROVIDER_NAME string quarkus.datasource.devservices.container-env Environment variables that are passed to the container. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_CONTAINER_ENV Map<String,String> quarkus.datasource.devservices.container-properties Generic properties that are passed for additional container configuration. Properties defined here are database-specific and are interpreted specifically in each database dev service implementation. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_CONTAINER_PROPERTIES Map<String,String> quarkus.datasource.devservices.properties Generic properties that are added to the database connection URL. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_PROPERTIES Map<String,String> quarkus.datasource.devservices.volumes The volumes to be mapped to the container. The map key corresponds to the host location; the map value is the container location. If the host location starts with "classpath:", the mapping loads the resource from the classpath with read-only permission. When using a file system location, the volume will be generated with read-write permission, potentially leading to data loss or modification in your file system. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE_DEVSERVICES_VOLUMES Map<String,String> Additional named datasources Type Default quarkus.datasource."datasource-name".db-kind The kind of database we will connect to (e.g. h2, postgresql... ). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DB_KIND string quarkus.datasource."datasource-name".db-version The version of the database we will connect to (e.g. '10.0'). Caution The version number set here should follow the same numbering scheme as the string returned by java.sql.DatabaseMetaData#getDatabaseProductVersion() for your database's JDBC driver. This numbering scheme may be different from the most popular one for your database; for example Microsoft SQL Server 2016 would be version 13 . As a rule, the version set here should be as high as possible, but must be lower than or equal to the version of any database your application will connect to. A high version will allow better performance and using more features (e.g. Hibernate ORM may generate more efficient SQL, avoid workarounds and take advantage of more database features), but if it is higher than the version of the database you want to connect to, it may lead to runtime exceptions (e.g. Hibernate ORM may generate invalid SQL that your database will reject). 
Some extensions (like the Hibernate ORM extension) will try to check this version against the actual database version on startup, leading to a startup failure when the actual version is lower or simply a warning in case the database cannot be reached. The default for this property is specific to each extension; the Hibernate ORM extension will default to the oldest version it supports. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DB_VERSION string quarkus.datasource."datasource-name".devservices.enabled If DevServices has been explicitly enabled or disabled. DevServices is generally enabled by default unless an existing configuration is present. When DevServices is enabled, Quarkus will attempt to automatically configure and start a database when running in Dev or Test mode. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_ENABLED boolean quarkus.datasource."datasource-name".devservices.image-name The container image name for container-based DevServices providers. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_IMAGE_NAME string quarkus.datasource."datasource-name".devservices.container-env Environment variables that are passed to the container. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_CONTAINER_ENV Map<String,String> quarkus.datasource."datasource-name".devservices.container-properties Generic properties that are passed for additional container configuration. Properties defined here are database-specific and are interpreted specifically in each database dev service implementation. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_CONTAINER_PROPERTIES Map<String,String> quarkus.datasource."datasource-name".devservices.properties Generic properties that are added to the database connection URL. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_PROPERTIES Map<String,String> quarkus.datasource."datasource-name".devservices.port Optional fixed port the dev service will listen to. If not defined, the port will be chosen randomly. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_PORT int quarkus.datasource."datasource-name".devservices.command The container start command to use for container-based DevServices providers. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_COMMAND string quarkus.datasource."datasource-name".devservices.db-name The database name to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_DB_NAME string quarkus.datasource."datasource-name".devservices.username The username to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_USERNAME string quarkus.datasource."datasource-name".devservices.password The password to use if this Dev Service supports overriding it. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_PASSWORD string quarkus.datasource."datasource-name".devservices.init-script-path The path to a SQL script to be loaded from the classpath and applied to the Dev Service database. This has no effect if the provider is not a container-based database, such as H2 or Derby. 
Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_INIT_SCRIPT_PATH string quarkus.datasource."datasource-name".devservices.volumes The volumes to be mapped to the container. The map key corresponds to the host location; the map value is the container location. If the host location starts with "classpath:", the mapping loads the resource from the classpath with read-only permission. When using a file system location, the volume will be generated with read-write permission, potentially leading to data loss or modification in your file system. This has no effect if the provider is not a container-based database, such as H2 or Derby. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__DEVSERVICES_VOLUMES Map<String,String> quarkus.datasource."datasource-name".health-exclude Whether this particular data source should be excluded from the health check if the general health check for data sources is enabled. By default, the health check includes all configured data sources (if it is enabled). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__HEALTH_EXCLUDE boolean false quarkus.datasource."datasource-name".username The datasource username Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__USERNAME string quarkus.datasource."datasource-name".password The datasource password Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__PASSWORD string quarkus.datasource."datasource-name".credentials-provider The credentials provider name Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__CREDENTIALS_PROVIDER string quarkus.datasource."datasource-name".credentials-provider-name The credentials provider bean name. This is a bean name (as in @Named ) of a bean that implements CredentialsProvider . It is used to select the credentials provider bean when multiple exist. This is unnecessary when there is only one credentials provider available. For Vault, the credentials provider bean name is vault-credentials-provider . Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__CREDENTIALS_PROVIDER_NAME string 1.4.2. JDBC configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.jdbc If we create a JDBC datasource for this datasource. Environment variable: QUARKUS_DATASOURCE_JDBC boolean true quarkus.datasource.jdbc.driver The datasource driver class name Environment variable: QUARKUS_DATASOURCE_JDBC_DRIVER string quarkus.datasource.jdbc.transactions Whether we want to use regular JDBC transactions, XA, or disable all transactional capabilities. When enabling XA you will need a driver implementing javax.sql.XADataSource . Environment variable: QUARKUS_DATASOURCE_JDBC_TRANSACTIONS enabled , xa , disabled enabled quarkus.datasource.jdbc.enable-metrics Enable datasource metrics collection. If unspecified, collecting metrics will be enabled by default if a metrics extension is active. Environment variable: QUARKUS_DATASOURCE_JDBC_ENABLE_METRICS boolean quarkus.datasource.jdbc.tracing Enable JDBC tracing. Disabled by default. Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING boolean false quarkus.datasource.jdbc.telemetry Enable OpenTelemetry JDBC instrumentation. Environment variable: QUARKUS_DATASOURCE_JDBC_TELEMETRY boolean false quarkus.datasource.jdbc.url The datasource URL Environment variable: QUARKUS_DATASOURCE_JDBC_URL string quarkus.datasource.jdbc.initial-size The initial size of the pool. 
Usually you will want to set the initial size to match at least the minimal size, but this is not enforced so to allow for architectures which prefer a lazy initialization of the connections on boot, while being able to sustain a minimal pool size after boot. Environment variable: QUARKUS_DATASOURCE_JDBC_INITIAL_SIZE int quarkus.datasource.jdbc.min-size The datasource pool minimum size Environment variable: QUARKUS_DATASOURCE_JDBC_MIN_SIZE int 0 quarkus.datasource.jdbc.max-size The datasource pool maximum size Environment variable: QUARKUS_DATASOURCE_JDBC_MAX_SIZE int 20 quarkus.datasource.jdbc.background-validation-interval The interval at which we validate idle connections in the background. Set to 0 to disable background validation. Environment variable: QUARKUS_DATASOURCE_JDBC_BACKGROUND_VALIDATION_INTERVAL Duration 2M quarkus.datasource.jdbc.foreground-validation-interval Perform foreground validation on connections that have been idle for longer than the specified interval. Environment variable: QUARKUS_DATASOURCE_JDBC_FOREGROUND_VALIDATION_INTERVAL Duration quarkus.datasource.jdbc.acquisition-timeout The timeout before cancelling the acquisition of a new connection Environment variable: QUARKUS_DATASOURCE_JDBC_ACQUISITION_TIMEOUT Duration 5 quarkus.datasource.jdbc.leak-detection-interval The interval at which we check for connection leaks. Environment variable: QUARKUS_DATASOURCE_JDBC_LEAK_DETECTION_INTERVAL Duration This feature is disabled by default. quarkus.datasource.jdbc.idle-removal-interval The interval at which we try to remove idle connections. Environment variable: QUARKUS_DATASOURCE_JDBC_IDLE_REMOVAL_INTERVAL Duration 5M quarkus.datasource.jdbc.max-lifetime The max lifetime of a connection. Environment variable: QUARKUS_DATASOURCE_JDBC_MAX_LIFETIME Duration By default, there is no restriction on the lifespan of a connection. quarkus.datasource.jdbc.transaction-isolation-level The transaction isolation level. Environment variable: QUARKUS_DATASOURCE_JDBC_TRANSACTION_ISOLATION_LEVEL undefined , none , read-uncommitted , read-committed , repeatable-read , serializable quarkus.datasource.jdbc.extended-leak-report Collect and display extra troubleshooting info on leaked connections. Environment variable: QUARKUS_DATASOURCE_JDBC_EXTENDED_LEAK_REPORT boolean false quarkus.datasource.jdbc.flush-on-close Allows connections to be flushed upon return to the pool. It's not enabled by default. Environment variable: QUARKUS_DATASOURCE_JDBC_FLUSH_ON_CLOSE boolean false quarkus.datasource.jdbc.detect-statement-leaks When enabled, Agroal will be able to produce a warning when a connection is returned to the pool without the application having closed all open statements. This is unrelated with tracking of open connections. Disable for peak performance, but only when there's high confidence that no leaks are happening. Environment variable: QUARKUS_DATASOURCE_JDBC_DETECT_STATEMENT_LEAKS boolean true quarkus.datasource.jdbc.new-connection-sql Query executed when first using a connection. Environment variable: QUARKUS_DATASOURCE_JDBC_NEW_CONNECTION_SQL string quarkus.datasource.jdbc.validation-query-sql Query executed to validate a connection. Environment variable: QUARKUS_DATASOURCE_JDBC_VALIDATION_QUERY_SQL string quarkus.datasource.jdbc.pooling-enabled Disable pooling to prevent reuse of Connections. Use this when an external pool manages the life-cycle of Connections. 
Environment variable: QUARKUS_DATASOURCE_JDBC_POOLING_ENABLED boolean true quarkus.datasource.jdbc.transaction-requirement Require an active transaction when acquiring a connection. Recommended for production. WARNING: Some extensions acquire connections without holding a transaction for things like schema updates and schema validation. Setting this setting to STRICT may lead to failures in those cases. Environment variable: QUARKUS_DATASOURCE_JDBC_TRANSACTION_REQUIREMENT off , warn , strict quarkus.datasource.jdbc.tracing.enabled Enable JDBC tracing. Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING_ENABLED boolean false if quarkus.datasource.jdbc.tracing=false and true if quarkus.datasource.jdbc.tracing=true quarkus.datasource.jdbc.tracing.trace-with-active-span-only Trace calls with active Spans only Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING_TRACE_WITH_ACTIVE_SPAN_ONLY boolean false quarkus.datasource.jdbc.tracing.ignore-for-tracing Ignore specific queries from being traced Environment variable: QUARKUS_DATASOURCE_JDBC_TRACING_IGNORE_FOR_TRACING string Ignore specific queries from being traced, multiple queries can be specified separated by semicolon, double quotes should be escaped with \ quarkus.datasource.jdbc.telemetry.enabled Enable OpenTelemetry JDBC instrumentation. Environment variable: QUARKUS_DATASOURCE_JDBC_TELEMETRY_ENABLED boolean false if quarkus.datasource.jdbc.telemetry=false and true if quarkus.datasource.jdbc.telemetry=true quarkus.datasource.jdbc.additional-jdbc-properties Other unspecified properties to be passed to the JDBC driver when creating new connections. Environment variable: QUARKUS_DATASOURCE_JDBC_ADDITIONAL_JDBC_PROPERTIES Map<String,String> Additional named datasources Type Default quarkus.datasource."datasource-name".jdbc If we create a JDBC datasource for this datasource. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC boolean true quarkus.datasource."datasource-name".jdbc.driver The datasource driver class name Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_DRIVER string quarkus.datasource."datasource-name".jdbc.transactions Whether we want to use regular JDBC transactions, XA, or disable all transactional capabilities. When enabling XA you will need a driver implementing javax.sql.XADataSource . Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRANSACTIONS enabled , xa , disabled enabled quarkus.datasource."datasource-name".jdbc.enable-metrics Enable datasource metrics collection. If unspecified, collecting metrics will be enabled by default if a metrics extension is active. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_ENABLE_METRICS boolean quarkus.datasource."datasource-name".jdbc.tracing Enable JDBC tracing. Disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRACING boolean false quarkus.datasource."datasource-name".jdbc.telemetry Enable OpenTelemetry JDBC instrumentation. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TELEMETRY boolean false quarkus.datasource."datasource-name".jdbc.url The datasource URL Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_URL string quarkus.datasource."datasource-name".jdbc.initial-size The initial size of the pool. 
Usually you will want to set the initial size to match at least the minimal size, but this is not enforced so to allow for architectures which prefer a lazy initialization of the connections on boot, while being able to sustain a minimal pool size after boot. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_INITIAL_SIZE int quarkus.datasource."datasource-name".jdbc.min-size The datasource pool minimum size Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_MIN_SIZE int 0 quarkus.datasource."datasource-name".jdbc.max-size The datasource pool maximum size Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_MAX_SIZE int 20 quarkus.datasource."datasource-name".jdbc.background-validation-interval The interval at which we validate idle connections in the background. Set to 0 to disable background validation. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_BACKGROUND_VALIDATION_INTERVAL Duration 2M quarkus.datasource."datasource-name".jdbc.foreground-validation-interval Perform foreground validation on connections that have been idle for longer than the specified interval. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_FOREGROUND_VALIDATION_INTERVAL Duration quarkus.datasource."datasource-name".jdbc.acquisition-timeout The timeout before cancelling the acquisition of a new connection Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_ACQUISITION_TIMEOUT Duration 5 quarkus.datasource."datasource-name".jdbc.leak-detection-interval The interval at which we check for connection leaks. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_LEAK_DETECTION_INTERVAL Duration This feature is disabled by default. quarkus.datasource."datasource-name".jdbc.idle-removal-interval The interval at which we try to remove idle connections. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_IDLE_REMOVAL_INTERVAL Duration 5M quarkus.datasource."datasource-name".jdbc.max-lifetime The max lifetime of a connection. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_MAX_LIFETIME Duration By default, there is no restriction on the lifespan of a connection. quarkus.datasource."datasource-name".jdbc.transaction-isolation-level The transaction isolation level. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRANSACTION_ISOLATION_LEVEL undefined , none , read-uncommitted , read-committed , repeatable-read , serializable quarkus.datasource."datasource-name".jdbc.extended-leak-report Collect and display extra troubleshooting info on leaked connections. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_EXTENDED_LEAK_REPORT boolean false quarkus.datasource."datasource-name".jdbc.flush-on-close Allows connections to be flushed upon return to the pool. It's not enabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_FLUSH_ON_CLOSE boolean false quarkus.datasource."datasource-name".jdbc.detect-statement-leaks When enabled, Agroal will be able to produce a warning when a connection is returned to the pool without the application having closed all open statements. This is unrelated with tracking of open connections. Disable for peak performance, but only when there's high confidence that no leaks are happening. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_DETECT_STATEMENT_LEAKS boolean true quarkus.datasource."datasource-name".jdbc.new-connection-sql Query executed when first using a connection. 
Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_NEW_CONNECTION_SQL string quarkus.datasource."datasource-name".jdbc.validation-query-sql Query executed to validate a connection. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_VALIDATION_QUERY_SQL string quarkus.datasource."datasource-name".jdbc.pooling-enabled Disable pooling to prevent reuse of Connections. Use this when an external pool manages the life-cycle of Connections. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_POOLING_ENABLED boolean true quarkus.datasource."datasource-name".jdbc.transaction-requirement Require an active transaction when acquiring a connection. Recommended for production. WARNING: Some extensions acquire connections without holding a transaction for things like schema updates and schema validation. Setting this setting to STRICT may lead to failures in those cases. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRANSACTION_REQUIREMENT off , warn , strict quarkus.datasource."datasource-name".jdbc.additional-jdbc-properties Other unspecified properties to be passed to the JDBC driver when creating new connections. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_ADDITIONAL_JDBC_PROPERTIES Map<String,String> quarkus.datasource."datasource-name".jdbc.tracing.enabled Enable JDBC tracing. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRACING_ENABLED boolean false if quarkus.datasource.jdbc.tracing=false and true if quarkus.datasource.jdbc.tracing=true quarkus.datasource."datasource-name".jdbc.tracing.trace-with-active-span-only Trace calls with active Spans only Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRACING_TRACE_WITH_ACTIVE_SPAN_ONLY boolean false quarkus.datasource."datasource-name".jdbc.tracing.ignore-for-tracing Ignore specific queries from being traced Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TRACING_IGNORE_FOR_TRACING string Ignore specific queries from being traced, multiple queries can be specified separated by semicolon, double quotes should be escaped with \ quarkus.datasource."datasource-name".jdbc.telemetry.enabled Enable OpenTelemetry JDBC instrumentation. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__JDBC_TELEMETRY_ENABLED boolean false if quarkus.datasource.jdbc.telemetry=false and true if quarkus.datasource.jdbc.telemetry=true About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() javadoc for more information. You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 1.4.3. JDBC URL reference Each of the supported databases contains different JDBC URL configuration options. The following section gives an overview of each database URL and a link to the official documentation. 1.4.3.1. DB2 jdbc:db2://<serverName>[:<portNumber>]/<databaseName>[:<key1>=<value>;[<key2>=<value2>;]] Example jdbc:db2://localhost:50000/MYDB:user=dbadm;password=dbadm; For more information on URL syntax and additional supported options, see the official documentation . 1.4.3.2. 
Derby jdbc:derby:[//serverName[:portNumber]/][memory:]databaseName[;property=value[;property=value]] Example jdbc:derby://localhost:1527/myDB , jdbc:derby:memory:myDB;create=true Derby is an embedded database that can run as a server, based on a file, or can run completely in memory. All of these options are available as listed above. For more information, see the official documentation . 1.4.3.3. H2 jdbc:h2:{ {.|mem:}[name] | [file:]fileName | {tcp|ssl}:[//]server[:port][,server2[:port]]/name }[;key=value... ] Example jdbc:h2:tcp://localhost/~/test , jdbc:h2:mem:myDB H2 is a database that can run in embedded or server mode. It can use a file storage or run entirely in memory. All of these options are available as listed above. For more information, see the official documentation . 1.4.3.4. MariaDB jdbc:mariadb:[replication:|failover:|sequential:|aurora:]//<hostDescription>[,<hostDescription>... ]/[database][?<key1>=<value1>[&<key2>=<value2>]] hostDescription:: <host>[:<portnumber>] or address=(host=<host>)[(port=<portnumber>)][(type=(master|slave))] Example jdbc:mariadb://localhost:3306/test For more information, see the official documentation . 1.4.3.5. Microsoft SQL server jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]] Example jdbc:sqlserver://localhost:1433;databaseName=AdventureWorks The Microsoft SQL Server JDBC driver works essentially the same as the others. For more information, see the official documentation . 1.4.3.6. MySQL jdbc:mysql:[replication:|failover:|sequential:|aurora:]//<hostDescription>[,<hostDescription>... ]/[database][?<key1>=<value1>[&<key2>=<value2>]] hostDescription:: <host>[:<portnumber>] or address=(host=<host>)[(port=<portnumber>)][(type=(master|slave))] Example jdbc:mysql://localhost:3306/test For more information, see the official documentation . 1.4.3.6.1. MySQL limitations When compiling a Quarkus application to a native image, the MySQL support for JMX and Oracle Cloud Infrastructure (OCI) integrations are disabled as they are incompatible with GraalVM native images. The lack of JMX support is a natural consequence of running in native mode and is unlikely to be resolved. The integration with OCI is not supported. 1.4.3.7. Oracle jdbc:oracle:driver_type:@database_specifier Example jdbc:oracle:thin:@localhost:1521/ORCL_SVC For more information, see the official documentation . 1.4.3.8. PostgreSQL jdbc:postgresql:[//][host][:port][/database][?key=value... ] Example jdbc:postgresql://localhost/test The defaults for the different parts are as follows: host localhost port 5432 database same name as the username For more information about additional parameters, see the official documentation . 1.4.4. Quarkus extensions and database drivers reference The following tables list the built-in db-kind values, the corresponding Quarkus extensions, and the JDBC drivers used by those extensions. When using one of the built-in datasource kinds, the JDBC and Reactive drivers are resolved automatically to match the values from these tables. Table 1.1. 
Database platform kind to JDBC driver mapping Database kind Quarkus extension Drivers db2 quarkus-jdbc-db2 JDBC: com.ibm.db2.jcc.DB2Driver XA: com.ibm.db2.jcc.DB2XADataSource derby quarkus-jdbc-derby JDBC: org.apache.derby.jdbc.ClientDriver XA: org.apache.derby.jdbc.ClientXADataSource h2 quarkus-jdbc-h2 JDBC: org.h2.Driver XA: org.h2.jdbcx.JdbcDataSource mariadb quarkus-jdbc-mariadb JDBC: org.mariadb.jdbc.Driver XA: org.mariadb.jdbc.MySQLDataSource mssql quarkus-jdbc-mssql JDBC: com.microsoft.sqlserver.jdbc.SQLServerDriver XA: com.microsoft.sqlserver.jdbc.SQLServerXADataSource mysql quarkus-jdbc-mysql JDBC: com.mysql.cj.jdbc.Driver XA: com.mysql.cj.jdbc.MysqlXADataSource oracle quarkus-jdbc-oracle JDBC: oracle.jdbc.driver.OracleDriver XA: oracle.jdbc.xa.client.OracleXADataSource postgresql quarkus-jdbc-postgresql JDBC: org.postgresql.Driver XA: org.postgresql.xa.PGXADataSource Table 1.2. Database kind to Reactive driver mapping Database kind Quarkus extension Driver oracle reactive-oracle-client io.vertx.oracleclient.spi.OracleDriver mysql reactive-mysql-client io.vertx.mysqlclient.spi.MySQLDriver mssql reactive-mssql-client io.vertx.mssqlclient.spi.MSSQLDriver postgresql reactive-pg-client io.vertx.pgclient.spi.PgDriver db2 reactive-db2-client io.vertx.db2client.spi.DB2Driver Tip This automatic resolution is applicable in most cases so that driver configuration is not needed. 1.4.5. Reactive datasource configuration reference Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.reactive If we create a Reactive datasource for this datasource. Environment variable: QUARKUS_DATASOURCE_REACTIVE boolean true quarkus.datasource.reactive.cache-prepared-statements Whether prepared statements should be cached on the client side. Environment variable: QUARKUS_DATASOURCE_REACTIVE_CACHE_PREPARED_STATEMENTS boolean false quarkus.datasource.reactive.url The datasource URLs. If multiple values are set, this datasource will create a pool with a list of servers instead of a single server. The pool uses round-robin load balancing for server selection during connection establishment. Note that certain drivers might not accommodate multiple values in this context. Environment variable: QUARKUS_DATASOURCE_REACTIVE_URL list of string quarkus.datasource.reactive.max-size The datasource pool maximum size. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MAX_SIZE int 20 quarkus.datasource.reactive.event-loop-size When a new connection object is created, the pool assigns it an event loop. When #event-loop-size is set to a strictly positive value, the pool assigns as many event loops as specified, in a round-robin fashion. By default, the number of event loops configured or calculated by Quarkus is used. If #event-loop-size is set to zero or a negative value, the pool assigns the current event loop to the new connection. Environment variable: QUARKUS_DATASOURCE_REACTIVE_EVENT_LOOP_SIZE int quarkus.datasource.reactive.trust-all Whether all server certificates should be trusted. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_ALL boolean false quarkus.datasource.reactive.trust-certificate-pem PEM Trust config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PEM boolean false quarkus.datasource.reactive.trust-certificate-pem.certs Comma-separated list of the trust certificate files (Pem format). 
Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PEM_CERTS list of string quarkus.datasource.reactive.trust-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_JKS boolean false quarkus.datasource.reactive.trust-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_JKS_PATH string quarkus.datasource.reactive.trust-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_JKS_PASSWORD string quarkus.datasource.reactive.trust-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PFX boolean false quarkus.datasource.reactive.trust-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PFX_PATH string quarkus.datasource.reactive.trust-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE_REACTIVE_TRUST_CERTIFICATE_PFX_PASSWORD string quarkus.datasource.reactive.key-certificate-pem PEM Key/cert config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PEM boolean false quarkus.datasource.reactive.key-certificate-pem.keys Comma-separated list of the path to the key files (Pem format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PEM_KEYS list of string quarkus.datasource.reactive.key-certificate-pem.certs Comma-separated list of the path to the certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PEM_CERTS list of string quarkus.datasource.reactive.key-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_JKS boolean false quarkus.datasource.reactive.key-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_JKS_PATH string quarkus.datasource.reactive.key-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_JKS_PASSWORD string quarkus.datasource.reactive.key-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PFX boolean false quarkus.datasource.reactive.key-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PFX_PATH string quarkus.datasource.reactive.key-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE_REACTIVE_KEY_CERTIFICATE_PFX_PASSWORD string quarkus.datasource.reactive.reconnect-attempts The number of reconnection attempts when a pooled connection cannot be established on first try. Environment variable: QUARKUS_DATASOURCE_REACTIVE_RECONNECT_ATTEMPTS int 0 quarkus.datasource.reactive.reconnect-interval The interval between reconnection attempts when a pooled connection cannot be established on first try. Environment variable: QUARKUS_DATASOURCE_REACTIVE_RECONNECT_INTERVAL Duration PT1S quarkus.datasource.reactive.hostname-verification-algorithm The hostname verification algorithm to use in case the server's identity should be checked. Should be HTTPS, LDAPS or an empty string. 
Environment variable: QUARKUS_DATASOURCE_REACTIVE_HOSTNAME_VERIFICATION_ALGORITHM string quarkus.datasource.reactive.idle-timeout The maximum time a connection remains unused in the pool before it is closed. Environment variable: QUARKUS_DATASOURCE_REACTIVE_IDLE_TIMEOUT Duration no timeout quarkus.datasource.reactive.shared Set to true to share the pool among datasources. There can be multiple shared pools distinguished by name, when no specific name is set, the __vertx.DEFAULT name is used. Environment variable: QUARKUS_DATASOURCE_REACTIVE_SHARED boolean false quarkus.datasource.reactive.name Set the pool name, used when the pool is shared among datasources, otherwise ignored. Environment variable: QUARKUS_DATASOURCE_REACTIVE_NAME string quarkus.datasource.reactive.additional-properties Other unspecified properties to be passed through the Reactive SQL Client directly to the database when new connections are initiated. Environment variable: QUARKUS_DATASOURCE_REACTIVE_ADDITIONAL_PROPERTIES Map<String,String> Additional named datasources Type Default quarkus.datasource."datasource-name".reactive If we create a Reactive datasource for this datasource. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE boolean true quarkus.datasource."datasource-name".reactive.cache-prepared-statements Whether prepared statements should be cached on the client side. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_CACHE_PREPARED_STATEMENTS boolean false quarkus.datasource."datasource-name".reactive.url The datasource URLs. If multiple values are set, this datasource will create a pool with a list of servers instead of a single server. The pool uses round-robin load balancing for server selection during connection establishment. Note that certain drivers might not accommodate multiple values in this context. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_URL list of string quarkus.datasource."datasource-name".reactive.max-size The datasource pool maximum size. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MAX_SIZE int 20 quarkus.datasource."datasource-name".reactive.event-loop-size When a new connection object is created, the pool assigns it an event loop. When #event-loop-size is set to a strictly positive value, the pool assigns as many event loops as specified, in a round-robin fashion. By default, the number of event loops configured or calculated by Quarkus is used. If #event-loop-size is set to zero or a negative value, the pool assigns the current event loop to the new connection. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_EVENT_LOOP_SIZE int quarkus.datasource."datasource-name".reactive.trust-all Whether all server certificates should be trusted. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_ALL boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-pem PEM Trust config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PEM boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-pem.certs Comma-separated list of the trust certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PEM_CERTS list of string quarkus.datasource."datasource-name".reactive.trust-certificate-jks JKS config is disabled by default. 
Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_JKS boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_JKS_PATH string quarkus.datasource."datasource-name".reactive.trust-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_JKS_PASSWORD string quarkus.datasource."datasource-name".reactive.trust-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PFX boolean false quarkus.datasource."datasource-name".reactive.trust-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PFX_PATH string quarkus.datasource."datasource-name".reactive.trust-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_TRUST_CERTIFICATE_PFX_PASSWORD string quarkus.datasource."datasource-name".reactive.key-certificate-pem PEM Key/cert config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PEM boolean false quarkus.datasource."datasource-name".reactive.key-certificate-pem.keys Comma-separated list of the path to the key files (Pem format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PEM_KEYS list of string quarkus.datasource."datasource-name".reactive.key-certificate-pem.certs Comma-separated list of the path to the certificate files (Pem format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PEM_CERTS list of string quarkus.datasource."datasource-name".reactive.key-certificate-jks JKS config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_JKS boolean false quarkus.datasource."datasource-name".reactive.key-certificate-jks.path Path of the key file (JKS format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_JKS_PATH string quarkus.datasource."datasource-name".reactive.key-certificate-jks.password Password of the key file. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_JKS_PASSWORD string quarkus.datasource."datasource-name".reactive.key-certificate-pfx PFX config is disabled by default. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PFX boolean false quarkus.datasource."datasource-name".reactive.key-certificate-pfx.path Path to the key file (PFX format). Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PFX_PATH string quarkus.datasource."datasource-name".reactive.key-certificate-pfx.password Password of the key. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_KEY_CERTIFICATE_PFX_PASSWORD string quarkus.datasource."datasource-name".reactive.reconnect-attempts The number of reconnection attempts when a pooled connection cannot be established on first try. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_RECONNECT_ATTEMPTS int 0 quarkus.datasource."datasource-name".reactive.reconnect-interval The interval between reconnection attempts when a pooled connection cannot be established on first try. 
Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_RECONNECT_INTERVAL Duration PT1S quarkus.datasource."datasource-name".reactive.hostname-verification-algorithm The hostname verification algorithm to use in case the server's identity should be checked. Should be HTTPS, LDAPS or an empty string. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_HOSTNAME_VERIFICATION_ALGORITHM string quarkus.datasource."datasource-name".reactive.idle-timeout The maximum time a connection remains unused in the pool before it is closed. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_IDLE_TIMEOUT Duration no timeout quarkus.datasource."datasource-name".reactive.shared Set to true to share the pool among datasources. There can be multiple shared pools distinguished by name, when no specific name is set, the __vertx.DEFAULT name is used. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_SHARED boolean false quarkus.datasource."datasource-name".reactive.name Set the pool name, used when the pool is shared among datasources, otherwise ignored. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_NAME string quarkus.datasource."datasource-name".reactive.additional-properties Other unspecified properties to be passed through the Reactive SQL Client directly to the database when new connections are initiated. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_ADDITIONAL_PROPERTIES Map<String,String> About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() javadoc for more information. You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 1.4.5.1. Reactive DB2 configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.reactive.db2.ssl Whether SSL/TLS is enabled. Environment variable: QUARKUS_DATASOURCE_REACTIVE_DB2_SSL boolean false Additional named datasources Type Default quarkus.datasource."datasource-name".reactive.db2.ssl Whether SSL/TLS is enabled. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_DB2_SSL boolean false 1.4.5.2. Reactive MariaDB/MySQL specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.reactive.mysql.charset Charset for connections. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_CHARSET string quarkus.datasource.reactive.mysql.collation Collation for connections. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_COLLATION string quarkus.datasource.reactive.mysql.ssl-mode Desired security state of the connection to the server. See MySQL Reference Manual . 
Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_SSL_MODE disabled , preferred , required , verify-ca , verify-identity disabled quarkus.datasource.reactive.mysql.connection-timeout Connection timeout in seconds Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_CONNECTION_TIMEOUT int quarkus.datasource.reactive.mysql.authentication-plugin The authentication plugin the client should use. By default, it uses the plugin name specified by the server in the initial handshake packet. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_AUTHENTICATION_PLUGIN default , mysql-clear-password , mysql-native-password , sha256-password , caching-sha2-password default quarkus.datasource.reactive.mysql.pipelining-limit The maximum number of inflight database commands that can be pipelined. By default, pipelining is disabled. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_PIPELINING_LIMIT int quarkus.datasource.reactive.mysql.use-affected-rows Whether to return the number of rows matched by the WHERE clause in UPDATE statements, instead of the number of rows actually changed. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MYSQL_USE_AFFECTED_ROWS boolean false Additional named datasources Type Default quarkus.datasource."datasource-name".reactive.mysql.charset Charset for connections. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_CHARSET string quarkus.datasource."datasource-name".reactive.mysql.collation Collation for connections. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_COLLATION string quarkus.datasource."datasource-name".reactive.mysql.ssl-mode Desired security state of the connection to the server. See MySQL Reference Manual . Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_SSL_MODE disabled , preferred , required , verify-ca , verify-identity disabled quarkus.datasource."datasource-name".reactive.mysql.connection-timeout Connection timeout in seconds Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_CONNECTION_TIMEOUT int quarkus.datasource."datasource-name".reactive.mysql.authentication-plugin The authentication plugin the client should use. By default, it uses the plugin name specified by the server in the initial handshake packet. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_AUTHENTICATION_PLUGIN default , mysql-clear-password , mysql-native-password , sha256-password , caching-sha2-password default quarkus.datasource."datasource-name".reactive.mysql.pipelining-limit The maximum number of inflight database commands that can be pipelined. By default, pipelining is disabled. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_PIPELINING_LIMIT int quarkus.datasource."datasource-name".reactive.mysql.use-affected-rows Whether to return the number of rows matched by the WHERE clause in UPDATE statements, instead of the number of rows actually changed. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MYSQL_USE_AFFECTED_ROWS boolean false 1.4.5.3. Reactive Microsoft SQL server-specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.reactive.mssql.packet-size The desired size (in bytes) for TDS packets. Environment variable: QUARKUS_DATASOURCE_REACTIVE_MSSQL_PACKET_SIZE int quarkus.datasource.reactive.mssql.ssl Whether SSL/TLS is enabled. 
Environment variable: QUARKUS_DATASOURCE_REACTIVE_MSSQL_SSL boolean false Additional named datasources Type Default quarkus.datasource."datasource-name".reactive.mssql.packet-size The desired size (in bytes) for TDS packets. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MSSQL_PACKET_SIZE int quarkus.datasource."datasource-name".reactive.mssql.ssl Whether SSL/TLS is enabled. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_MSSQL_SSL boolean false 1.4.5.4. Reactive Oracle-specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Additional named datasources Type Default 1.4.5.5. Reactive PostgreSQL-specific configuration Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.datasource.reactive.postgresql.pipelining-limit The maximum number of inflight database commands that can be pipelined. Environment variable: QUARKUS_DATASOURCE_REACTIVE_POSTGRESQL_PIPELINING_LIMIT int quarkus.datasource.reactive.postgresql.ssl-mode SSL operating mode of the client. See Protection Provided in Different Modes . Environment variable: QUARKUS_DATASOURCE_REACTIVE_POSTGRESQL_SSL_MODE disable , allow , prefer , require , verify-ca , verify-full disable Additional named datasources Type Default quarkus.datasource."datasource-name".reactive.postgresql.pipelining-limit The maximum number of inflight database commands that can be pipelined. Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_POSTGRESQL_PIPELINING_LIMIT int quarkus.datasource."datasource-name".reactive.postgresql.ssl-mode SSL operating mode of the client. See Protection Provided in Different Modes . Environment variable: QUARKUS_DATASOURCE__DATASOURCE_NAME__REACTIVE_POSTGRESQL_SSL_MODE disable , allow , prefer , require , verify-ca , verify-full disable 1.4.6. Reactive datasource URL reference 1.4.6.1. DB2 db2://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example db2://dbuser:secretpassword@localhost:50000/mydb Currently, the client supports the following parameter keys: host port user password database Note Configuring parameters in the connection URL overrides the default properties. 1.4.6.2. Microsoft SQL server sqlserver://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example sqlserver://dbuser:secretpassword@localhost:1433/mydb Currently, the client supports the following parameter keys: host port user password database Note Configuring parameters in the connection URL overrides the default properties. 1.4.6.3. MySQL / MariaDB mysql://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example mysql://dbuser:secretpassword@localhost:3211/mydb Currently, the client supports the following parameter keys (case-insensitive): host port user password schema socket useAffectedRows Note Configuring parameters in the connection URL overrides the default properties. 1.4.6.4. Oracle 1.4.6.4.1. EZConnect format oracle:thin:@[[protocol:]//]host[:port][/service_name][:server_mode][/instance_name][?connection properties] Example oracle:thin:@mydbhost1:5521/mydbservice?connect_timeout=10sec 1.4.6.4.2. TNS alias format oracle:thin:@<alias_name>[?connection properties] Example oracle:thin:@prod_db?TNS_ADMIN=/work/tns/ 1.4.6.5.
PostgreSQL postgresql://[user[:[password]]@]host[:port][/database][?<key1>=<value1>[&<key2>=<value2>]] Example postgresql://dbuser:secretpassword@localhost:5432/mydb Currently, the client supports: the following parameter keys: host port user password dbname sslmode Additional properties, such as: application_name fallback_application_name search_path options Note Configuring parameters in the connection URL overrides the default properties.
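To make the reference above more concrete, the following application.properties sketch combines a default JDBC datasource with an additional named reactive datasource, using only properties documented in this chapter. The datasource name, credentials, URLs, and sizing values are placeholders chosen for illustration, not recommendations.
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=<your username>
quarkus.datasource.password=<your password>
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/mydb
quarkus.datasource.jdbc.min-size=2
quarkus.datasource.jdbc.max-size=20
quarkus.datasource.jdbc.acquisition-timeout=5
# Simplified Duration format: 2M is equivalent to PT2M (2 minutes)
quarkus.datasource.jdbc.background-validation-interval=2M
# Named reactive datasource; "inventory" is a hypothetical name
quarkus.datasource.inventory.db-kind=mariadb
quarkus.datasource.inventory.reactive.url=mysql://localhost:3306/inventory
quarkus.datasource.inventory.reactive.max-size=10
quarkus.datasource.inventory.reactive.reconnect-attempts=3
quarkus.datasource.inventory.reactive.reconnect-interval=PT2S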
[ "quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test quarkus.datasource.jdbc.max-size=16", "quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20", "quarkus.datasource.db-kind=h2", "quarkus.datasource.username=<your username> quarkus.datasource.password=<your password>", "./mvnw quarkus:add-extension -Dextensions=\"jdbc-postgresql\"", "./mvnw quarkus:add-extension -Dextensions=\"agroal\"", "quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test", "quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver", "quarkus.datasource.db-kind=other quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver quarkus.datasource.jdbc.url=jdbc:oracle:thin:@192.168.1.12:1521/ORCL_SVC quarkus.datasource.username=scott quarkus.datasource.password=tiger", "@Inject AgroalDataSource defaultDataSource;", "quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20", "quarkus.datasource.jdbc=false", "quarkus.datasource.reactive=false", "quarkus.datasource.db-kind=h2 quarkus.datasource.username=username-default quarkus.datasource.jdbc.url=jdbc:h2:mem:default quarkus.datasource.jdbc.max-size=13 quarkus.datasource.users.db-kind=h2 quarkus.datasource.users.username=username1 quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users quarkus.datasource.users.jdbc.max-size=11 quarkus.datasource.inventory.db-kind=h2 quarkus.datasource.inventory.username=username2 quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory quarkus.datasource.inventory.jdbc.max-size=12", "@Inject AgroalDataSource defaultDataSource; @Inject @DataSource(\"users\") AgroalDataSource usersDataSource; @Inject @DataSource(\"inventory\") AgroalDataSource inventoryDataSource;", "quarkus.datasource.\"datasource-name\".health-exclude=true", "package my.app.integrationtests.db; import io.quarkus.test.common.QuarkusTestResource; import io.quarkus.test.h2.H2DatabaseTestResource; @QuarkusTestResource(H2DatabaseTestResource.class) public class TestResources { }", "quarkus.datasource.db-kind=h2 quarkus.datasource.jdbc.url=jdbc:h2:tcp://localhost/mem:test" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/configure_data_sources/configure-data-sources
Chapter 5. Network ports and protocols
Chapter 5. Network ports and protocols Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server in order for it to work. Ensure that these ports are available and are not being blocked by the server firewall. The following architectural diagram is an example of a fully deployed Ansible Automation Platform with all possible components. The following tables show the default Red Hat Ansible Automation Platform destination ports required for each application. Note The following default destination ports and installer inventory listed are configurable. If you choose to configure them to suit your environment, you might experience a change in behavior. Table 5.1. PostgreSQL Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Remote access during installation 5432 TCP Postgres Inbound and Outbound pg_port Default port ALLOW connections from controller(s) to database port Table 5.2. Automation controller Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 80 TCP HTTP Inbound nginx_http_port UI/API 443 TCP HTTPS Inbound nginx_https_port UI/API 5432 TCP PostgreSQL Inbound and Outbound pg_port Open only if the internal database is used along with another component. Otherwise, this port should not be open Hybrid mode in a cluster 27199 TCP Receptor Inbound and Outbound receptor_listener_port ALLOW receptor listener port across all controllers for mandatory and automatic control plane clustering Table 5.3. Hop Nodes Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 27199 TCP Receptor Inbound and Outbound receptor_listener_port Mesh ALLOW connection from controller(s) to Receptor port Table 5.4. Execution Nodes Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 80/443 TCP SSH Inbound and Outbound Fixed value (maps to Table 5.7 Automation hub's "User interface" port) Allows execution nodes to pull the execution environment image from automation hub 27199 TCP Receptor Inbound and Outbound receptor_listener_port Mesh - Nodes directly peered to controllers. No hop nodes involved. 27199 is bi-directional for the execution nodes ALLOW connections from controller(s) to Receptor port (non-hop connected nodes) ALLOW connections from hop node(s) to Receptor port (if relayed through hop nodes) Table 5.5. Control Nodes Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 27199 TCP Receptor Inbound and Outbound receptor_listener_port Mesh - Nodes directly peered to controllers. Direct nodes involved. 27199 is bi-directional for execution nodes ENABLE connections from controller(s) to Receptor port for non-hop connected nodes ENABLE connections from hop node(s) to Receptor port if relayed through hop nodes 443 TCP Podman Inbound nginx_https_port UI/API Table 5.6. Hybrid Nodes Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 27199 TCP Receptor Inbound and Outbound receptor_listener_port Mesh - Nodes directly peered to controllers. No hop nodes involved. 
27199 is bi-directional for the execution nodes ENABLE connections from controller(s) to Receptor port for non-hop connected nodes ENABLE connections from hop node(s) to Receptor port if relayed through hop nodes 443 TCP Podman Inbound nginx_https_port UI/API Table 5.7. Automation hub Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 80 TCP HTTP Inbound Fixed value User interface 443 TCP HTTPS Inbound Fixed value User interface 5432 TCP PostgreSQL Inbound and Outbound automationhub_pg_port Open only if the internal database is used along with another component. Otherwise, this port should not be open Table 5.8. Services Catalog Port Protocol Service Direction Installer Inventory Variable Required for 22 TCP SSH Inbound and Outbound ansible_port Installation 443 TCP HTTPS Inbound nginx_https_port Access to Service Catalog user interface 5432 TCP PostgreSQL Inbound and Outbound pg_port Open only if the internal database is used. Otherwise, this port should not be open Table 5.9. Red Hat Insights for Red Hat Ansible Automation Platform URL Required for http://api.access.redhat.com:443 General account services, subscriptions https://cert-api.access.redhat.com:443 Insights data upload https://cert.cloud.redhat.com:443 Inventory upload and Cloud Connector connection https://cloud.redhat.com Access to Insights dashboard Table 5.10. Automation Hub URL Required for https://console.redhat.com:443 General account services, subscriptions https://catalog.redhat.com Indexing execution environments https://sso.redhat.com:443 TCP https://automation-hub-prd.s3.amazonaws.com https://automation-hub-prd.s3.us-east-2.amazonaws.com/ Firewall access https://galaxy.ansible.com Ansible Community curated Ansible content https://ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com https://registry.redhat.io:443 Access to container images provided by Red Hat and partners https://cert.cloud.redhat.com:443 Red Hat and partner curated Ansible Collections Table 5.11. Execution Environments (EE) URL Required for https://registry.redhat.io:443 Access to container images provided by Red Hat and partners cdn.quay.io:443 Access to container images provided by Red Hat and partners cdn01.quay.io:443 Access to container images provided by Red Hat and partners cdn02.quay.io:443 Access to container images provided by Red Hat and partners cdn03.quay.io:443 Access to container images provided by Red Hat and partners Important Image manifests and filesystem blobs are served directly from registry.redhat.io . However, from 1 May 2023, filesystem blobs are served from quay.io instead. To avoid problems pulling container images, you must enable outbound connections to the listed quay.io hostnames. Make this change to any firewall configuration that specifically enables outbound connections to registry.redhat.io . Use the hostnames instead of IP addresses when configuring firewall rules. After making this change, you can continue to pull images from registry.redhat.io . You do not require a quay.io login, or need to interact with the quay.io registry directly in any way to continue pulling Red Hat container images. For more information, see the article here
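As a practical illustration only (these commands are not part of the product or the installer), the port requirements above can be opened on a RHEL node that uses firewalld with commands such as the following; adjust the list to match the component running on that node:
sudo firewall-cmd --permanent --add-port=22/tcp      # SSH for installation and remote access
sudo firewall-cmd --permanent --add-port=27199/tcp   # Receptor mesh listener
sudo firewall-cmd --permanent --add-port=443/tcp     # UI/API over HTTPS
sudo firewall-cmd --reload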
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_planning_guide/ref-network-ports-protocols_planning
Chapter 1. Building applications overview
Chapter 1. Building applications overview Using OpenShift Container Platform, you can create, edit, delete, and manage applications using the web console or command line interface (CLI). 1.1. Working on a project Using projects, you can organize and manage applications in isolation. You can manage the entire project lifecycle, including creating, viewing, and deleting a project in OpenShift Container Platform. After you create the project, you can grant or revoke access to a project for the users using the Developer perspective. You can also edit the project configuration resource while creating a project template that is used for automatic provisioning of new projects. Using the CLI, you can create a project as a different user by impersonating a request to the OpenShift Container Platform API. When you make a request to create a new project, the OpenShift Container Platform uses an endpoint to provision the project according to a customizable template. As a cluster administrator, you can choose to prevent an authenticated user group from self-provisioning new projects . 1.2. Working on an application 1.2.1. Creating an application To create applications, you must have created a project or have access to a project with the appropriate roles and permissions. You can create an application by using either the Developer perspective in the web console , installed Operators , or the OpenShift Container Platform CLI . You can source the applications to be added to the project from Git, JAR files, devfiles, or the developer catalog. You can also use components that include source or binary code, images, and templates to create an application by using the OpenShift Container Platform CLI. With the OpenShift Container Platform web console, you can create an application from an Operator installed by a cluster administrator. 1.2.2. Maintaining an application After you create the application you can use the web console to monitor your project or application metrics . You can also edit or delete the application using the web console. When the application is running, not all applications resources are used. As a cluster administrator, you can choose to idle these scalable resources to reduce resource consumption. 1.2.3. Deploying an application You can deploy your application using Deployment or DeploymentConfig objects and manage them from the web console. You can create deployment strategies that help reduce downtime during a change or an upgrade to the application. You can also use Helm , a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. 1.3. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace where you can discover and access certified software for container-based environments that run on public clouds and on-premises.
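As a minimal sketch of the CLI path described above (the project name and repository URL are placeholders, not part of the original procedure), creating an application from source might look like this:
# Hypothetical example: create a project and an application from a Git repository with the oc CLI
oc new-project my-project                           # create and switch to a new project (placeholder name)
oc new-app https://github.com/example/my-app.git    # build and deploy from source (placeholder URL)
oc status                                           # review the resources created for the application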
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/applications/building-applications-overview
Chapter 9. Trade Zoo
Chapter 9. Trade Zoo A simple trading application that runs in the public cloud but keeps its data in a private Kafka cluster This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites. Overview This example is a simple Kafka application that shows how you can use Skupper to access a Kafka cluster at a remote site without exposing it to the public internet. It contains four services: A Kafka cluster running in a private data center. The cluster has two topics, "orders" and "updates". An order processor running in the public cloud. It consumes from "orders", matching buy and sell offers to make trades. It publishes new and updated orders and trades to "updates". A market data service running in the public cloud. It looks at the completed trades and computes the latest and average prices, which it then publishes to "updates". A web frontend service running in the public cloud. It submits buy and sell orders to "orders" and consumes from "updates" in order to show what's happening. To set up the Kafka cluster, this example uses the Kubernetes operator from the Strimzi project. The other services are small Python programs. The example uses two Kubernetes namespaces, "private" and "public", to represent the private data center and public cloud. Prerequisites The kubectl command-line tool, version 1.15 or later ( installation guide ) Access to at least one Kubernetes cluster, from any provider you choose Procedure Clone the repo for this example. Install the Skupper command-line tool Set up your namespaces Deploy the Kafka cluster Deploy the application services Create your sites Link your sites Expose the Kafka cluster Access the frontend Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository. Install the Skupper command-line tool This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment. See the Installation for details about installing the CLI. For configured systems, use the following command: Set up your namespaces Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate. Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it. A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs. For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context. Note The login procedure varies by provider. See the documentation for yours: Amazon Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Google Kubernetes Engine (GKE) IBM Kubernetes Service OpenShift Public: Private: Deploy the Kafka cluster In Private, use the kubectl create and kubectl apply commands with the listed YAML files to install the operator and deploy the cluster and topic. Private: NOTE: By default, the Kafka bootstrap server returns broker addresses that include the Kubernetes namespace in their domain name. 
When, as in this example, the Kafka client is running in a namespace with a different name from that of the Kafka cluster, this prevents the client from resolving the Kafka brokers. To make the Kafka brokers reachable, set the advertisedHost property of each broker to a domain name that the Kafka client can resolve at the remote site. In this example, this is achieved with the following listener configuration: See Advertised addresses for brokers for more information. Deploy the application services In Public, use the kubectl apply command with the listed YAML files to install the application services. Public: Create your sites A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace. For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome. Public: Sample output: Private: Sample output: As you move through the steps below, you can use skupper status at any time to check your progress. Link your sites A Skupper link is a channel for communication between two sites. Links serve as a transport for application connections and requests. Creating a link requires use of two skupper commands in conjunction, skupper token create and skupper link create . The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, The skupper link create command uses the token to create a link to the site that generated it. Note The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it. First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites. Public: Sample output: Private: Sample output: If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation. Expose the Kafka cluster In Private, use skupper expose with the --headless option to expose the Kafka cluster as a headless service on the Skupper network. Then, in Public, use kubectl get service to check that the cluster1-kafka-brokers service appears after a moment. Private: Public: Access the frontend In order to use and test the application, we need external access to the frontend. Use kubectl expose with --type LoadBalancer to open network access to the frontend service. Once the frontend is exposed, use kubectl get service/frontend to look up the external IP of the frontend service. If the external IP is <pending> , try again after a moment. Once you have the external IP, use curl or a similar tool to request the /api/health endpoint at that address. Note The <external-ip> field in the following commands is a placeholder. The actual value is an IP address. Public: Sample output: If everything is in order, you can now access the web interface by navigating to http://<external-ip>:8080/ in your browser.
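The following checks are a rough sketch (assuming the site, service, and frontend names used in this example) for verifying the setup before submitting trades:
# Hypothetical verification after linking the sites and exposing the services
skupper link status                                  # in Private: the link to Public should be active
kubectl get service/cluster1-kafka-brokers           # in Public: the proxied headless Kafka service should be listed
kubectl get service/frontend                         # in Public: wait until an external IP is assigned
curl http://<external-ip>:8080/api/health            # replace <external-ip> with the address reported above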
[ "sudo dnf install skupper-cli", "export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public", "export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private", "create -f kafka-cluster/strimzi.yaml apply -f kafka-cluster/cluster1.yaml wait --for condition=ready --timeout 900s kafka/cluster1", "spec: kafka: listeners: - name: plain port: 9092 type: internal tls: false configuration: brokers: - broker: 0 advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers", "apply -f order-processor/kubernetes.yaml apply -f market-data/kubernetes.yaml apply -f frontend/kubernetes.yaml", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.", "skupper init skupper status", "skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.", "skupper token create ~/secret.token", "skupper token create ~/secret.token Token written to ~/secret.token", "skupper link create ~/secret.token", "skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.", "skupper expose statefulset/cluster1-kafka --headless --port 9092", "get service/cluster1-kafka-brokers", "expose deployment/frontend --port 8080 --type LoadBalancer get service/frontend curl http://<external-ip>:8080/api/health", "kubectl expose deployment/frontend --port 8080 --type LoadBalancer service/frontend exposed kubectl get service/frontend NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend LoadBalancer 10.103.232.28 <external-ip> 8080:30407/TCP 15s curl http://<external-ip>:8080/api/health OK" ]
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/trade_zoo
Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. After deploying the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in total data loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect.
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway (MCG) component while deploying OpenShift Data Foundation. After you create the MCG component, you can create and manage buckets using the MCG object browser. For more information, see Creating and managing buckets using MCG object browser . Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . 
Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
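If you prefer the command line for the verification above, a rough equivalent (assuming the default openshift-storage namespace) is:
# Hypothetical CLI check of the operator and Multicloud Object Gateway pods
oc get csv -n openshift-storage       # the OpenShift Data Foundation ClusterServiceVersion should report Succeeded
oc get pods -n openshift-storage      # the noobaa-core-*, noobaa-db-pg-*, and noobaa-endpoint-* pods should be Running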
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_google_cloud/deploy-standalone-multicloud-object-gateway
Chapter 5. Migration
Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.2. 5.1. Migrating to MariaDB 10.2 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. MariaDB is a community-developed drop-in replacement for MySQL . MariaDB 10.1 has been available as a Software Collection since Red Hat Software Collections 2.2; Red Hat Software Collections 3.2 is distributed with MariaDB 10.2 . The rh-mariadb102 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, does not conflict with the mysql or mariadb packages from the core systems, so it is possible to install the rh-mariadb102 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Collection is still installed and even running. Note that if you are using MariaDB 5.5 or MariaDB 10.0 , it is necessary to upgrade to the rh-mariadb101 Software Collection first, which is described in the Red Hat Software Collections 2.4 Release Notes . For more information about MariaDB 10.2 , see the upstream documentation about changes in version 10.2 and about upgrading . Note The rh-mariadb102 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb101 and rh-mariadb102 Software Collections Major changes in MariaDB 10.2 are described in the Red Hat Software Collections 3.0 Release Notes . Since MariaDB 10.2 , behavior of the SQL_MODE variable has been changed; see the upstream documentation for details. Multiple options have changed their default values or have been deprecated or removed. For details, see the Knowledgebase article Migrating from MariaDB 10.1 to the MariaDB 10.2 Software Collection . The rh-mariadb102 Software Collection includes the rh-mariadb102-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb102*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb102* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.1.2. Upgrading from the rh-mariadb101 to the rh-mariadb102 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb101 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb101 server. service rh-mariadb101-mariadb stop Install the rh-mariadb102 Software Collection. 
yum install rh-mariadb102-mariadb-server Note that it is possible to install the rh-mariadb102 Software Collection while the rh-mariadb101 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb102 , which is stored in the /etc/opt/rh/rh-mariadb102/my.cnf file and the /etc/opt/rh/rh-mariadb102/my.cnf.d/ directory. Compare it with configuration of rh-mariadb101 stored in /etc/opt/rh/rh-mariadb101/my.cnf and /etc/opt/rh/rh-mariadb101/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb101 Software Collection is stored in the /var/opt/rh/rh-mariadb101/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb102/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data are owned by the mysql user and SELinux context is correct. Start the rh-mariadb102 database server. service rh-mariadb102-mariadb start Perform the data migration. scl enable rh-mariadb102 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mariadb102 -- mysql_upgrade -p 5.2. Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections. It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. Instructions how to upgrade to MySQL 5.7 are available in Section 5.3, "Migrating to MySQL 5.7" . 5.2.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 Software Collection. 
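The contents of the mysql-default-authentication-plugin.cnf edit mentioned above are not reproduced in this text; an illustrative (unverified) sketch of such an option-file change follows, and the note on the syspaths packages continues below.
# Illustrative contents only; verify against your installed rh-mysql80 configuration
[mysqld]
default_authentication_plugin=caching_sha2_password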
To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly, otherwise an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client-side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of --ssl-verify-server-cert options. Note that these option remains unchanged on the server side. The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 but in the future, it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated. The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.2.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running. systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. 
However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb102 Software Collection. 5.3. Migrating to MySQL 5.7 Red Hat Enterprise Linux 6 contains MySQL 5.1 as the default MySQL implementation. Red Hat Enterprise Linux 7 includes MariaDB 5.5 as the default MySQL implementation. In addition to these basic versions, MySQL 5.6 has been available as a Software Collection for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 since Red Hat Software Collections 2.0. The rh-mysql57 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql56 Software Collection, so it is possible to install the rh-mysql57 Software Collection together with the mysql , mariadb , or rh-mysql56 packages. It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 5.7 only from MySQL 5.6 . If you need to upgrade from an earlier version, upgrade to MySQL 5.6 first. Instructions how to upgrade to MySQL 5.6 are available in the Red Hat Software Collections 2.2 Release Notes . 5.3.1. Notable Differences Between MySQL 5.6 and MySQL 5.7 The mysql-bench subpackage is not included in the rh-mysql57 Software Collection. Since MySQL 5.7.7 , the default SQL mode includes NO_AUTO_CREATE_USER . Therefore it is necessary to create MySQL accounts using the CREATE USER statement because the GRANT statement no longer creates a user by default. See the upstream documentation for details. For detailed changes in MySQL 5.7 compared to earlier versions, see the upstream documentation: What Is New in MySQL 5.7 and Changes Affecting Upgrades to MySQL 5.7 . 5.3.2. Upgrading to the rh-mysql57 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql57 Software Collection. yum install rh-mysql57-mysql-server Inspect the configuration of rh-mysql57 , which is stored in the /etc/opt/rh/rh-mysql57/my.cnf file and the /etc/opt/rh/rh-mysql57/my.cnf.d/ directory. Compare it with the configuration of rh-mysql56 stored in /etc/opt/rh/rh-mysql56/my.cnf and /etc/opt/rh/rh-mysql56/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql56 database server, if it is still running. service rh-mysql56-mysqld stop All data of the rh-mysql56 Software Collection is stored in the /var/opt/rh/rh-mysql56/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql57/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql57 database server. service rh-mysql57-mysqld start Perform the data migration. scl enable rh-mysql57 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql57 -- mysql_upgrade -p 5.4. Migrating to MongoDB 3.6 Red Hat Software Collections 3.2 is released with MongoDB 3.6 , provided by the rh-mongodb36 Software Collection and available only for Red Hat Enterprise Linux 7. 
The rh-mongodb36 Software Collection includes the rh-mongodb36-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mongodb36*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb36* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . 5.4.1. Notable Differences Between MongoDB 3.4 and MongoDB 3.6 General Changes The rh-mongodb36 Software Collection introduces the following significant general change: On Non-Uniform Memory Access (NUMA) hardware, it is possible to configure systemd services to be launched using the numactl command; see the upstream recommendation . To use MongoDB with the numactl command, you need to install the numactl RPM package and change the /etc/opt/rh/rh-mongodb36/sysconfig/mongod and /etc/opt/rh/rh-mongodb36/sysconfig/mongos configuration files accordingly. Compatibility Changes MongoDB 3.6 includes various minor changes that can affect compatibility with previous versions of MongoDB : MongoDB binaries now bind to localhost by default, so listening on different IP addresses needs to be explicitly enabled. Note that this is already the default behavior for systemd services distributed with MongoDB Software Collections. The MONGODB-CR authentication mechanism has been deprecated. For databases with users created by MongoDB versions earlier than 3.0, upgrade the authentication schema to SCRAM . The HTTP interface and REST API have been removed. Arbiters in replica sets have priority 0. Master-slave replication has been deprecated. For detailed compatibility changes in MongoDB 3.6 , see the upstream release notes . Backwards Incompatible Features The following MongoDB 3.6 features are backwards incompatible and require the version to be set to 3.6 using the featureCompatibilityVersion command : UUID for collections $jsonSchema document validation Change streams Chunk aware secondaries View definitions, document validators, and partial index filters that use version 3.6 query features Sessions and retryable writes Users and roles with authenticationRestrictions For details regarding backward incompatible changes in MongoDB 3.6 , see the upstream release notes . 5.4.2. Upgrading from the rh-mongodb34 to the rh-mongodb36 Software Collection Important Before migrating from the rh-mongodb34 to the rh-mongodb36 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb34/lib/mongodb/ directory. In addition, see the Compatibility Changes to ensure that your applications and deployments are compatible with MongoDB 3.6 . To upgrade to the rh-mongodb36 Software Collection, perform the following steps. To be able to upgrade, the rh-mongodb34 instance must have featureCompatibilityVersion set to 3.4 . Check featureCompatibilityVersion : ~]$ scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command.
Install the MongoDB servers and shells from the rh-mongodb36 Software Collections: ~]# yum install rh-mongodb36 Stop the MongoDB 3.4 server: ~]# systemctl stop rh-mongodb34-mongod.service Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb34/lib/mongodb/* /var/opt/rh/rh-mongodb36/lib/mongodb/ Configure the rh-mongodb36-mongod daemon in the /etc/opt/rh/rh-mongodb36/mongod.conf file. Start the MongoDB 3.6 server: ~]# systemctl start rh-mongodb36-mongod.service Enable backwards incompatible features: ~]$ scl enable rh-mongodb36 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )' If the mongod server is configured with enabled access control, add the --username and --password options to the mongo command. Note After upgrading, it is recommended to run the deployment first without enabling the backwards incompatible features for a burn-in period of time, to minimize the likelihood of a downgrade. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.5. Migrating to MongoDB 3.4 The rh-mongodb34 Software Collection, available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, provides MongoDB 3.4 . 5.5.1. Notable Differences Between MongoDB 3.2 and MongoDB 3.4 General Changes The rh-mongodb34 Software Collection introduces various general changes. Major changes are listed in the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 . For detailed changes, see the upstream release notes . In addition, this Software Collection includes the rh-mongodb34-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other files. After installing the rh-mongodb34*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mongodb34* packages. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Compatibility Changes MongoDB 3.4 includes various minor changes that can affect compatibility with previous versions of MongoDB . For details, see the Knowledgebase article Migrating from MongoDB 3.2 to MongoDB 3.4 and the upstream documentation . Notably, the following MongoDB 3.4 features are backwards incompatible and require that the version is set to 3.4 using the featureCompatibilityVersion command: Support for creating read-only views from existing collections or other views Index version v: 2 , which adds support for collation, decimal data and case-insensitive indexes Support for the decimal128 format with the new decimal data type For details regarding backward incompatible changes in MongoDB 3.4 , see the upstream release notes . 5.5.2. Upgrading from the rh-mongodb32 to the rh-mongodb34 Software Collection Note that once you have upgraded to MongoDB 3.4 and started using new features, you cannot downgrade to version 3.2.7 or earlier. You can only downgrade to version 3.2.8 or later. Important Before migrating from the rh-mongodb32 to the rh-mongodb34 Software Collection, back up all your data, including any MongoDB databases, which are by default stored in the /var/opt/rh/rh-mongodb32/lib/mongodb/ directory. In addition, see the compatibility changes to ensure that your applications and deployments are compatible with MongoDB 3.4 .
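One possible way to take the recommended backup (a sketch only; the destination directory is a placeholder, and on Red Hat Enterprise Linux 6 use the service command instead of systemctl) is to stop the old server and copy its data directory:
# Hypothetical backup of the MongoDB 3.2 data directory before migrating
systemctl stop rh-mongodb32-mongod.service                            # stop the MongoDB 3.2 server first
cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/ /root/mongodb32-backup/   # copy the default data directory to a safe location (placeholder path)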
To upgrade to the rh-mongodb34 Software Collection, perform the following steps. Install the MongoDB servers and shells from the rh-mongodb34 Software Collections: ~]# yum install rh-mongodb34 Stop the MongoDB 3.2 server: ~]# systemctl stop rh-mongodb32-mongod.service Use the service rh-mongodb32-mongodb stop command on a Red Hat Enterprise Linux 6 system. Copy your data to the new location: ~]# cp -a /var/opt/rh/rh-mongodb32/lib/mongodb/* /var/opt/rh/rh-mongodb34/lib/mongodb/ Configure the rh-mongodb34-mongod daemon in the /etc/opt/rh/rh-mongodb34/mongod.conf file. Start the MongoDB 3.4 server: ~]# systemctl start rh-mongodb34-mongod.service On Red Hat Enterprise Linux 6, use the service rh-mongodb34-mongodb start command instead. Enable backwards-incompatible features: ~]USD scl enable rh-mongodb34 'mongo --host localhost --port 27017 admin' --eval 'db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )' If the mongod server is configured with enabled access control, add the --username and --password options to mongo command. Note that it is recommended to run the deployment after the upgrade without enabling these features first. For detailed information about upgrading, see the upstream release notes . For information about upgrading a Replica Set, see the upstream MongoDB Manual . For information about upgrading a Sharded Cluster, see the upstream MongoDB Manual . 5.6. Migrating to PostgreSQL 10 Red Hat Software Collections 3.2 is distributed with PostgreSQL 10 , available only for Red Hat Enterprise Linux 7. The rh-postgresql10 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. See Section 5.7, "Migrating to PostgreSQL 9.6" for instructions how to migrate to an earlier version or when using Red Hat Enterprise Linux 6. The rh-postgresql10 Software Collection includes the rh-postgresql10-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgreqsl10*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl10* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 10 , see the upstream compatibility notes . The following table provides an overview of different paths in a Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql96 and rh-postgresql10 Software Colections. Table 5.1. 
Diferences in the PostgreSQL paths Content postgresql rh-postgresql96 rh-postgresql10 Executables /usr/bin/ /opt/rh/rh-postgresql96/root/usr/bin/ /opt/rh/rh-postgresql10/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql96/root/usr/lib64/ /opt/rh/rh-postgresql10/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql10/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql96/lib/pgsql/data/ /var/opt/rh/rh-postgresql10/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql96/lib/pgsql/backups/ /var/opt/rh/rh-postgresql10/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql96/root/usr/include/pgsql/ /opt/rh/rh-postgresql10/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql96/root/usr/share/pgsql/ /opt/rh/rh-postgresql10/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql10/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.6.1. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 10 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql10 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 10, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 10 , this directory should not be present in your system. 
If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql10/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql10 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql10/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql10-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql10-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql10 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql10-postgresql -- postgresql-setup --initdb Start the new server as root : systemctl start rh-postgresql10-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql10 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.6.2. Migrating from the PostgreSQL 9.6 Software Collection to the PostgreSQL 10 Software Collection To migrate your data from the rh-postgresql96 Software Collection to the rh-postgresql10 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
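Whichever method you choose, you can confirm afterwards which server version is answering connections with a quick check like the following (a sketch, assuming default settings):
# Hypothetical post-upgrade check of the running PostgreSQL server version
su - postgres -c 'scl enable rh-postgresql10 -- psql -c "SELECT version();"'   # should report PostgreSQL 10.x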
Important Before migrating your data from PostgreSQL 9.6 to PostgreSQL 10 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql96/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql96-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql96-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql10/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql10/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 10 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql10/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql10 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql96-postgresql Alternatively, you can use the /opt/rh/rh-postgresql10/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql96-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql10-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql10-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql10 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old PostgreSQL 9.6 server, type the following command as root : chkconfig rh-postgresql96-postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise, only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql96-postgresql.service Dump all data in the PostgreSQL database into a script file.
As root , type: su - postgres -c 'scl enable rh-postgresql96 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql96-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql10-postgresql -- postgresql-setup --initdb Start the new server as root : systemctl start rh-postgresql10-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql10 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 10 server to start automatically at boot time. To disable the old PostgreSQL 9.6 server, type the following command as root : chkconfig rh-postgresql96-postgresql off To enable the PostgreSQL 10 server, type as root : chkconfig rh-postgresql10-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql10/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7. Migrating to PostgreSQL 9.6 PostgreSQL 9.6 is available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 and it can be safely installed on the same machine in parallel with PostgreSQL 8.4 from Red Hat Enterprise Linux 6, PostgreSQL 9.2 from Red Hat Enterprise Linux 7, or any version of PostgreSQL released in versions of Red Hat Software Collections. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. 5.7.1. Notable Differences Between PostgreSQL 9.5 and PostgreSQL 9.6 The most notable changes between PostgreSQL 9.5 and PostgreSQL 9.6 are described in the upstream release notes . The rh-postgresql96 Software Collection includes the rh-postgresql96-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgreqsl96*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgreqsl96* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The following table provides an overview of different paths in a Red Hat Enterprise Linux system version of PostgreSQL ( postgresql ) and in the postgresql92 , rh-postgresql95 , and rh-postgresql96 Software Collections. Note that the paths of PostgreSQL 8.4 distributed with Red Hat Enterprise Linux 6 and the system version of PostgreSQL 9.2 shipped with Red Hat Enterprise Linux 7 are the same; the paths for the rh-postgresql94 Software Collection are analogous to rh-postgresql95 . Table 5.2. 
Differences in the PostgreSQL paths
Content | postgresql | postgresql92 | rh-postgresql95 | rh-postgresql96
Executables | /usr/bin/ | /opt/rh/postgresql92/root/usr/bin/ | /opt/rh/rh-postgresql95/root/usr/bin/ | /opt/rh/rh-postgresql96/root/usr/bin/
Libraries | /usr/lib64/ | /opt/rh/postgresql92/root/usr/lib64/ | /opt/rh/rh-postgresql95/root/usr/lib64/ | /opt/rh/rh-postgresql96/root/usr/lib64/
Documentation | /usr/share/doc/postgresql/html/ | /opt/rh/postgresql92/root/usr/share/doc/postgresql/html/ | /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql/html/ | /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql/html/
PDF documentation | /usr/share/doc/postgresql-docs/ | /opt/rh/postgresql92/root/usr/share/doc/postgresql-docs/ | /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-docs/ | /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-docs/
Contrib documentation | /usr/share/doc/postgresql-contrib/ | /opt/rh/postgresql92/root/usr/share/doc/postgresql-contrib/ | /opt/rh/rh-postgresql95/root/usr/share/doc/postgresql-contrib/ | /opt/rh/rh-postgresql96/root/usr/share/doc/postgresql-contrib/
Source | not installed | not installed | not installed | not installed
Data | /var/lib/pgsql/data/ | /opt/rh/postgresql92/root/var/lib/pgsql/data/ | /var/opt/rh/rh-postgresql95/lib/pgsql/data/ | /var/opt/rh/rh-postgresql96/lib/pgsql/data/
Backup area | /var/lib/pgsql/backups/ | /opt/rh/postgresql92/root/var/lib/pgsql/backups/ | /var/opt/rh/rh-postgresql95/lib/pgsql/backups/ | /var/opt/rh/rh-postgresql96/lib/pgsql/backups/
Templates | /usr/share/pgsql/ | /opt/rh/postgresql92/root/usr/share/pgsql/ | /opt/rh/rh-postgresql95/root/usr/share/pgsql/ | /opt/rh/rh-postgresql96/root/usr/share/pgsql/
Procedural Languages | /usr/lib64/pgsql/ | /opt/rh/postgresql92/root/usr/lib64/pgsql/ | /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/ | /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/
Development Headers | /usr/include/pgsql/ | /opt/rh/postgresql92/root/usr/include/pgsql/ | /opt/rh/rh-postgresql95/root/usr/include/pgsql/ | /opt/rh/rh-postgresql96/root/usr/include/pgsql/
Other shared data | /usr/share/pgsql/ | /opt/rh/postgresql92/root/usr/share/pgsql/ | /opt/rh/rh-postgresql95/root/usr/share/pgsql/ | /opt/rh/rh-postgresql96/root/usr/share/pgsql/
Regression tests | /usr/lib64/pgsql/test/regress/ (in the -test package) | /opt/rh/postgresql92/root/usr/lib64/pgsql/test/regress/ (in the -test package) | /opt/rh/rh-postgresql95/root/usr/lib64/pgsql/test/regress/ (in the -test package) | /opt/rh/rh-postgresql96/root/usr/lib64/pgsql/test/regress/ (in the -test package)
For changes between PostgreSQL 8.4 and PostgreSQL 9.2 , refer to the Red Hat Software Collections 1.2 Release Notes . Notable changes between PostgreSQL 9.2 and PostgreSQL 9.4 are described in Red Hat Software Collections 2.0 Release Notes . For differences between PostgreSQL 9.4 and PostgreSQL 9.5 , refer to Red Hat Software Collections 2.2 Release Notes . 5.7.2. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 9.6 Software Collection Red Hat Enterprise Linux 6 includes PostgreSQL 8.4 , and Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql96 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method.
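Before choosing between the two methods, it can help to confirm which PostgreSQL packages and data directories are already present on the host. The following commands are a minimal sketch, not part of the official procedure; they use only standard rpm and shell tools and assume the default data directory locations listed in the table above.
# List installed system and Software Collection PostgreSQL packages
rpm -qa | grep -i postgresql | sort
# Report the version of the system PostgreSQL client, if it is installed
psql --version
# Check which data directories already exist before starting a migration
ls -ld /var/lib/pgsql/data /var/opt/rh/rh-postgresql96/lib/pgsql/data 2>/dev/null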
The following procedures are applicable for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 system versions of PostgreSQL . Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 9.6, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.5. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service postgresql stop To verify that the server is not running, type: service postgresql status Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrade from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.6. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service postgresql start Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : service postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.7.3. Migrating from the PostgreSQL 9.5 Software Collection to the PostgreSQL 9.6 Software Collection To migrate your data from the rh-postgresql95 Software Collection to the rh-postgresql96 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from PostgreSQL 9.5 to PostgreSQL 9.6 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql95/lib/pgsql/data/ directory. Procedure 5.7. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : service rh-postgresql95-postgresql stop To verify that the server is not running, type: service rh-postgresql95-postgresql status Verify that the old directory /var/opt/rh/rh-postgresql95/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql95/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql96/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql96/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 9.6 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql96/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql96 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql Alternatively, you can use the /opt/rh/rh-postgresql96/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql95-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql96-postgresql.log log file to find out if any problems occurred during the upgrade. 
Start the new server as root : service rh-postgresql96-postgresql start It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql96 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.8. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : service rh-postgresql95-postgresql start Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'scl enable rh-postgresql95 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : service rh-postgresql95-postgresql stop Initialize the data directory for the new server as root : scl enable rh-postgresql96-postgresql -- postgresql-setup --initdb Start the new server as root : service rh-postgresql96-postgresql start Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql96 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 9.6 server to start automatically at boot time. To disable the old PostgreSQL 9.5 server, type the following command as root : chkconfig rh-postgresql95-postgresql off To enable the PostgreSQL 9.6 server, type as root : chkconfig rh-postgresql96-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql96/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. If you need to migrate from the postgresql92 Software Collection, refer to Red Hat Software Collections 2.0 Release Notes ; the procedure is the same; you just need to adjust the version of the new Collection. The same applies to migration from the rh-postgresql94 Software Collection, which is described in Red Hat Software Collections 2.2 Release Notes . 5.8. Migrating to nginx 1.14 The root directory for the rh-nginx114 Software Collection is located in /opt/rh/rh-nginx114/root/ . The error log is stored in /var/opt/rh/rh-nginx114/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx114/nginx/ directory. Configuration files in nginx 1.14 have the same syntax and largely the same format as in earlier nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx114/nginx/default.d/ directory are included in the default server block configuration for port 80 . Important Before upgrading from nginx 1.12 to nginx 1.14 , back up all your data, including web pages located in the /opt/rh/nginx112/root/ tree and configuration files located in the /etc/opt/rh/nginx112/nginx/ tree.
If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx112/root/ tree, replicate those changes in the new /opt/rh/rh-nginx114/root/ and /etc/opt/rh/rh-nginx114/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.8 , nginx 1.10 , or nginx 1.12 to nginx 1.14 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ .
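Replicating customized content and configuration into the new Collection can be scripted. The following shell sketch is illustrative rather than an official procedure: it assumes a migration from the nginx112 Collection using the default paths listed above, that the rh-nginx114 Collection is already installed, and that customizations live in the usual html/ and conf.d/ subdirectories (an assumption, since your layout may differ).
# Back up the old document root and configuration trees
tar -czf /root/nginx112-backup.tar.gz /opt/rh/nginx112/root /etc/opt/rh/nginx112/nginx
# Copy customized web content and configuration snippets into the new Collection
cp -a /opt/rh/nginx112/root/usr/share/nginx/html/. /opt/rh/rh-nginx114/root/usr/share/nginx/html/
cp -a /etc/opt/rh/nginx112/nginx/conf.d/. /etc/opt/rh/rh-nginx114/nginx/conf.d/
# Validate the new configuration before starting the rh-nginx114 service
scl enable rh-nginx114 -- nginx -t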
[ "[mysqld] default_authentication_plugin=caching_sha2_password" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.2_release_notes/chap-Migration
Chapter 1. What is simple content access?
Chapter 1. What is simple content access? Simple content access is a set of capabilities that enables a change in the way Red Hat manages its subscription and entitlement enforcement model. This model of content access and consumption results in fewer barriers to content deployment. Note The process of migrating Red Hat accounts and organizations that primarily use Red Hat Subscription Management for subscription management to use simple content access begins on October 25, 2024, and will be complete in November 2024. For Red Hat accounts and organizations that primarily use Satellite, versions 6.15 and earlier can continue to support an entitlement-based workflow for the remainder of the supported lifecycle for those versions. However, Satellite version 6.16 and later versions will support only the simple content access workflow. With simple content access, the enforcement model changes from a per-system requirement, where you must attach a subscription to a system before you can access content, to a per-organization and per-account requirement, where you can access content on a system without attaching a subscription to that system. Because of the added freedom and flexibility to consume content that simple content access provides, and in the absence of strict entitlement enforcement from the classic entitlement-based subscription model, it becomes important for you to keep track of how you are using your subscriptions. With the subscriptions service, Red Hat provides additional tooling to help you with tracking and compliance. The subscriptions service is a reporting solution that provides account-wide visibility of both subscription usage and utilization and aids in self-governance of your entire subscription profile. When simple content access and the subscriptions service are used together, they enable a different and more flexible subscription experience. Overall, this experience removes or improves many of the high-overhead and complicated business processes that are associated with the classic Red Hat entitlement-based enforcement subscription model: Time-consuming processes that require multiple Red Hat tools and many steps for content to be accessed and used. Overly complex and sometimes extremely manual processes that are needed to complete subscription reporting. Processes to resolve problems related to accessing content, under- and over-deployment, renewals, and so on, that resulted in significant business impact to Red Hat customers, including being blocked from content access. You can choose to use neither, either, or both of these services. However, simple content access and the subscriptions service are designed as complementary services and function best when they are used in tandem. To learn more about the subscriptions service and how you can use it with simple content access, see the Getting Started with the Subscriptions Service guide.
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_simple_content_access/con-what-is-simplecontent_assembly-simplecontent-ctxt
Chapter 16. Creating assets
Chapter 16. Creating assets You can create business processes, rules, DRL files, and other assets in your Business Central projects. Note Migrating business processes is an irreversible process. Procedure In Business Central, go to Menu Design Projects and click the project name. For example, Evaluation . Click Add Asset and select the asset type. In the Create new asset_type window, add the required information and click Ok . Figure 16.1. Define Asset Note If you have not created a project, you can either add a project, use a sample project, or import an existing project. For more information, see Managing projects in Business Central .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/creating_assets_proc_managing-assets
9.2. Known Issues
9.2. Known Issues The udev daemon in Red Hat Enterprise Linux 6 watches all devices for changes. If a change occurs, the device is rescanned for device information to be stored in the udev database. The scanning process causes additional I/O to devices after they are changed by tools. udev can be told to exclude devices from being watched with a udev rule. A rule can be created by adding a new file <myname>.rules in /etc/udev/rules.d containing the following line: The SYMLINK should be replaced with any symlink path found in /dev/disk/* for the device in question. This will prevent unexpected I/O on the device after data was written directly to the device (not on the filesystem). However, it will also prevent device updates in the udev database, like filesystem labels, symbolic links in /dev/disk/*, etc. Under some circumstances, the bfa-firmware package in Red Hat Enterprise Linux 6 may cause devices that use this firmware to encounter a rare memory parity error. To work around this issue, update to the newer firmware package, available directly from Brocade. Red Hat Enterprise Linux 6 only has support for the first revision of the UPEK Touchstrip fingerprint reader (USB ID 147e:2016). Attempting to use a second revision device may cause the fingerprint reader daemon to crash. The lsusb command listed below will return the version of the device being used in an individual machine. The Emulex Fibre Channel/Fibre Channel-over-Ethernet (FCoE) driver in Red Hat Enterprise Linux 6 does not support DH-CHAP authentication. DH-CHAP authentication provides secure access between hosts and mass storage in Fibre-Channel and FCoE SANs in compliance with the FC-SP specification. Note, however, that the Emulex driver ( lpfc ) does support DH-CHAP authentication on Red Hat Enterprise Linux 5, from version 5.4. Future Red Hat Enterprise Linux 6 releases may include DH-CHAP authentication. Partial Offload iSCSI adapters do not work on Red Hat Enterprise Linux. Consequently, devices that use the be2iscsi driver cannot be used during installation. The hpsa_allow_any kernel option allows the hpsa driver to be used with older hardware that typically uses the cciss module by default. To use the hpsa driver with older hardware, set hpsa_allow_any=1 and blacklist the cciss module. Note, however, that this is an unsupported, non-default configuration. Platforms with BIOS/UEFI that are unaware of PCI-e SR-IOV capabilities may fail to enable virtual functions. The recommended minimum HBA firmware revision for use with the mpt2sas driver is "Phase 5 firmware" (i.e. with a version number in the form 05.xx.xx.xx ). Note that following this recommendation is especially important on complex SAS configurations involving multiple SAS expanders. The persistent naming of devices that are dynamically discovered in a system is a large problem that exists both in and outside of kdump. Nominally, devices are detected in the same order, which leads to consistent naming. In cases where devices are not detected in the same order, device abstraction layers (e.g. LVM) essentially resolve the issue through the use of metadata stored on the devices to create consistency. In the rare cases where no such abstraction layer is in use, and renaming devices causes issues with kdump, it is recommended that devices be referred to by disk label or UUID in kdump.conf. The following issues and limitations may be encountered with the Broadcom bnx2 , bnx2x , and cnic drivers: only one VLAN is supported per port, and if the interface is deactivated and reactivated (that is, with the ifdown and ifup commands), the driver needs to be unloaded and reloaded to function correctly.
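For the hpsa workaround described above, the module parameter and the cciss blacklist can be placed in a modprobe configuration file. This is only a sketch of one possible (and, as noted, unsupported) setup; the file name hpsa-legacy.conf is arbitrary, and the parameter could equally be passed on the kernel command line as hpsa.hpsa_allow_any=1.
# /etc/modprobe.d/hpsa-legacy.conf (example file name)
# Allow the hpsa driver to claim older controllers and prevent cciss from binding to them
options hpsa hpsa_allow_any=1
blacklist cciss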
[ "ACTION==\"add|change\", SYMLINK==\"disk/by-id/scsi-SATA_SAMSUNG_HD400LDS0AXJ1LL903246\", OPTIONS+=\"nowatch\"", "lsusb -v -d 147e:2016 | grep bcdDevice" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/ar01s09s02
5.4.2. Creating Striped Volumes
5.4.2. Creating Striped Volumes For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O. For general information about striped volumes, see Section 3.3.2, "Striped Logical Volumes" . When you create a striped logical volume, you specify the number of stripes with the -i argument of the lvcreate command. This determines over how many physical volumes the logical volume will be striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device. For example, in a two-legged stripe, the maximum size is twice the size of the smaller device. In a three-legged stripe, the maximum size is three times the size of the smallest device. The following command creates a striped logical volume across 2 physical volumes with a stripe of 64kB. The logical volume is 50 gigabytes in size, is named gfslv , and is carved out of volume group vg0 . As with linear volumes, you can specify the extents of the physical volume that you are using for the stripe. The following command creates a striped volume 100 extents in size that stripes across two physical volumes, is named stripelv and is in volume group testvg . The stripe will use sectors 0-49 of /dev/sda1 and sectors 50-99 of /dev/sdb1 .
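As an illustration of the three-legged case described above, a volume group with at least three physical volumes could be striped as follows. This is a hypothetical example, not part of the original text; the logical volume name stripedlv and the 90 gigabyte size are made up, and vg0 is assumed to contain three physical volumes.
# Create a 90 gigabyte logical volume striped across 3 physical volumes with a 64 kilobyte stripe size
lvcreate -L 90G -i3 -I64 -n stripedlv vg0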
[ "lvcreate -L 50G -i2 -I64 -n gfslv vg0", "lvcreate -l 100 -i2 -nstripelv testvg /dev/sda1:0-49 /dev/sdb1:50-99 Using default stripesize 64.00 KB Logical volume \"stripelv\" created" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lv_stripecreate
8.2. Types
8.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with squid . Different types allow you to configure flexible access: httpd_squid_script_exec_t This type is used for utilities such as cachemgr.cgi , which provides a variety of statistics about squid and its configuration. squid_cache_t Use this type for data that is cached by squid, as defined by the cache_dir directive in /etc/squid/squid.conf . By default, files created in or copied into /var/cache/squid/ and /var/spool/squid/ are labeled with the squid_cache_t type. Files for the squidGuard URL redirector plugin for squid created in or copied to /var/squidGuard/ are also labeled with the squid_cache_t type. Squid is only able to use files and directories that are labeled with this type for its cached data. squid_conf_t This type is used for the directories and files that squid uses for its configuration. Existing files, or those created in or copied to /etc/squid/ and /usr/share/squid/ , are labeled with this type, including error messages and icons. squid_exec_t This type is used for the squid binary, /usr/sbin/squid . squid_log_t This type is used for logs. Existing files, or those created in or copied to /var/log/squid/ or /var/log/squidGuard/ , must be labeled with this type. squid_initrc_exec_t This type is used for the initialization file required to start squid, which is located at /etc/rc.d/init.d/squid . squid_var_run_t This type is used by files in /var/run/ , especially the process ID (PID) file /var/run/squid.pid , which is created by squid when it runs.
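As a practical illustration of how these types are applied, the following commands label a non-default cache directory with the squid_cache_t type so that squid can use it. This is a hedged sketch: the /srv/squid/cache path is hypothetical, and the commands assume the semanage utility (from the policycoreutils-python package) is installed.
# Persistently map a custom cache directory to the squid_cache_t type
semanage fcontext -a -t squid_cache_t "/srv/squid/cache(/.*)?"
# Apply the context to the existing files and directories
restorecon -Rv /srv/squid/cache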
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-squid_caching_proxy-types
Chapter 9. Deprecated Functionality
Chapter 9. Deprecated Functionality This chapter provides an overview of functionality that has been deprecated in all minor releases of Red Hat Enterprise Linux 7 up to Red Hat Enterprise Linux 7. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For details regarding differences between RHEL 7 and RHEL 8, see Considerations in adopting RHEL 8 . 9.1. Deprecated Packages The following packages are now deprecated. For information regarding replaced packages or availability in an unsupported RHEL 8 repository (if applicable), see Considerations in adopting RHEL 8 . a2ps abrt-addon-upload-watch abrt-devel abrt-gui-devel abrt-retrace-client acpid-sysvinit advancecomp adwaita-icon-theme-devel adwaita-qt-common adwaita-qt4 agg aic94xx-firmware akonadi akonadi-devel akonadi-mysql alacarte alsa-tools anaconda-widgets-devel ant-antunit ant-antunit-javadoc antlr-C++-doc antlr-python antlr-tool apache-commons-collections-javadoc apache-commons-collections-testframework apache-commons-configuration apache-commons-configuration-javadoc apache-commons-daemon apache-commons-daemon-javadoc apache-commons-daemon-jsvc apache-commons-dbcp apache-commons-dbcp-javadoc apache-commons-digester apache-commons-digester-javadoc apache-commons-jexl apache-commons-jexl-javadoc apache-commons-lang-javadoc apache-commons-pool apache-commons-pool-javadoc apache-commons-validator apache-commons-validator-javadoc apache-commons-vfs apache-commons-vfs-ant apache-commons-vfs-examples apache-commons-vfs-javadoc apache-rat apache-rat-core apache-rat-javadoc apache-rat-plugin apache-rat-tasks apr-util-nss args4j args4j-javadoc ark ark-libs asciidoc-latex at-spi at-spi-devel at-spi-python at-sysvinit atlas-static attica attica-devel audiocd-kio audiocd-kio-devel audiocd-kio-libs audiofile audiofile-devel audit-libs-python audit-libs-static authconfig authconfig-gtk authd autogen-libopts-devel automoc autotrace-devel avahi-dnsconfd avahi-glib-devel avahi-gobject-devel avahi-qt3 avahi-qt3-devel avahi-qt4 avahi-qt4-devel avahi-tools avahi-ui avahi-ui-devel avahi-ui-tools avalon-framework avalon-framework-javadoc avalon-logkit avalon-logkit-javadoc bacula-console-bat bacula-devel bacula-traymonitor baekmuk-ttf-batang-fonts baekmuk-ttf-dotum-fonts baekmuk-ttf-fonts-common baekmuk-ttf-fonts-ghostscript baekmuk-ttf-gulim-fonts baekmuk-ttf-hline-fonts base64coder base64coder-javadoc batik batik-demo batik-javadoc batik-rasterizer batik-slideshow batik-squiggle batik-svgpp batik-ttf2svg bcc-devel bcel bison-devel blas-static blas64-devel blas64-static bltk bluedevil bluedevil-autostart bmc-snmp-proxy bogofilter-bogoupgrade bridge-utils 
bsdcpio bsh-demo bsh-utils btrfs-progs btrfs-progs-devel buildnumber-maven-plugin buildnumber-maven-plugin-javadoc bwidget bzr bzr-doc cairo-tools cal10n caribou caribou-antler caribou-devel caribou-gtk2-module caribou-gtk3-module cdi-api-javadoc cdparanoia-static cdrskin ceph-common check-static cheese-libs-devel cifs-utils-devel cim-schema-docs cim-schema-docs cjkuni-ukai-fonts clutter-gst2-devel clutter-tests cmpi-bindings-pywbem cobertura cobertura-javadoc cockpit-machines-ovirt codehaus-parent codemodel codemodel-javadoc cogl-tests colord-extra-profiles colord-kde compat-cheese314 compat-dapl compat-dapl-devel compat-dapl-static compat-dapl-utils compat-db compat-db-headers compat-db47 compat-exiv2-023 compat-gcc-44 compat-gcc-44-c++ compat-gcc-44-gfortran compat-glade315 compat-glew compat-glibc compat-glibc-headers compat-gnome-desktop314 compat-grilo02 compat-libcap1 compat-libcogl-pango12 compat-libcogl12 compat-libcolord1 compat-libf2c-34 compat-libgdata13 compat-libgfortran-41 compat-libgnome-bluetooth11 compat-libgnome-desktop3-7 compat-libgweather3 compat-libical1 compat-libmediaart0 compat-libmpc compat-libpackagekit-glib2-16 compat-libstdc++-33 compat-libtiff3 compat-libupower-glib1 compat-libxcb compat-locales-sap-common compat-openldap compat-openmpi16 compat-openmpi16-devel compat-opensm-libs compat-poppler022 compat-poppler022-cpp compat-poppler022-glib compat-poppler022-qt compat-sap-c++-5 compat-sap-c++-6 compat-sap-c++-7 conman console-setup coolkey coolkey-devel cpptest cpptest-devel cppunit cppunit-devel cppunit-doc cpuid cracklib-python crda-devel crit criu-devel crypto-utils cryptsetup-python cvs cvs-contrib cvs-doc cvs-inetd cvsps cyrus-imapd-devel dapl dapl-devel dapl-static dapl-utils dbus-doc dbus-python-devel dbus-tests dbusmenu-qt dbusmenu-qt-devel dbusmenu-qt-devel-docs debugmode dejagnu dejavu-lgc-sans-fonts dejavu-lgc-sans-mono-fonts dejavu-lgc-serif-fonts deltaiso dhcp-devel dialog-devel dleyna-connector-dbus-devel dleyna-core-devel dlm-devel dmraid dmraid-devel dmraid-events dmraid-events-logwatch docbook-simple docbook-slides docbook-style-dsssl docbook-utils docbook-utils-pdf docbook5-schemas docbook5-style-xsl docbook5-style-xsl-extensions docker-rhel-push-plugin dom4j dom4j-demo dom4j-javadoc dom4j-manual dovecot-pigeonhole dracut-fips dracut-fips-aesni dragon drm-utils drpmsync dtdinst e2fsprogs-static ecj edac-utils-devel efax efivar-devel egl-utils ekiga ElectricFence emacs-a2ps emacs-a2ps-el emacs-auctex emacs-auctex-doc emacs-git emacs-git-el emacs-gnuplot emacs-gnuplot-el emacs-php-mode empathy enchant-aspell enchant-voikko eog-devel epydoc espeak-devel evince-devel evince-dvi evolution-data-server-doc evolution-data-server-perl evolution-data-server-tests evolution-devel evolution-devel-docs evolution-tests expat-static expect-devel expectk farstream farstream-devel farstream-python farstream02-devel fedfs-utils-admin fedfs-utils-client fedfs-utils-common fedfs-utils-devel fedfs-utils-lib fedfs-utils-nsdbparams fedfs-utils-python fedfs-utils-server felix-bundlerepository felix-bundlerepository-javadoc felix-framework felix-framework-javadoc felix-osgi-obr felix-osgi-obr-javadoc felix-shell felix-shell-javadoc fence-sanlock festival festival-devel festival-docs festival-freebsoft-utils festival-lib festival-speechtools-devel festival-speechtools-libs festival-speechtools-utils festvox-awb-arctic-hts festvox-bdl-arctic-hts festvox-clb-arctic-hts festvox-jmk-arctic-hts festvox-kal-diphone festvox-ked-diphone festvox-rms-arctic-hts 
festvox-slt-arctic-hts file-static filebench filesystem-content finch finch-devel finger finger-server flatpak-devel flex-devel fltk-fluid fltk-static flute-javadoc folks folks-devel folks-tools fontforge-devel fontpackages-tools fonttools fop fop-javadoc fprintd-devel freeradius-python freetype-demos fros fros-gnome fros-recordmydesktop fwupd-devel fwupdate-devel gamin-python gavl-devel gcab gcc-gnat gcc-go gcc-objc gcc-objc++ gcc-plugin-devel gconf-editor gd-progs gdk-pixbuf2-tests gdm-devel gdm-pam-extensions-devel gedit-devel gedit-plugin-bookmarks gedit-plugin-bracketcompletion gedit-plugin-charmap gedit-plugin-codecomment gedit-plugin-colorpicker gedit-plugin-colorschemer gedit-plugin-commander gedit-plugin-drawspaces gedit-plugin-findinfiles gedit-plugin-joinlines gedit-plugin-multiedit gedit-plugin-smartspaces gedit-plugin-synctex gedit-plugin-terminal gedit-plugin-textsize gedit-plugin-translate gedit-plugin-wordcompletion gedit-plugins gedit-plugins-data gegl-devel geoclue geoclue-devel geoclue-doc geoclue-gsmloc geoclue-gui GeoIP GeoIP-data GeoIP-devel GeoIP-update geronimo-jaspic-spec geronimo-jaspic-spec-javadoc geronimo-jaxrpc geronimo-jaxrpc-javadoc geronimo-jms geronimo-jta geronimo-jta-javadoc geronimo-osgi-support geronimo-osgi-support-javadoc geronimo-saaj geronimo-saaj-javadoc ghostscript-chinese ghostscript-chinese-zh_CN ghostscript-chinese-zh_TW ghostscript-cups ghostscript-devel ghostscript-gtk giflib-utils gimp-data-extras gimp-help gimp-help-ca gimp-help-da gimp-help-de gimp-help-el gimp-help-en_GB gimp-help-es gimp-help-fr gimp-help-it gimp-help-ja gimp-help-ko gimp-help-nl gimp-help-nn gimp-help-pt_BR gimp-help-ru gimp-help-sl gimp-help-sv gimp-help-zh_CN git-bzr git-cvs git-gnome-keyring git-hg git-p4 gjs-tests glade glade3 glade3-libgladeui glade3-libgladeui-devel glassfish-dtd-parser glassfish-dtd-parser-javadoc glassfish-jaxb-javadoc glassfish-jsp glassfish-jsp-javadoc glew glib-networking-tests gmp-static gnome-clocks gnome-common gnome-contacts gnome-desktop3-tests gnome-devel-docs gnome-dictionary gnome-doc-utils gnome-doc-utils-stylesheets gnome-documents gnome-documents-libs gnome-icon-theme gnome-icon-theme-devel gnome-icon-theme-extras gnome-icon-theme-legacy gnome-icon-theme-symbolic gnome-packagekit gnome-packagekit-common gnome-packagekit-installer gnome-packagekit-updater gnome-python2 gnome-python2-bonobo gnome-python2-canvas gnome-python2-devel gnome-python2-gconf gnome-python2-gnome gnome-python2-gnomevfs gnome-settings-daemon-devel gnome-software-devel gnome-vfs2 gnome-vfs2-devel gnome-vfs2-smb gnome-weather gnome-weather-tests gnote gnu-efi-utils gnu-getopt gnu-getopt-javadoc gnuplot-latex gnuplot-minimal gob2 gom-devel google-noto-sans-korean-fonts google-noto-sans-simplified-chinese-fonts google-noto-sans-traditional-chinese-fonts gperftools gperftools-devel gperftools-libs gpm-static grantlee grantlee-apidocs grantlee-devel graphviz-graphs graphviz-guile graphviz-java graphviz-lua graphviz-ocaml graphviz-perl graphviz-php graphviz-python graphviz-ruby graphviz-tcl groff-doc groff-perl groff-x11 groovy groovy-javadoc grub2 grub2-ppc-modules grub2-ppc64-modules gsm-tools gsound-devel gssdp-utils gstreamer gstreamer-devel gstreamer-devel-docs gstreamer-plugins-bad-free gstreamer-plugins-bad-free-devel gstreamer-plugins-bad-free-devel-docs gstreamer-plugins-base gstreamer-plugins-base-devel gstreamer-plugins-base-devel-docs gstreamer-plugins-base-tools gstreamer-plugins-good gstreamer-plugins-good-devel-docs gstreamer-python 
gstreamer-python-devel gstreamer-tools gstreamer1-devel-docs gstreamer1-plugins-base-devel-docs gstreamer1-plugins-base-tools gstreamer1-plugins-ugly-free-devel gtk-vnc gtk-vnc-devel gtk-vnc-python gtk-vnc2-devel gtk3-devel-docs gtk3-immodules gtk3-tests gtkhtml3 gtkhtml3-devel gtksourceview3-tests gucharmap gucharmap-devel gucharmap-libs gupnp-av-devel gupnp-av-docs gupnp-dlna-devel gupnp-dlna-docs gupnp-docs gupnp-igd-python gutenprint-devel gutenprint-extras gutenprint-foomatic gvfs-tests gvnc-devel gvnc-tools gvncpulse gvncpulse-devel gwenview gwenview-libs hamcrest hawkey-devel hesiod highcontrast-qt highcontrast-qt4 highcontrast-qt5 highlight-gui hispavoces-pal-diphone hispavoces-sfl-diphone hsakmt hsakmt-devel hspell-devel hsqldb hsqldb-demo hsqldb-javadoc hsqldb-manual htdig html2ps http-parser-devel httpunit httpunit-doc httpunit-javadoc i2c-tools-eepromer i2c-tools-python ibus-pygtk2 ibus-qt ibus-qt-devel ibus-qt-docs ibus-rawcode ibus-table-devel ibutils ibutils-devel ibutils-libs icc-profiles-openicc icon-naming-utils im-chooser im-chooser-common ImageMagick ImageMagick-c++ ImageMagick-c++-devel ImageMagick-devel ImageMagick-doc ImageMagick-perl imake imsettings imsettings-devel imsettings-gsettings imsettings-libs imsettings-qt imsettings-xim indent infinipath-psm infinipath-psm-devel iniparser iniparser-devel iok ipa-gothic-fonts ipa-mincho-fonts ipa-pgothic-fonts ipa-pmincho-fonts iperf3-devel iproute-doc ipset-devel ipsilon ipsilon-authform ipsilon-authgssapi ipsilon-authldap ipsilon-base ipsilon-client ipsilon-filesystem ipsilon-infosssd ipsilon-persona ipsilon-saml2 ipsilon-saml2-base ipsilon-tools-ipa iputils-sysvinit iscsi-initiator-utils-devel isdn4k-utils isdn4k-utils-devel isdn4k-utils-doc isdn4k-utils-static isdn4k-utils-vboxgetty isomd5sum-devel isorelax istack-commons-javadoc ixpdimm_sw ixpdimm_sw-devel ixpdimm-cli ixpdimm-monitor jai-imageio-core jai-imageio-core-javadoc jakarta-commons-httpclient-demo jakarta-commons-httpclient-javadoc jakarta-commons-httpclient-manual jakarta-oro jakarta-taglibs-standard jakarta-taglibs-standard-javadoc jandex jandex-javadoc jansson-devel-doc jarjar jarjar-javadoc jarjar-maven-plugin jasper jasper-utils java-1.6.0-openjdk java-1.6.0-openjdk-demo java-1.6.0-openjdk-devel java-1.6.0-openjdk-javadoc java-1.6.0-openjdk-src java-1.7.0-openjdk java-1.7.0-openjdk-accessibility java-1.7.0-openjdk-demo java-1.7.0-openjdk-devel java-1.7.0-openjdk-headless java-1.7.0-openjdk-javadoc java-1.7.0-openjdk-src java-1.8.0-openjdk-accessibility-debug java-1.8.0-openjdk-debug java-1.8.0-openjdk-demo-debug java-1.8.0-openjdk-devel-debug java-1.8.0-openjdk-headless-debug java-1.8.0-openjdk-javadoc-debug java-1.8.0-openjdk-javadoc-zip-debug java-1.8.0-openjdk-src-debug java-11-openjdk-debug java-11-openjdk-demo-debug java-11-openjdk-devel-debug java-11-openjdk-headless-debug java-11-openjdk-javadoc-debug java-11-openjdk-javadoc-zip-debug java-11-openjdk-jmods-debug java-11-openjdk-src-debug javamail jaxen jboss-ejb-3.1-api jboss-ejb-3.1-api-javadoc jboss-el-2.2-api jboss-el-2.2-api-javadoc jboss-jaxrpc-1.1-api jboss-jaxrpc-1.1-api-javadoc jboss-servlet-2.5-api jboss-servlet-2.5-api-javadoc jboss-servlet-3.0-api jboss-servlet-3.0-api-javadoc jboss-specs-parent jboss-transaction-1.1-api jboss-transaction-1.1-api-javadoc jdom jettison jettison-javadoc jetty-annotations jetty-ant jetty-artifact-remote-resources jetty-assembly-descriptors jetty-build-support jetty-build-support-javadoc jetty-client jetty-continuation jetty-deploy 
jetty-distribution-remote-resources jetty-http jetty-io jetty-jaas jetty-jaspi jetty-javadoc jetty-jmx jetty-jndi jetty-jsp jetty-jspc-maven-plugin jetty-maven-plugin jetty-monitor jetty-parent jetty-plus jetty-project jetty-proxy jetty-rewrite jetty-runner jetty-security jetty-server jetty-servlet jetty-servlets jetty-start jetty-test-policy jetty-test-policy-javadoc jetty-toolchain jetty-util jetty-util-ajax jetty-version-maven-plugin jetty-version-maven-plugin-javadoc jetty-webapp jetty-websocket-api jetty-websocket-client jetty-websocket-common jetty-websocket-parent jetty-websocket-server jetty-websocket-servlet jetty-xml jing jing-javadoc jline-demo jna jna-contrib jna-javadoc joda-convert joda-convert-javadoc js js-devel jsch-demo json-glib-tests jsr-311 jsr-311-javadoc juk junit junit-demo jvnet-parent k3b k3b-common k3b-devel k3b-libs kaccessible kaccessible-libs kactivities kactivities-devel kamera kate kate-devel kate-libs kate-part kcalc kcharselect kcm_colors kcm_touchpad kcm-gtk kcolorchooser kcoloredit kde-base-artwork kde-baseapps kde-baseapps-devel kde-baseapps-libs kde-filesystem kde-l10n kde-l10n-Arabic kde-l10n-Basque kde-l10n-Bosnian kde-l10n-British kde-l10n-Bulgarian kde-l10n-Catalan kde-l10n-Catalan-Valencian kde-l10n-Croatian kde-l10n-Czech kde-l10n-Danish kde-l10n-Dutch kde-l10n-Estonian kde-l10n-Farsi kde-l10n-Finnish kde-l10n-Galician kde-l10n-Greek kde-l10n-Hebrew kde-l10n-Hungarian kde-l10n-Icelandic kde-l10n-Interlingua kde-l10n-Irish kde-l10n-Kazakh kde-l10n-Khmer kde-l10n-Latvian kde-l10n-Lithuanian kde-l10n-LowSaxon kde-l10n-Norwegian kde-l10n-Norwegian-Nynorsk kde-l10n-Polish kde-l10n-Portuguese kde-l10n-Romanian kde-l10n-Serbian kde-l10n-Slovak kde-l10n-Slovenian kde-l10n-Swedish kde-l10n-Tajik kde-l10n-Thai kde-l10n-Turkish kde-l10n-Ukrainian kde-l10n-Uyghur kde-l10n-Vietnamese kde-l10n-Walloon kde-plasma-networkmanagement kde-plasma-networkmanagement-libreswan kde-plasma-networkmanagement-libs kde-plasma-networkmanagement-mobile kde-print-manager kde-runtime kde-runtime-devel kde-runtime-drkonqi kde-runtime-libs kde-settings kde-settings-ksplash kde-settings-minimal kde-settings-plasma kde-settings-pulseaudio kde-style-oxygen kde-style-phase kde-wallpapers kde-workspace kde-workspace-devel kde-workspace-ksplash-themes kde-workspace-libs kdeaccessibility kdeadmin kdeartwork kdeartwork-screensavers kdeartwork-sounds kdeartwork-wallpapers kdeclassic-cursor-theme kdegraphics kdegraphics-devel kdegraphics-libs kdegraphics-strigi-analyzer kdegraphics-thumbnailers kdelibs kdelibs-apidocs kdelibs-common kdelibs-devel kdelibs-ktexteditor kdemultimedia kdemultimedia-common kdemultimedia-devel kdemultimedia-libs kdenetwork kdenetwork-common kdenetwork-devel kdenetwork-fileshare-samba kdenetwork-kdnssd kdenetwork-kget kdenetwork-kget-libs kdenetwork-kopete kdenetwork-kopete-devel kdenetwork-kopete-libs kdenetwork-krdc kdenetwork-krdc-devel kdenetwork-krdc-libs kdenetwork-krfb kdenetwork-krfb-libs kdepim kdepim-devel kdepim-libs kdepim-runtime kdepim-runtime-libs kdepimlibs kdepimlibs-akonadi kdepimlibs-apidocs kdepimlibs-devel kdepimlibs-kxmlrpcclient kdeplasma-addons kdeplasma-addons-devel kdeplasma-addons-libs kdesdk kdesdk-cervisia kdesdk-common kdesdk-devel kdesdk-dolphin-plugins kdesdk-kapptemplate kdesdk-kapptemplate-template kdesdk-kcachegrind kdesdk-kioslave kdesdk-kmtrace kdesdk-kmtrace-devel kdesdk-kmtrace-libs kdesdk-kompare kdesdk-kompare-devel kdesdk-kompare-libs kdesdk-kpartloader kdesdk-kstartperf kdesdk-kuiviewer kdesdk-lokalize kdesdk-okteta 
kdesdk-okteta-devel kdesdk-okteta-libs kdesdk-poxml kdesdk-scripts kdesdk-strigi-analyzer kdesdk-thumbnailers kdesdk-umbrello kdeutils kdeutils-common kdeutils-minimal kdf kernel-rt-doc kernel-rt-trace kernel-rt-trace-devel kernel-rt-trace-kvm keytool-maven-plugin keytool-maven-plugin-javadoc kgamma kgpg kgreeter-plugins khotkeys khotkeys-libs kiconedit kinfocenter kio_sysinfo kmag kmenuedit kmix kmod-oracleasm kolourpaint kolourpaint-libs konkretcmpi konkretcmpi-devel konkretcmpi-python konsole konsole-part kross-interpreters kross-python kross-ruby kruler ksaneplugin kscreen ksnapshot ksshaskpass ksysguard ksysguard-libs ksysguardd ktimer kwallet kwin kwin-gles kwin-gles-libs kwin-libs kwrite kxml kxml-javadoc lapack64-devel lapack64-static lasso-devel latrace lcms2-utils ldns-doc ldns-python libabw-devel libabw-doc libabw-tools libappindicator libappindicator-devel libappindicator-docs libappstream-glib-builder libappstream-glib-builder-devel libart_lgpl libart_lgpl-devel libasan-static libavc1394-devel libbase-javadoc libblockdev-btrfs libblockdev-btrfs-devel libblockdev-crypto-devel libblockdev-devel libblockdev-dm-devel libblockdev-fs-devel libblockdev-kbd-devel libblockdev-loop-devel libblockdev-lvm-devel libblockdev-mdraid-devel libblockdev-mpath-devel libblockdev-nvdimm-devel libblockdev-part-devel libblockdev-swap-devel libblockdev-utils-devel libblockdev-vdo-devel libbluedevil libbluedevil-devel libbluray-devel libbonobo libbonobo-devel libbonoboui libbonoboui-devel libbytesize-devel libcacard-tools libcap-ng-python libcdr-devel libcdr-doc libcdr-tools libcgroup-devel libchamplain-demos libchewing libchewing-devel libchewing-python libcmis-devel libcmis-tools libcryptui libcryptui-devel libdb-devel-static libdb-java libdb-java-devel libdb-tcl libdb-tcl-devel libdbi libdbi-dbd-mysql libdbi-dbd-pgsql libdbi-dbd-sqlite libdbi-devel libdbi-drivers libdbusmenu-doc libdbusmenu-gtk2 libdbusmenu-gtk2-devel libdbusmenu-gtk3-devel libdhash-devel libdmapsharing-devel libdmmp-devel libdmx-devel libdnet-progs libdnet-python libdnf-devel libdv-tools libdvdnav-devel libeasyfc-devel libeasyfc-gobject-devel libee libee-devel libee-utils libesmtp libesmtp-devel libestr-devel libetonyek-doc libetonyek-tools libevdev-utils libexif-doc libexttextcat-devel libexttextcat-tools libfastjson-devel libfdt libfonts-javadoc libformula-javadoc libfprint-devel libfreehand-devel libfreehand-doc libfreehand-tools libgcab1-devel libgccjit libgdither-devel libgee06 libgee06-devel libgepub libgepub-devel libgfortran-static libgfortran4 libgfortran5 libgit2-devel libglade2 libglade2-devel libGLEWmx libgnat libgnat-devel libgnat-static libgnome libgnome-devel libgnome-keyring-devel libgnomecanvas libgnomecanvas-devel libgnomeui libgnomeui-devel libgo libgo-devel libgo-static libgovirt-devel libgudev-devel libgxim libgxim-devel libgxps-tools libhangul-devel libhbaapi-devel libhif-devel libical-glib libical-glib-devel libical-glib-doc libid3tag libid3tag-devel libiec61883-utils libieee1284-python libimobiledevice-python libimobiledevice-utils libindicator libindicator-devel libindicator-gtk3-devel libindicator-tools libinvm-cim libinvm-cim-devel libinvm-cli libinvm-cli-devel libinvm-i18n libinvm-i18n-devel libiodbc libiodbc-devel libipa_hbac-devel libiptcdata-devel libiptcdata-python libitm-static libixpdimm-cim libixpdimm-core libjpeg-turbo-static libkcddb libkcddb-devel libkcompactdisc libkcompactdisc-devel libkdcraw libkdcraw-devel libkexiv2 libkexiv2-devel libkipi libkipi-devel libkkc-devel libkkc-tools libksane 
libksane-devel libkscreen libkscreen-devel libkworkspace liblayout-javadoc libloader-javadoc liblognorm-devel liblouis-devel liblouis-doc liblouis-utils libmatchbox-devel libmbim-devel libmediaart-devel libmediaart-tests libmnl-static libmodman-devel libmodulemd-devel libmpc-devel libmsn libmsn-devel libmspub-devel libmspub-doc libmspub-tools libmtp-examples libmudflap libmudflap-devel libmudflap-static libmwaw-devel libmwaw-doc libmwaw-tools libmx libmx-devel libmx-docs libndp-devel libnetfilter_cthelper-devel libnetfilter_cttimeout-devel libnftnl-devel libnl libnl-devel libnm-gtk libnm-gtk-devel libntlm libntlm-devel libobjc libodfgen-doc libofa libofa-devel liboil liboil-devel libopenraw-pixbuf-loader liborcus-devel liborcus-doc liborcus-tools libosinfo-devel libosinfo-vala libotf-devel libpagemaker-devel libpagemaker-doc libpagemaker-tools libpinyin-devel libpinyin-tools libpipeline-devel libplist-python libpng-static libpng12-devel libproxy-kde libpst libpst-devel libpst-devel-doc libpst-doc libpst-python libpurple-perl libpurple-tcl libqmi-devel libquadmath-static LibRaw-static librelp-devel libreoffice libreoffice-bsh libreoffice-gdb-debug-support libreoffice-glade libreoffice-librelogo libreoffice-nlpsolver libreoffice-officebean libreoffice-officebean-common libreoffice-postgresql libreoffice-rhino libreofficekit-devel librepo-devel libreport-compat libreport-devel libreport-gtk-devel libreport-web-devel librepository-javadoc librevenge-doc librsvg2-tools libseccomp-devel libselinux-static libsemanage-devel libsemanage-static libserializer-javadoc libsexy libsexy-devel libsmbios-devel libsmi-devel libsndfile-utils libsolv-demo libsolv-devel libsolv-tools libspiro-devel libss-devel libssh2 libsss_certmap-devel libsss_idmap-devel libsss_nss_idmap-devel libsss_simpleifp-devel libstaroffice-devel libstaroffice-doc libstaroffice-tools libstdc++-static libstoragemgmt-devel libstoragemgmt-targetd-plugin libtar-devel libteam-devel libtheora-devel-docs libtiff-static libtimezonemap-devel libtnc libtnc-devel libtranslit libtranslit-devel libtranslit-icu libtranslit-m17n libtsan-static libudisks2-devel libuninameslist-devel libunwind libunwind-devel libusal-devel libusb-static libusbmuxd-utils libuser-devel libvdpau-docs libverto-glib libverto-glib-devel libverto-libevent-devel libverto-tevent libverto-tevent-devel libvirt-cim libvirt-daemon-driver-lxc libvirt-daemon-lxc libvirt-gconfig-devel libvirt-glib-devel libvirt-gobject-devel libvirt-java libvirt-java-devel libvirt-java-javadoc libvirt-login-shell libvirt-snmp libvisio-doc libvisio-tools libvma-devel libvma-utils libvoikko-devel libvpx-utils libwebp-java libwebp-tools libwpd-tools libwpg-tools libwps-tools libwsman-devel libwvstreams libwvstreams-devel libwvstreams-static libxcb-doc libXevie libXevie-devel libXfont libXfont-devel libxml2-static libxslt-python libXvMC-devel libzapojit libzapojit-devel libzmf-devel libzmf-doc libzmf-tools lldpad-devel log4cxx log4cxx-devel log4j-manual lpsolve-devel lua-devel lua-static lvm2-cluster lvm2-python-libs lvm2-sysvinit lz4-static m17n-contrib m17n-contrib-extras m17n-db-devel m17n-db-extras m17n-lib-devel m17n-lib-tools m2crypto malaga-devel man-pages-cs man-pages-es man-pages-es-extra man-pages-fr man-pages-it man-pages-ja man-pages-ko man-pages-pl man-pages-ru man-pages-zh-CN mariadb-bench marisa-devel marisa-perl marisa-python marisa-ruby marisa-tools maven-changes-plugin maven-changes-plugin-javadoc maven-deploy-plugin maven-deploy-plugin-javadoc maven-doxia-module-fo maven-ear-plugin 
maven-ear-plugin-javadoc maven-ejb-plugin maven-ejb-plugin-javadoc maven-error-diagnostics maven-gpg-plugin maven-gpg-plugin-javadoc maven-istack-commons-plugin maven-jarsigner-plugin maven-jarsigner-plugin-javadoc maven-javadoc-plugin maven-javadoc-plugin-javadoc maven-jxr maven-jxr-javadoc maven-osgi maven-osgi-javadoc maven-plugin-jxr maven-project-info-reports-plugin maven-project-info-reports-plugin-javadoc maven-release maven-release-javadoc maven-release-manager maven-release-plugin maven-reporting-exec maven-repository-builder maven-repository-builder-javadoc maven-scm maven-scm-javadoc maven-scm-test maven-shared-jar maven-shared-jar-javadoc maven-site-plugin maven-site-plugin-javadoc maven-verifier-plugin maven-verifier-plugin-javadoc maven-wagon-provider-test maven-wagon-scm maven-war-plugin maven-war-plugin-javadoc mdds-devel meanwhile-devel meanwhile-doc memcached-devel memstomp mesa-demos mesa-libxatracker-devel mesa-private-llvm mesa-private-llvm-devel metacity-devel mgetty mgetty-sendfax mgetty-viewfax mgetty-voice migrationtools minizip minizip-devel mkbootdisk mobile-broadband-provider-info-devel mod_auth_kerb mod_auth_mellon-diagnostics mod_nss mod_revocator ModemManager-vala mono-icon-theme mozjs17 mozjs17-devel mozjs24 mozjs24-devel mpich-3.0-autoload mpich-3.0-doc mpich-3.2-autoload mpich-3.2-doc mpitests-compat-openmpi16 msv-demo msv-msv msv-rngconv msv-xmlgen mvapich2-2.0-devel mvapich2-2.0-doc mvapich2-2.0-psm-devel mvapich2-2.2-devel mvapich2-2.2-doc mvapich2-2.2-psm-devel mvapich2-2.2-psm2-devel mvapich23-devel mvapich23-doc mvapich23-psm-devel mvapich23-psm2-devel nagios-plugins-bacula nasm nasm-doc nasm-rdoff ncurses-static nekohtml nekohtml-demo nekohtml-javadoc nepomuk-core nepomuk-core-devel nepomuk-core-libs nepomuk-widgets nepomuk-widgets-devel net-snmp-gui net-snmp-perl net-snmp-python net-snmp-sysvinit netsniff-ng NetworkManager-glib NetworkManager-glib-devel newt-static nfsometer nfstest nhn-nanum-brush-fonts nhn-nanum-fonts-common nhn-nanum-myeongjo-fonts nhn-nanum-pen-fonts nmap-frontend nss_compat_ossl nss_compat_ossl-devel nss-pem nss-pkcs11-devel ntp-doc ntp-perl nuvola-icon-theme nuxwdog nuxwdog-client-java nuxwdog-client-perl nuxwdog-devel objectweb-anttask objectweb-anttask-javadoc objectweb-asm ocaml-brlapi ocaml-calendar ocaml-calendar-devel ocaml-csv ocaml-csv-devel ocaml-curses ocaml-curses-devel ocaml-docs ocaml-emacs ocaml-fileutils ocaml-fileutils-devel ocaml-gettext ocaml-gettext-devel ocaml-libvirt ocaml-libvirt-devel ocaml-ocamlbuild-doc ocaml-source ocaml-x11 ocaml-xml-light ocaml-xml-light-devel oci-register-machine okular okular-devel okular-libs okular-part opa-libopamgt-devel opal opal-devel open-vm-tools-devel open-vm-tools-test opencc-tools openchange-client openchange-devel openchange-devel-docs opencv-devel-docs opencv-python OpenEXR openhpi-devel openjade openjpeg-devel openjpeg-libs openldap-servers openldap-servers-sql openlmi openlmi-account openlmi-account-doc openlmi-fan openlmi-fan-doc openlmi-hardware openlmi-hardware-doc openlmi-indicationmanager-libs openlmi-indicationmanager-libs-devel openlmi-journald openlmi-journald-doc openlmi-logicalfile openlmi-logicalfile-doc openlmi-networking openlmi-networking-doc openlmi-pcp openlmi-powermanagement openlmi-powermanagement-doc openlmi-providers openlmi-providers-devel openlmi-python-base openlmi-python-providers openlmi-python-test openlmi-realmd openlmi-realmd-doc openlmi-service openlmi-service-doc openlmi-software openlmi-software-doc openlmi-storage openlmi-storage-doc 
openlmi-tools openlmi-tools-doc openobex openobex-apps openobex-devel openscap-containers openscap-engine-sce-devel openslp-devel openslp-server opensm-static opensp openssh-server-sysvinit openssl-static openssl098e openwsman-perl openwsman-ruby oprofile-devel oprofile-gui oprofile-jit optipng ORBit2 ORBit2-devel orc-doc ortp ortp-devel oscilloscope oxygen-cursor-themes oxygen-gtk oxygen-gtk2 oxygen-gtk3 oxygen-icon-theme PackageKit-yum-plugin pakchois-devel pam_krb5 pam_pkcs11 pam_snapper pango-tests paps-devel passivetex pax pciutils-devel-static pcp-collector pcp-monitor pcre-tools pcre2-static pcre2-tools pentaho-libxml-javadoc pentaho-reporting-flow-engine-javadoc perl-AppConfig perl-Archive-Extract perl-B-Keywords perl-Browser-Open perl-Business-ISBN perl-Business-ISBN-Data perl-CGI-Session perl-Class-Load perl-Class-Load-XS perl-Class-Singleton perl-Config-Simple perl-Config-Tiny perl-Convert-ASN1 perl-CPAN-Changes perl-CPANPLUS perl-CPANPLUS-Dist-Build perl-Crypt-CBC perl-Crypt-DES perl-Crypt-OpenSSL-Bignum perl-Crypt-OpenSSL-Random perl-Crypt-OpenSSL-RSA perl-Crypt-PasswdMD5 perl-Crypt-SSLeay perl-CSS-Tiny perl-Data-Peek perl-DateTime perl-DateTime-Format-DateParse perl-DateTime-Locale perl-DateTime-TimeZone perl-DBD-Pg-tests perl-DBIx-Simple perl-Devel-Cover perl-Devel-Cycle perl-Devel-EnforceEncapsulation perl-Devel-Leak perl-Devel-Symdump perl-Digest-SHA1 perl-Email-Address perl-FCGI perl-File-Find-Rule-Perl perl-File-Inplace perl-Font-AFM perl-Font-TTF perl-FreezeThaw perl-GD perl-GD-Barcode perl-Hook-LexWrap perl-HTML-Format perl-HTML-FormatText-WithLinks perl-HTML-FormatText-WithLinks-AndTables perl-HTML-Tree perl-HTTP-Daemon perl-Image-Base perl-Image-Info perl-Image-Xbm perl-Image-Xpm perl-Inline perl-Inline-Files perl-IO-CaptureOutput perl-IO-stringy perl-JSON-tests perl-LDAP perl-libxml-perl perl-List-MoreUtils perl-Locale-Maketext-Gettext perl-Locale-PO perl-Log-Message perl-Log-Message-Simple perl-Mail-DKIM perl-Mixin-Linewise perl-Module-Implementation perl-Module-Manifest perl-Module-Signature perl-Net-Daemon perl-Net-DNS-Nameserver perl-Net-DNS-Resolver-Programmable perl-Net-LibIDN perl-Net-Telnet perl-Newt perl-Object-Accessor perl-Object-Deadly perl-Package-Constants perl-Package-DeprecationManager perl-Package-Stash perl-Package-Stash-XS perl-PAR-Dist perl-Parallel-Iterator perl-Params-Validate perl-Parse-CPAN-Meta perl-Parse-RecDescent perl-Perl-Critic perl-Perl-Critic-More perl-Perl-MinimumVersion perl-Perl4-CoreLibs perl-PlRPC perl-Pod-Coverage perl-Pod-Coverage-TrustPod perl-Pod-Eventual perl-Pod-POM perl-Pod-Spell perl-PPI perl-PPI-HTML perl-PPIx-Regexp perl-PPIx-Utilities perl-Probe-Perl perl-Readonly-XS perl-SGMLSpm perl-Sort-Versions perl-String-Format perl-String-Similarity perl-Syntax-Highlight-Engine-Kate perl-Task-Weaken perl-Template-Toolkit perl-Term-UI perl-Test-ClassAPI perl-Test-CPAN-Meta perl-Test-DistManifest perl-Test-EOL perl-Test-HasVersion perl-Test-Inter perl-Test-Manifest perl-Test-Memory-Cycle perl-Test-MinimumVersion perl-Test-MockObject perl-Test-NoTabs perl-Test-Object perl-Test-Output perl-Test-Perl-Critic perl-Test-Perl-Critic-Policy perl-Test-Pod perl-Test-Pod-Coverage perl-Test-Portability-Files perl-Test-Script perl-Test-Spelling perl-Test-SubCalls perl-Test-Synopsis perl-Test-Tester perl-Test-Vars perl-Test-Without-Module perl-Text-CSV_XS perl-Text-Iconv perl-Tree-DAG_Node perl-Unicode-Map8 perl-Unicode-String perl-UNIVERSAL-can perl-UNIVERSAL-isa perl-Version-Requirements perl-WWW-Curl perl-XML-Dumper 
perl-XML-Filter-BufferText perl-XML-Grove perl-XML-Handler-YAWriter perl-XML-LibXSLT perl-XML-SAX-Writer perl-XML-TreeBuilder perl-XML-Twig perl-XML-Writer perl-XML-XPathEngine perl-YAML-Tiny perltidy phonon phonon-backend-gstreamer phonon-devel php-pecl-memcache php-pspell pidgin-perl pinentry-qt pinentry-qt4 pki-javadoc plasma-scriptengine-python plasma-scriptengine-ruby plexus-digest plexus-digest-javadoc plexus-mail-sender plexus-mail-sender-javadoc plexus-tools-pom plymouth-devel pm-utils pm-utils-devel pngcrush pngnq polkit-kde polkit-qt polkit-qt-devel polkit-qt-doc poppler-demos poppler-qt poppler-qt-devel popt-static postfix-sysvinit pothana2000-fonts powerpc-utils-python pprof pps-tools pptp-setup procps-ng-devel protobuf-emacs protobuf-emacs-el protobuf-java protobuf-javadoc protobuf-lite-devel protobuf-lite-static protobuf-python protobuf-static protobuf-vim psutils psutils-perl pth-devel ptlib ptlib-devel publican publican-common-db5-web publican-common-web publican-doc publican-redhat pulseaudio-esound-compat pulseaudio-module-gconf pulseaudio-module-zeroconf pulseaudio-qpaeq pygpgme pygtk2-libglade pykde4 pykde4-akonadi pykde4-devel pyldb-devel pyliblzma PyOpenGL PyOpenGL-Tk pyOpenSSL-doc pyorbit pyorbit-devel PyPAM pyparsing-doc PyQt4 PyQt4-devel pytalloc-devel python-appindicator python-beaker python-cffi-doc python-cherrypy python-criu python-debug python-deltarpm python-dtopt python-fpconst python-gpod python-gudev python-inotify-examples python-ipaddr python-IPy python-isodate python-isomd5sum python-kerberos python-kitchen python-kitchen-doc python-krbV python-libteam python-lxml-docs python-matplotlib python-matplotlib-doc python-matplotlib-qt4 python-matplotlib-tk python-memcached python-mutagen python-paramiko python-paramiko-doc python-paste python-pillow-devel python-pillow-doc python-pillow-qt python-pillow-sane python-pillow-tk python-rados python-rbd python-reportlab-docs python-requests-kerberos python-rtslib-doc python-setproctitle python-slip-gtk python-smbc python-smbc-doc python-smbios python-sphinx-doc python-tempita python-tornado python-tornado-doc python-twisted-core python-twisted-core-doc python-twisted-web python-twisted-words python-urlgrabber python-volume_key python-webob python-webtest python-which python-zope-interface python2-caribou python2-futures python2-gexiv2 python2-smartcols python2-solv python2-subprocess32 qca-ossl qca2 qca2-devel qdox qimageblitz qimageblitz-devel qimageblitz-examples qjson qjson-devel qpdf-devel qt qt-assistant qt-config qt-demos qt-devel qt-devel-private qt-doc qt-examples qt-mysql qt-odbc qt-postgresql qt-qdbusviewer qt-qvfb qt-settings qt-x11 qt3 qt3-config qt3-designer qt3-devel qt3-devel-docs qt3-MySQL qt3-ODBC qt3-PostgreSQL qt5-qt3d-doc qt5-qtbase-doc qt5-qtcanvas3d-doc qt5-qtconnectivity-doc qt5-qtdeclarative-doc qt5-qtenginio qt5-qtenginio-devel qt5-qtenginio-doc qt5-qtenginio-examples qt5-qtgraphicaleffects-doc qt5-qtimageformats-doc qt5-qtlocation-doc qt5-qtmultimedia-doc qt5-qtquickcontrols-doc qt5-qtquickcontrols2-doc qt5-qtscript-doc qt5-qtsensors-doc qt5-qtserialbus-devel qt5-qtserialbus-doc qt5-qtserialport-doc qt5-qtsvg-doc qt5-qttools-doc qt5-qtwayland-doc qt5-qtwebchannel-doc qt5-qtwebsockets-doc qt5-qtx11extras-doc qt5-qtxmlpatterns-doc quagga quagga-contrib quota-devel qv4l2 rarian-devel rcs rdate rdist readline-static realmd-devel-docs Red_Hat_Enterprise_Linux-Release_Notes-7-as-IN Red_Hat_Enterprise_Linux-Release_Notes-7-bn-IN Red_Hat_Enterprise_Linux-Release_Notes-7-de-DE 
Red_Hat_Enterprise_Linux-Release_Notes-7-en-US Red_Hat_Enterprise_Linux-Release_Notes-7-es-ES Red_Hat_Enterprise_Linux-Release_Notes-7-fr-FR Red_Hat_Enterprise_Linux-Release_Notes-7-gu-IN Red_Hat_Enterprise_Linux-Release_Notes-7-hi-IN Red_Hat_Enterprise_Linux-Release_Notes-7-it-IT Red_Hat_Enterprise_Linux-Release_Notes-7-ja-JP Red_Hat_Enterprise_Linux-Release_Notes-7-kn-IN Red_Hat_Enterprise_Linux-Release_Notes-7-ko-KR Red_Hat_Enterprise_Linux-Release_Notes-7-ml-IN Red_Hat_Enterprise_Linux-Release_Notes-7-mr-IN Red_Hat_Enterprise_Linux-Release_Notes-7-or-IN Red_Hat_Enterprise_Linux-Release_Notes-7-pa-IN Red_Hat_Enterprise_Linux-Release_Notes-7-pt-BR Red_Hat_Enterprise_Linux-Release_Notes-7-ru-RU Red_Hat_Enterprise_Linux-Release_Notes-7-ta-IN Red_Hat_Enterprise_Linux-Release_Notes-7-te-IN Red_Hat_Enterprise_Linux-Release_Notes-7-zh-CN Red_Hat_Enterprise_Linux-Release_Notes-7-zh-TW redhat-access-plugin-ipa redhat-bookmarks redhat-lsb-supplemental redhat-lsb-trialuse redhat-upgrade-dracut redhat-upgrade-dracut-plymouth redhat-upgrade-tool redland-mysql redland-pgsql redland-virtuoso regexp relaxngcc rest-devel resteasy-base-jettison-provider resteasy-base-tjws rhdb-utils rhino rhino-demo rhino-javadoc rhino-manual rhythmbox-devel rngom rngom-javadoc rp-pppoe rrdtool-php rrdtool-python rsh rsh-server rsyslog-libdbi rsyslog-udpspoof rtcheck rtctl ruby-tcltk rubygem-net-http-persistent rubygem-net-http-persistent-doc rubygem-thor rubygem-thor-doc rusers rusers-server rwho sac-javadoc samba-dc samba-devel satyr-devel satyr-python saxon saxon-demo saxon-javadoc saxon-manual saxon-scripts sbc-devel sblim-cim-client2 sblim-cim-client2-javadoc sblim-cim-client2-manual sblim-cmpi-base sblim-cmpi-base-devel sblim-cmpi-base-test sblim-cmpi-fsvol sblim-cmpi-fsvol-devel sblim-cmpi-fsvol-test sblim-cmpi-network sblim-cmpi-network-devel sblim-cmpi-network-test sblim-cmpi-nfsv3 sblim-cmpi-nfsv3-test sblim-cmpi-nfsv4 sblim-cmpi-nfsv4-test sblim-cmpi-params sblim-cmpi-params-test sblim-cmpi-sysfs sblim-cmpi-sysfs-test sblim-cmpi-syslog sblim-cmpi-syslog-test sblim-gather sblim-gather-devel sblim-gather-provider sblim-gather-test sblim-indication_helper sblim-indication_helper-devel sblim-smis-hba sblim-testsuite sblim-wbemcli scannotation scannotation-javadoc scpio screen SDL-static seahorse-nautilus seahorse-sharing sendmail-sysvinit setools-devel setools-gui setools-libs-tcl setuptool shared-desktop-ontologies shared-desktop-ontologies-devel shim-unsigned-ia32 shim-unsigned-x64 sisu sisu-parent slang-slsh slang-static smbios-utils smbios-utils-bin smbios-utils-python snakeyaml snakeyaml-javadoc snapper snapper-devel snapper-libs sntp SOAPpy soprano soprano-apidocs soprano-devel source-highlight-devel sox sox-devel speex-tools spice-xpi sqlite-tcl squid-migration-script squid-sysvinit sssd-libwbclient-devel sssd-polkit-rules stax2-api stax2-api-javadoc strigi strigi-devel strigi-libs strongimcv subversion-kde subversion-python subversion-ruby sudo-devel suitesparse-doc suitesparse-static supermin-helper svgpart svrcore svrcore-devel sweeper syslinux-devel syslinux-perl system-config-date system-config-date-docs system-config-firewall system-config-firewall-base system-config-firewall-tui system-config-keyboard system-config-keyboard-base system-config-language system-config-printer system-config-users-docs system-switch-java systemd-sysv t1lib t1lib-apps t1lib-devel t1lib-static t1utils taglib-doc talk talk-server tang-nagios targetd tcl-pgtcl tclx tclx-devel tcp_wrappers tcp_wrappers-devel tcp_wrappers-libs 
teamd-devel teckit-devel telepathy-farstream telepathy-farstream-devel telepathy-filesystem telepathy-gabble telepathy-glib telepathy-glib-devel telepathy-glib-vala telepathy-haze telepathy-logger telepathy-logger-devel telepathy-mission-control telepathy-mission-control-devel telepathy-salut tex-preview texinfo texlive-collection-documentation-base texlive-mh texlive-mh-doc texlive-misc texlive-thailatex texlive-thailatex-doc tix-doc tncfhh tncfhh-devel tncfhh-examples tncfhh-libs tncfhh-utils tog-pegasus-test tokyocabinet-devel-doc tomcat tomcat-admin-webapps tomcat-docs-webapp tomcat-el-2.2-api tomcat-javadoc tomcat-jsp-2.2-api tomcat-jsvc tomcat-lib tomcat-servlet-3.0-api tomcat-webapps totem-devel totem-pl-parser-devel tracker-devel tracker-docs tracker-needle tracker-preferences trang trousers-static txw2 txw2-javadoc unique3 unique3-devel unique3-docs uriparser uriparser-devel usbguard-devel usbredir-server ustr ustr-debug ustr-debug-static ustr-devel ustr-static uuid-c++ uuid-c++-devel uuid-dce uuid-dce-devel uuid-perl uuid-php v4l-utils v4l-utils-devel-tools vala-doc valadoc valadoc-devel valgrind-openmpi velocity-demo velocity-javadoc velocity-manual vemana2000-fonts vigra vigra-devel virtuoso-opensource virtuoso-opensource-utils vlgothic-p-fonts vsftpd-sysvinit vte3 vte3-devel wayland-doc webkitgtk3 webkitgtk3-devel webkitgtk3-doc webkitgtk4-doc webrtc-audio-processing-devel weld-parent whois woodstox-core woodstox-core-javadoc wordnet wordnet-browser wordnet-devel wordnet-doc ws-commons-util ws-commons-util-javadoc ws-jaxme ws-jaxme-javadoc ws-jaxme-manual wsdl4j wsdl4j-javadoc wvdial x86info xchat-tcl xdg-desktop-portal-devel xerces-c xerces-c-devel xerces-c-doc xerces-j2-demo xerces-j2-javadoc xferstats xguest xhtml2fo-style-xsl xhtml2ps xisdnload xml-commons-apis-javadoc xml-commons-apis-manual xml-commons-apis12 xml-commons-apis12-javadoc xml-commons-apis12-manual xml-commons-resolver-javadoc xmlgraphics-commons xmlgraphics-commons-javadoc xmlrpc-c-apps xmlrpc-client xmlrpc-common xmlrpc-javadoc xmlrpc-server xmlsec1-gcrypt-devel xmlsec1-nss-devel xmlto-tex xmlto-xhtml xmltoman xorg-x11-apps xorg-x11-drv-intel-devel xorg-x11-drv-keyboard xorg-x11-drv-mouse xorg-x11-drv-mouse-devel xorg-x11-drv-openchrome xorg-x11-drv-openchrome-devel xorg-x11-drv-synaptics xorg-x11-drv-synaptics-devel xorg-x11-drv-vmmouse xorg-x11-drv-void xorg-x11-server-source xorg-x11-xkb-extras xpp3 xpp3-javadoc xpp3-minimal xsettings-kde xstream xstream-javadoc xulrunner xulrunner-devel xz-compat-libs yelp-xsl-devel yum-langpacks yum-NetworkManager-dispatcher yum-plugin-filter-data yum-plugin-fs-snapshot yum-plugin-keys yum-plugin-list-data yum-plugin-local yum-plugin-merge-conf yum-plugin-ovl yum-plugin-post-transaction-actions yum-plugin-pre-transaction-actions yum-plugin-protectbase yum-plugin-ps yum-plugin-rpm-warm-cache yum-plugin-show-leaves yum-plugin-upgrade-helper yum-plugin-verify yum-updateonboot 9.2. Deprecated Device Drivers The following device drivers continue to be supported until the end of life of Red Hat Enterprise Linux 7 but will likely not be supported in future major releases of this product and are not recommended for new deployments. 
3w-9xxx 3w-sas aic79xx aoe arcmsr ata drivers: acard-ahci sata_mv sata_nv sata_promise sata_qstor sata_sil sata_sil24 sata_sis sata_svw sata_sx4 sata_uli sata_via sata_vsc bfa cxgb3 cxgb3i e1000 floppy hptiop initio isci iw_cxgb3 mptbase mptctl mptsas mptscsih mptspi mtip32xx mvsas mvumi OSD drivers: osd libosd osst pata drivers: pata_acpi pata_ali pata_amd pata_arasan_cf pata_artop pata_atiixp pata_atp867x pata_cmd64x pata_cs5536 pata_hpt366 pata_hpt37x pata_hpt3x2n pata_hpt3x3 pata_it8213 pata_it821x pata_jmicron pata_marvell pata_netcell pata_ninja32 pata_oldpiix pata_pdc2027x pata_pdc202xx_old pata_piccolo pata_rdc pata_sch pata_serverworks pata_sil680 pata_sis pata_via pdc_adma pm80xx(pm8001) pmcraid qla3xxx qlcnic qlge stex sx8 tulip ufshcd wireless drivers: carl9170 iwl4965 iwl3945 mwl8k rt73usb rt61pci rtl8187 wil6210 9.3. Deprecated Adapters The following adapters continue to be supported until the end of life of Red Hat Enterprise Linux 7 but will likely not be supported in future major releases of this product and are not recommended for new deployments. Other adapters from the mentioned drivers that are not listed here remain unchanged. PCI IDs are in the format of vendor:device:subvendor:subdevice . If the subdevice or subvendor:subdevice entry is not listed, devices with any values of such missing entries have been deprecated. To check the PCI IDs of the hardware on your system, run the lspci -nn command. The following adapters from the aacraid driver have been deprecated: PERC 2/Si (Iguana/PERC2Si), PCI ID 0x1028:0x0001:0x1028:0x0001 PERC 3/Di (Opal/PERC3Di), PCI ID 0x1028:0x0002:0x1028:0x0002 PERC 3/Si (SlimFast/PERC3Si), PCI ID 0x1028:0x0003:0x1028:0x0003 PERC 3/Di (Iguana FlipChip/PERC3DiF), PCI ID 0x1028:0x0004:0x1028:0x00d0 PERC 3/Di (Viper/PERC3DiV), PCI ID 0x1028:0x0002:0x1028:0x00d1 PERC 3/Di (Lexus/PERC3DiL), PCI ID 0x1028:0x0002:0x1028:0x00d9 PERC 3/Di (Jaguar/PERC3DiJ), PCI ID 0x1028:0x000a:0x1028:0x0106 PERC 3/Di (Dagger/PERC3DiD), PCI ID 0x1028:0x000a:0x1028:0x011b PERC 3/Di (Boxster/PERC3DiB), PCI ID 0x1028:0x000a:0x1028:0x0121 catapult, PCI ID 0x9005:0x0283:0x9005:0x0283 tomcat, PCI ID 0x9005:0x0284:0x9005:0x0284 Adaptec 2120S (Crusader), PCI ID 0x9005:0x0285:0x9005:0x0286 Adaptec 2200S (Vulcan), PCI ID 0x9005:0x0285:0x9005:0x0285 Adaptec 2200S (Vulcan-2m), PCI ID 0x9005:0x0285:0x9005:0x0287 Legend S220 (Legend Crusader), PCI ID 0x9005:0x0285:0x17aa:0x0286 Legend S230 (Legend Vulcan), PCI ID 0x9005:0x0285:0x17aa:0x0287 Adaptec 3230S (Harrier), PCI ID 0x9005:0x0285:0x9005:0x0288 Adaptec 3240S (Tornado), PCI ID 0x9005:0x0285:0x9005:0x0289 ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028a ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028b ASR-2230S + ASR-2230SLP PCI-X (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028c ASR-2130S (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028d AAR-2820SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029b AAR-2620SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029c AAR-2420SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029d ICP9024RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029e ICP9014RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029f ICP9047MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a0 ICP9087MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a1 ICP5445AU (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a3 ICP9085LI (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x02a4 ICP5085BR (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x02a5 ICP9067MA (Intruder-6), PCI ID 0x9005:0x0286:0x9005:0x02a6 Themisto Jupiter 
Platform, PCI ID 0x9005:0x0287:0x9005:0x0800 Themisto Jupiter Platform, PCI ID 0x9005:0x0200:0x9005:0x0200 Callisto Jupiter Platform, PCI ID 0x9005:0x0286:0x9005:0x0800 ASR-2020SA SATA PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028e ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028f AAR-2410SA PCI SATA 4ch (Jaguar II), PCI ID 0x9005:0x0285:0x9005:0x0290 CERC SATA RAID 2 PCI SATA 6ch (DellCorsair), PCI ID 0x9005:0x0285:0x9005:0x0291 AAR-2810SA PCI SATA 8ch (Corsair-8), PCI ID 0x9005:0x0285:0x9005:0x0292 AAR-21610SA PCI SATA 16ch (Corsair-16), PCI ID 0x9005:0x0285:0x9005:0x0293 ESD SO-DIMM PCI-X SATA ZCR (Prowler), PCI ID 0x9005:0x0285:0x9005:0x0294 AAR-2610SA PCI SATA 6ch, PCI ID 0x9005:0x0285:0x103C:0x3227 ASR-2240S (SabreExpress), PCI ID 0x9005:0x0285:0x9005:0x0296 ASR-4005, PCI ID 0x9005:0x0285:0x9005:0x0297 IBM 8i (AvonPark), PCI ID 0x9005:0x0285:0x1014:0x02F2 IBM 8i (AvonPark Lite), PCI ID 0x9005:0x0285:0x1014:0x0312 IBM 8k/8k-l8 (Aurora), PCI ID 0x9005:0x0286:0x1014:0x9580 IBM 8k/8k-l4 (Aurora Lite), PCI ID 0x9005:0x0286:0x1014:0x9540 ASR-4000 (BlackBird), PCI ID 0x9005:0x0285:0x9005:0x0298 ASR-4800SAS (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x0299 ASR-4805SAS (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x029a ASR-3800 (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a2 Perc 320/DC, PCI ID 0x9005:0x0285:0x1028:0x0287 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0365 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0364 Dell PERC2/QC, PCI ID 0x1011:0x0046:0x9005:0x1364 HP NetRAID-4M, PCI ID 0x1011:0x0046:0x103c:0x10c2 Dell Catchall, PCI ID 0x9005:0x0285:0x1028 Legend Catchall, PCI ID 0x9005:0x0285:0x17aa Adaptec Catch All, PCI ID 0x9005:0x0285 Adaptec Rocket Catch All, PCI ID 0x9005:0x0286 Adaptec NEMER/ARK Catch All, PCI ID 0x9005:0x0288 The following adapters from the mpt2sas driver have been deprecated: SAS2004, PCI ID 0x1000:0x0070 SAS2008, PCI ID 0x1000:0x0072 SAS2108_1, PCI ID 0x1000:0x0074 SAS2108_2, PCI ID 0x1000:0x0076 SAS2108_3, PCI ID 0x1000:0x0077 SAS2116_1, PCI ID 0x1000:0x0064 SAS2116_2, PCI ID 0x1000:0x0065 SSS6200, PCI ID 0x1000:0x007E The following adapters from the megaraid_sas driver have been deprecated: Dell PERC5, PCI ID 0x1028:0x0015 SAS1078R, PCI ID 0x1000:0x0060 SAS1078DE, PCI ID 0x1000:0x007C SAS1064R, PCI ID 0x1000:0x0411 VERDE_ZCR, PCI ID 0x1000:0x0413 SAS1078GEN2, PCI ID 0x1000:0x0078 SAS0079GEN2, PCI ID 0x1000:0x0079 SAS0073SKINNY, PCI ID 0x1000:0x0073 SAS0071SKINNY, PCI ID 0x1000:0x0071 The following adapters from the qla2xxx driver have been deprecated: ISP24xx, PCI ID 0x1077:0x2422 ISP24xx, PCI ID 0x1077:0x2432 ISP2422, PCI ID 0x1077:0x5422 QLE220, PCI ID 0x1077:0x5432 QLE81xx, PCI ID 0x1077:0x8001 QLE10000, PCI ID 0x1077:0xF000 QLE84xx, PCI ID 0x1077:0x8044 QLE8000, PCI ID 0x1077:0x8432 QLE82xx, PCI ID 0x1077:0x8021 The following adapters from the qla4xxx driver have been deprecated: QLOGIC_ISP8022, PCI ID 0x1077:0x8022 QLOGIC_ISP8324, PCI ID 0x1077:0x8032 QLOGIC_ISP8042, PCI ID 0x1077:0x8042 The following adapters from the be2iscsi driver have been deprecated: BladeEngine 2 (BE2) Devices BladeEngine2 10Gb iSCSI Initiator (generic), PCI ID 0x19a2:0x212 OneConnect OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x19a2:0x702 OCe10100 BE2 adapter family, PCI ID 0x19a2:0x703 BladeEngine 3 (BE3) Devices OneConnect TOMCAT iSCSI, PCI ID 0x19a2:0x0712 BladeEngine3 iSCSI, PCI ID 0x19a2:0x0222 The following Ethernet adapters controlled by the be2net driver have been deprecated: BladeEngine 2 
(BE2) Devices OneConnect TIGERSHARK NIC, PCI ID 0x19a2:0x0700 BladeEngine2 Network Adapter, PCI ID 0x19a2:0x0211 BladeEngine 3 (BE3) Devices OneConnect TOMCAT NIC, PCI ID 0x19a2:0x0710 BladeEngine3 Network Adapter, PCI ID 0x19a2:0x0221 The following adapters from the lpfc driver have been deprecated: BladeEngine 2 (BE2) Devices OneConnect TIGERSHARK FCoE, PCI ID 0x19a2:0x0704 BladeEngine 3 (BE3) Devices OneConnect TOMCAT FCoE, PCI ID 0x19a2:0x0714 Fibre Channel (FC) Devices FIREFLY, PCI ID 0x10df:0x1ae5 PROTEUS_VF, PCI ID 0x10df:0xe100 BALIUS, PCI ID 0x10df:0xe131 PROTEUS_PF, PCI ID 0x10df:0xe180 RFLY, PCI ID 0x10df:0xf095 PFLY, PCI ID 0x10df:0xf098 LP101, PCI ID 0x10df:0xf0a1 TFLY, PCI ID 0x10df:0xf0a5 BSMB, PCI ID 0x10df:0xf0d1 BMID, PCI ID 0x10df:0xf0d5 ZSMB, PCI ID 0x10df:0xf0e1 ZMID, PCI ID 0x10df:0xf0e5 NEPTUNE, PCI ID 0x10df:0xf0f5 NEPTUNE_SCSP, PCI ID 0x10df:0xf0f6 NEPTUNE_DCSP, PCI ID 0x10df:0xf0f7 FALCON, PCI ID 0x10df:0xf180 SUPERFLY, PCI ID 0x10df:0xf700 DRAGONFLY, PCI ID 0x10df:0xf800 CENTAUR, PCI ID 0x10df:0xf900 PEGASUS, PCI ID 0x10df:0xf980 THOR, PCI ID 0x10df:0xfa00 VIPER, PCI ID 0x10df:0xfb00 LP10000S, PCI ID 0x10df:0xfc00 LP11000S, PCI ID 0x10df:0xfc10 LPE11000S, PCI ID 0x10df:0xfc20 PROTEUS_S, PCI ID 0x10df:0xfc50 HELIOS, PCI ID 0x10df:0xfd00 HELIOS_SCSP, PCI ID 0x10df:0xfd11 HELIOS_DCSP, PCI ID 0x10df:0xfd12 ZEPHYR, PCI ID 0x10df:0xfe00 HORNET, PCI ID 0x10df:0xfe05 ZEPHYR_SCSP, PCI ID 0x10df:0xfe11 ZEPHYR_DCSP, PCI ID 0x10df:0xfe12 Lancer FCoE CNA Devices OCe15104-FM, PCI ID 0x10df:0xe260 OCe15102-FM, PCI ID 0x10df:0xe260 OCm15108-F-P, PCI ID 0x10df:0xe260 9.4. Other Deprecated Functionality Python 2 has been deprecated In the major release, RHEL 8, Python 3.6 is the default Python implementation, and only limited support for Python 2.7 is provided. See the Conservative Python 3 Porting Guide for information on how to migrate large code bases to Python 3 . LVM libraries and LVM Python bindings have been deprecated The lvm2app library and LVM Python bindings, which are provided by the lvm2-python-libs package, have been deprecated. Red Hat recommends the following solutions instead: The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python version 3. The LVM command-line utilities with JSON formatting. This formatting has been available since the lvm2 package version 2.02.158. The libblockdev library for C and C++. Mirrored mirror log has been deprecated in LVM The mirrored mirror log feature of mirrored LVM volumes has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support creating or activating LVM volumes with a mirrored mirror log. The recommended replacements are: RAID1 LVM volumes. The main advantage of RAID1 volumes is their ability to work even in degraded mode and to recover after a transient failure. For information on converting mirrored volumes to RAID1, see the Converting a Mirrored LVM Device to a RAID1 Device section in the LVM Administration guide. Disk mirror log. To convert a mirrored mirror log to disk mirror log, use the following command: lvconvert --mirrorlog disk my_vg/my_lv . The clvmd daemon has been deprecated The clvmd daemon for managing shared storage devices has been deprecated. A future major release of Red Hat Enterprise linux will instead use the lvmlockd daemon. The lvmetad daemon has been deprecated The lvmetad daemon for caching metadata has been deprecated. In a future major release of Red Hat Enterprise Linux, LVM will always read metadata from disk. 
Previously, autoactivation of logical volumes was indirectly tied to the use_lvmetad setting in the lvm.conf configuration file. The correct way to disable autoactivation continues to be setting auto_activation_volume_list=[] (an empty list) in the lvm.conf file. The sap-hana-vmware Tuned profile has been deprecated The sap-hana-vmware Tuned profile has been deprecated. For backward compatibility, this profile is still provided in the tuned-profiles-sap-hana package, but the profile will be removed in future major release of Red Hat Enterprise Linux. The recommended replacement is the sap-hana Tuned profile. Deprecated packages related to Identity Management and security The following packages have been deprecated and will not be included in a future major release of Red Hat Enterprise Linux: Deprecated packages Proposed replacement package or product authconfig authselect pam_pkcs11 sssd [a] pam_krb5 sssd openldap-servers Depending on the use case, migrate to Identity Management included in Red Hat Enterprise Linux; or to Red Hat Directory Server. [b] mod_auth_kerb mod_auth_gssapi python-kerberos python-krbV python-gssapi python-requests-kerberos python-requests-gssapi hesiod No replacement available. mod_nss mod_ssl mod_revocator No replacement available. [a] System Security Services Daemon (SSSD) contains enhanced smart card functionality. [b] Red Hat Directory Server requires a valid Directory Server subscription. For details, see also What is the support status of the LDAP-server shipped with Red Hat Enterprise Linux? in Red Hat Knowledgebase. The Clevis HTTP pin has been deprecated The Clevis HTTP pin has been deprecated and this feature will not be included in the major version of Red Hat Enterprise Linux and will remain out of the distribution until a further notice. crypto-utils has been deprecated The crypto-utils packages have been deprecated, and they will not be available in a future major version of Red Hat Enterprise Linux. You can use tools provided by the openssl , gnutls-utils , and nss-tools packages instead. NSS SEED ciphers have been deprecated The Mozilla Network Security Services ( NSS ) library will not support Transport Layer Security (TLS) cipher suites that use a SEED cipher in a future release. For deployments that rely on SEED ciphers, Red Hat recommends enabling support for other cipher suites. This way, you ensure smooth transitions when NSS will remove support for them. Note that the SEED ciphers are already disabled by default in RHEL. All-numeric user and group names in shadow-utils have been deprecated Creating user and group names consisting purely of numeric characters using the useradd and groupadd commands has been deprecated and will be removed from the system with the major release. Such names can potentially confuse many tools that work with user and group names and user and group ids (which are numbers). 3DES is removed from the Python SSL default cipher list The Triple Data Encryption Standard ( 3DES ) algorithm has been removed from the Python SSL default cipher list. This enables Python applications using SSL to be PCI DSS-compliant. sssd-secrets has been deprecated The sssd-secrets component of the System Security Services Daemon (SSSD) has been deprecated in Red Hat Enterprise Linux 7.6. This is because Custodia, a secrets service provider, available as a Technology Preview, is no longer actively developed. Use other Identity Management tools to store secrets, for example the Vaults. 
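As a rough illustration of the Vault-based alternative mentioned above, the following shell sketch stores and retrieves a secret with the IdM ipa vault commands. The vault name and file paths are placeholders, vaults require the Key Recovery Authority (KRA) component in your IdM deployment, and the exact options available depend on your IdM version, so treat this as an outline rather than a definitive procedure.

```bash
# Create a standard (server-encrypted) vault for the current user; the name is an example.
ipa vault-add my_secrets --type standard

# Archive a secret file into the vault and retrieve a copy of it later.
ipa vault-archive my_secrets --in /path/to/secret.txt
ipa vault-retrieve my_secrets --out /tmp/secret-copy.txt
```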
Support for earlier IdM servers and for IdM replicas at domain level 0 will be limited Red Hat does not plan to support using Identity Management (IdM) servers running Red Hat Enterprise Linux (RHEL) 7.3 and earlier with IdM clients of the major release of RHEL. If you plan to introduce client systems running on the major version of RHEL into a deployment that is currently managed by IdM servers running on RHEL 7.3 or earlier, be aware that you will need to upgrade the servers, moving them to RHEL 7.4 or later. In the major release of RHEL, only domain level 1 replicas will be supported. Before introducing IdM replicas running on the major version of RHEL into an existing deployment, be aware that you will need to upgrade all IdM servers to RHEL 7.4 or later, and change the domain level to 1. Consider planning the upgrade in advance if your deployment will be affected. Bug-fix only support for the nss-pam-ldapd and NIS packages in the major release of Red Hat Enterprise Linux The nss-pam-ldapd packages and packages related to the NIS server will be released in the future major release of Red Hat Enterprise Linux but will receive a limited scope of support. Red Hat will accept bug reports but no new requests for enhancements. Customers are advised to migrate to the following replacement solutions: Affected packages Proposed replacement package or product nss-pam-ldapd sssd ypserv ypbind portmap yp-tools Identity Management in Red Hat Enterprise Linux Use the Go Toolset instead of golang The golang package, previously available in the Optional repository, will no longer receive updates in Red Hat Enterprise Linux 7. Developers are encouraged to use the Go Toolset instead. mesa-private-llvm will be replaced with llvm-private The mesa-private-llvm package, which contains the LLVM-based runtime support for Mesa , will be replaced in a future minor release of Red Hat Enterprise Linux 7 with the llvm-private package. libdbi and libdbi-drivers have been deprecated The libdbi and libdbi-drivers packages will not be included in the Red Hat Enterprise Linux (RHEL) major release. Ansible deprecated in the Extras repository Ansible and its dependencies will no longer be updated through the Extras repository. Instead, the Red Hat Ansible Engine product has been made available to Red Hat Enterprise Linux subscriptions and will provide access to the official Ansible Engine channel. Customers who have previously installed Ansible and its dependencies from the Extras repository are advised to enable and update from the Ansible Engine channel, or uninstall the packages as future errata will not be provided from the Extras repository. Ansible was previously provided in Extras (for AMD64 and Intel 64 architectures, and IBM POWER, little endian) as a runtime dependency of, and limited in support to, the Red Hat Enterprise Linux (RHEL) System Roles. Ansible Engine is available today for AMD64 and Intel 64 architectures, with IBM POWER, little endian availability coming soon. Note that Ansible in the Extras repository was not a part of the Red Hat Enterprise Linux FIPS validation process. The following packages have been deprecated from the Extras repository: ansible(-doc) libtomcrypt libtommath(-devel) python2-crypto python2-jmespath python-httplib2 python-paramiko(-doc) python-passlib sshpass For more information and guidance, see the Knowledgebase article at https://access.redhat.com/articles/3359651 . Note that Red Hat Enterprise Linux System Roles continue to be distributed though the Extras repository. 
Although Red Hat Enterprise Linux System Roles no longer depend on the ansible package, installing ansible from the Ansible Engine repository is still needed to run playbooks which use Red Hat Enterprise Linux System Roles. signtool has been deprecated and moved to unsupported-tools The signtool tool from the nss packages, which uses insecure signature algorithms, has been deprecated. The signtool executable has been moved to the /usr/lib64/nss/unsupported-tools/ or /usr/lib/nss/unsupported-tools/ directory, depending on the platform. SSL 3.0 and RC4 are disabled by default in NSS Support for the RC4 ciphers in the TLS protocols and the SSL 3.0 protocol is disabled by default in the NSS library. Applications that require RC4 ciphers or SSL 3.0 protocol for interoperability do not work in default system configuration. It is possible to re-enable those algorithms by editing the /etc/pki/nss-legacy/nss-rhel7.config file. To re-enable RC4, remove the :RC4 string from the disallow= list. To re-enable SSL 3.0 change the TLS-VERSION-MIN=tls1.0 option to ssl3.0 . TLS compression support has been removed from nss To prevent security risks, such as the CRIME attack, support for TLS compression in the NSS library has been removed for all TLS versions. This change preserves the API compatibility. Public web CAs are no longer trusted for code signing by default The Mozilla CA certificate trust list distributed with Red Hat Enterprise Linux 7.5 no longer trusts any public web CAs for code signing. As a consequence, any software that uses the related flags, such as NSS or OpenSSL , no longer trusts these CAs for code signing by default. The software continues to fully support code signing trust. Additionally, it is still possible to configure CA certificates as trusted for code signing using system configuration. Sendmail has been deprecated Sendmail has been deprecated in Red Hat Enterprise Linux 7. Customers are advised to use Postfix , which is configured as the default Mail Transfer Agent (MTA). dmraid has been deprecated Since Red Hat Enterprise Linux 7.5, the dmraid packages have been deprecated. It will stay available in Red Hat Enterprise Linux 7 releases but a future major release will no longer support legacy hybrid combined hardware and software RAID host bus adapter (HBA). Automatic loading of DCCP modules through socket layer is now disabled by default For security reasons, automatic loading of the Datagram Congestion Control Protocol (DCCP) kernel modules through socket layer is now disabled by default. This ensures that userspace applications can not maliciously load any modules. All DCCP related modules can still be loaded manually through the modprobe program. The /etc/modprobe.d/dccp-blacklist.conf configuration file for blacklisting the DCCP modules is included in the kernel package. Entries included there can be cleared by editing or removing this file to restore the behavior. Note that any re-installation of the same kernel package or of a different version does not override manual changes. If the file is manually edited or removed, these changes persist across package installations. rsyslog-libdbi has been deprecated The rsyslog-libdbi sub-package, which contains one of the less used rsyslog module, has been deprecated and will not be included in a future major release of Red Hat Enterprise Linux. Removing unused or rarely used modules helps users to conveniently find a database output to use. 
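To make the SSL 3.0 and RC4 re-enablement steps described earlier in this section more concrete, the following sketch applies the documented edits to /etc/pki/nss-legacy/nss-rhel7.config with sed. It assumes the file contains a disallow= list with an :RC4 entry and a TLS-VERSION-MIN=tls1.0 setting, as the note states; back up the file first and only relax these defaults if interoperability truly requires it.

```bash
# Keep a backup before weakening the system-wide NSS legacy policy.
cp /etc/pki/nss-legacy/nss-rhel7.config /etc/pki/nss-legacy/nss-rhel7.config.bak

# Re-enable RC4: drop the ":RC4" entry from the disallow= list.
sed -i 's/:RC4//' /etc/pki/nss-legacy/nss-rhel7.config

# Re-enable SSL 3.0: lower the minimum protocol version.
sed -i 's/TLS-VERSION-MIN=tls1.0/TLS-VERSION-MIN=ssl3.0/' /etc/pki/nss-legacy/nss-rhel7.config
```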
The inputname option of the rsyslog imudp module has been deprecated The inputname option of the imudp module for the rsyslog service has been deprecated. Use the name option instead. SMBv1 is no longer installed with Microsoft Windows 10 and 2016 (updates 1709 and later) Microsoft announced that the Server Message Block version 1 (SMBv1) protocol will no longer be installed with the latest versions of Microsoft Windows and Microsoft Windows Server. Microsoft also recommends users to disable SMBv1 on earlier versions of these products. This update impacts Red Hat customers who operate their systems in a mixed Linux and Windows environment. Red Hat Enterprise Linux 7.1 and earlier support only the SMBv1 version of the protocol. Support for SMBv2 was introduced in Red Hat Enterprise Linux 7.2. For details on how this change affects Red Hat customers, see SMBv1 no longer installed with latest Microsoft Windows 10 and 2016 update (version 1709) in Red Hat Knowledgebase. The -ok option of the tc command has been deprecated The -ok option of the tc command has been deprecated and this feature will not be included in the major version of Red Hat Enterprise Linux. FedFS has been deprecated Federated File System (FedFS) has been deprecated because the upstream FedFS project is no longer being actively maintained. Red Hat recommends migrating FedFS installations to use autofs , which provides more flexible functionality. Btrfs has been deprecated The Btrfs file system has been in Technology Preview state since the initial release of Red Hat Enterprise Linux 6. Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. The Btrfs file system did receive numerous updates from the upstream in Red Hat Enterprise Linux 7.4 and will remain available in the Red Hat Enterprise Linux 7 series. However, this is the last planned update to this feature. tcp_wrappers deprecated The tcp_wrappers package has been deprecated. tcp_wrappers provides a library and a small daemon program that can monitor and filter incoming requests for audit , cyrus-imap , dovecot , nfs-utils , openssh , openldap , proftpd , sendmail , stunnel , syslog-ng , vsftpd , and various other network services. nautilus-open-terminal replaced with gnome-terminal-nautilus Since Red Hat Enterprise Linux 7.3, the nautilus-open-terminal package has been deprecated and replaced with the gnome-terminal-nautilus package. This package provides a Nautilus extension that adds the Open in Terminal option to the right-click context menu in Nautilus. nautilus-open-terminal is replaced by gnome-terminal-nautilus during the system upgrade. sslwrap() removed from Python The sslwrap() function has been removed from Python 2.7 . After the 466 Python Enhancement Proposal was implemented, using this function resulted in a segmentation fault. The removal is consistent with upstream. Red Hat recommends using the ssl.SSLContext class and the ssl.SSLContext.wrap_socket() function instead. Most applications can simply use the ssl.create_default_context() function, which creates a context with secure default settings. The default context uses the system's default trust store, too. Symbols from libraries linked as dependencies no longer resolved by ld Previously, the ld linker resolved any symbols present in any linked library, even if some libraries were linked only implicitly as dependencies of other libraries. 
This allowed developers to use symbols from the implicitly linked libraries in application code and omit explicitly specifying these libraries for linking. For security reasons, ld has been changed to not resolve references to symbols in libraries linked implicitly as dependencies. As a result, linking with ld fails when application code attempts to use symbols from libraries not declared for linking and linked only implicitly as dependencies. To use symbols from libraries linked as dependencies, developers must explicitly link against these libraries as well. To restore the behavior of ld , use the -copy-dt-needed-entries command-line option. (BZ# 1292230 ) Windows guest virtual machine support limited As of Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC). libnetlink is deprecated The libnetlink library contained in the iproute-devel package has been deprecated. The user should use the libnl and libmnl libraries instead. S3 and S4 power management states for KVM have been deprecated Native KVM support for the S3 (suspend to RAM) and S4 (suspend to disk) power management states has been discontinued. This feature was previously available as a Technology Preview. The Certificate Server plug-in udnPwdDirAuth is discontinued The udnPwdDirAuth authentication plug-in for the Red Hat Certificate Server was removed in Red Hat Enterprise Linux 7.3. Profiles using the plug-in are no longer supported. Certificates created with a profile using the udnPwdDirAuth plug-in are still valid if they have been approved. Red Hat Access plug-in for IdM is discontinued The Red Hat Access plug-in for Identity Management (IdM) was removed in Red Hat Enterprise Linux 7.3. During the update, the redhat-access-plugin-ipa package is automatically uninstalled. Features previously provided by the plug-in, such as Knowledgebase access and support case engagement, are still available through the Red Hat Customer Portal. Red Hat recommends to explore alternatives, such as the redhat-support-tool tool. The Ipsilon identity provider service for federated single sign-on The ipsilon packages were introduced as Technology Preview in Red Hat Enterprise Linux 7.2. Ipsilon links authentication providers and applications or utilities to allow for single sign-on (SSO). Red Hat does not plan to upgrade Ipsilon from Technology Preview to a fully supported feature. The ipsilon packages will be removed from Red Hat Enterprise Linux in a future minor release. Red Hat has released Red Hat Single Sign-On as a web SSO solution based on the Keycloak community project. Red Hat Single Sign-On provides greater capabilities than Ipsilon and is designated as the standard web SSO solution across the Red Hat product portfolio. Several rsyslog options deprecated The rsyslog utility version in Red Hat Enterprise Linux 7.4 has deprecated a large number of options. These options no longer have any effect and cause a warning to be displayed. The functionality previously provided by the options -c , -u , -q , -x , -A , -Q , -4 , and -6 can be achieved using the rsyslog configuration. 
There is no replacement for the functionality previously provided by the options -l and -s Deprecated symbols from the memkind library The following symbols from the memkind library have been deprecated: memkind_finalize() memkind_get_num_kind() memkind_get_kind_by_partition() memkind_get_kind_by_name() memkind_partition_mmap() memkind_get_size() MEMKIND_ERROR_MEMALIGN MEMKIND_ERROR_MALLCTL MEMKIND_ERROR_GETCPU MEMKIND_ERROR_PMTT MEMKIND_ERROR_TIEDISTANCE MEMKIND_ERROR_ALIGNMENT MEMKIND_ERROR_MALLOCX MEMKIND_ERROR_REPNAME MEMKIND_ERROR_PTHREAD MEMKIND_ERROR_BADPOLICY MEMKIND_ERROR_REPPOLICY Options of Sockets API Extensions for SCTP (RFC 6458) deprecated The options SCTP_SNDRCV , SCTP_EXTRCV and SCTP_DEFAULT_SEND_PARAM of Sockets API Extensions for the Stream Control Transmission Protocol have been deprecated per the RFC 6458 specification. New options SCTP_SNDINFO , SCTP_NXTINFO , SCTP_NXTINFO and SCTP_DEFAULT_SNDINFO have been implemented as a replacement for the deprecated options. Managing NetApp ONTAP using SSLv2 and SSLv3 is no longer supported by libstorageMgmt The SSLv2 and SSLv3 connections to the NetApp ONTAP storage array are no longer supported by the libstorageMgmt library. Users can contact NetApp support to enable the Transport Layer Security (TLS) protocol. dconf-dbus-1 has been deprecated and dconf-editor is now delivered separately With this update, the dconf-dbus-1 API has been removed. However, the dconf-dbus-1 library has been backported to preserve binary compatibility. Red Hat recommends using the GDBus library instead of dconf-dbus-1 . The dconf-error.h file has been renamed to dconf-enums.h . In addition, the dconf Editor is now delivered in the separate dconf-editor package. FreeRADIUS no longer accepts Auth-Type := System The FreeRADIUS server no longer accepts the Auth-Type := System option for the rlm_unix authentication module. This option has been replaced by the use of the unix module in the authorize section of the configuration file. The libcxgb3 library and the cxgb3 firmware package have been deprecated The libcxgb3 library provided by the libibverbs package and the cxgb3 firmware package have been deprecated. They continue to be supported in Red Hat Enterprise Linux 7 but will likely not be supported in the major releases of this product. This change corresponds with the deprecation of the cxgb3 , cxgb3i , and iw_cxgb3 drivers listed above. SFN4XXX adapters have been deprecated Starting with Red Hat Enterprise Linux 7.4, SFN4XXX Solarflare network adapters have been deprecated. Previously, Solarflare had a single driver sfc for all adapters. Recently, support of SFN4XXX was split from sfc and moved into a new SFN4XXX-only driver, called sfc-falcon . Both drivers continue to be supported at this time, but sfc-falcon and SFN4XXX support is scheduled for removal in a future major release. Software-initiated-only FCoE storage technologies have been deprecated The software-initiated-only type of the Fibre Channel over Ethernet (FCoE) storage technology has been deprecated due to limited customer adoption. The software-initiated-only storage technology will remain supported for the life of Red Hat Enterprise Linux 7. The deprecation notice indicates the intention to remove software-initiated-based FCoE support in a future major release of Red Hat Enterprise Linux. It is important to note that the hardware support and the associated user-space tools (such as drivers, libfc , or libfcoe ) are unaffected by this deprecation notice. 
For details regarding changes to FCoE support in RHEL 8, see Considerations in adopting RHEL 8.
Target mode in Software FCoE and Fibre Channel has been deprecated
Software FCoE: The NIC Software FCoE target functionality has been deprecated and will remain supported for the life of Red Hat Enterprise Linux 7. The deprecation notice indicates the intention to remove the NIC Software FCoE target functionality support in a future major release of Red Hat Enterprise Linux. For more information regarding changes to FCoE support in RHEL 8, see Considerations in adopting RHEL 8.
Fibre Channel: Target mode in Fibre Channel has been deprecated and will remain supported for the life of Red Hat Enterprise Linux 7. Target mode will be disabled for the tcm_fc and qla2xxx drivers in a future major release of Red Hat Enterprise Linux.
Containers using the libvirt-lxc tooling have been deprecated
The following libvirt-lxc packages are deprecated since Red Hat Enterprise Linux 7.1:
libvirt-daemon-driver-lxc
libvirt-daemon-lxc
libvirt-login-shell
Future development on the Linux containers framework is now based on the docker command-line interface. libvirt-lxc tooling may be removed in a future release of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7) and should not be relied upon for developing custom container management applications. For more information, see the Red Hat KnowledgeBase article.
The Perl and shell scripts for Directory Server have been deprecated
The Perl and shell scripts, which are provided by the 389-ds-base package, have been deprecated. The scripts will be replaced by new utilities in a future major release of Red Hat Enterprise Linux.
libguestfs can no longer inspect ISO installer files
The libguestfs library no longer supports inspecting ISO installer files, for example using the guestfish or virt-inspector utilities. Use the osinfo-detect command for inspecting ISO files instead. This command can be obtained from the libosinfo package.
Creating internal snapshots of virtual machines has been deprecated
Due to their lack of optimization and stability, internal virtual machine snapshots are now deprecated. In their stead, external snapshots are recommended for use. For more information, including instructions for creating external snapshots, see the Virtualization Deployment and Administration Guide.
IVSHMEM has been deprecated
The inter-VM shared memory device (IVSHMEM) feature has been deprecated. Therefore, in a future major release of RHEL, if a virtual machine (VM) is configured to share memory between multiple virtual machines in the form of a PCI device that exposes memory to guests, the VM will fail to boot.
The gnome-shell-browser-plugin subpackage has been deprecated
Since the Firefox Extended Support Release (ESR 60), Firefox no longer supports the Netscape Plugin Application Programming Interface (NPAPI) that was used by the gnome-shell-browser-plugin subpackage. The subpackage, which provided the functionality to install GNOME Shell Extensions, has thus been deprecated. The installation of GNOME Shell Extensions is now handled directly in the gnome-software package.
The VDO read cache has been deprecated
The read cache functionality in Virtual Data Optimizer (VDO) has been deprecated. The read cache is disabled by default on new VDO volumes. In a future major Red Hat Enterprise Linux release, the read cache functionality will be removed, and you will no longer be able to enable it using the --readCache option of the vdo utility.
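Because the read cache is already disabled by default on new volumes, a newly created VDO volume needs no extra options to follow the recommended configuration. The following sketch shows a minimal creation and status check; the volume name and backing device are placeholders for your environment.

```bash
# Create a VDO volume with default settings (the read cache is off by default on new volumes).
vdo create --name=vdo_data --device=/dev/sdb

# Confirm the volume's configuration and runtime state.
vdo status --name=vdo_data
```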
cpuid has been deprecated
The cpuid command has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support using cpuid to dump the information about the CPUID instruction for each CPU. To obtain similar information, use the lscpu command instead.
KDE has been deprecated
KDE Plasma Workspaces (KDE), which has been provided as an alternative to the default GNOME desktop environment, has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support using KDE instead of the default GNOME desktop environment.
Using virt-install with NFS locations is deprecated
With a future major version of Red Hat Enterprise Linux, the virt-install utility will not be able to mount NFS locations. As a consequence, attempting to install a virtual machine using virt-install with an NFS address as a value of the --location option will fail. To work around this change, mount your NFS share prior to using virt-install, or use an HTTP location.
The lwresd daemon has been deprecated
The lwresd daemon, which is a part of the bind package, has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support providing name lookup services to clients that use the BIND 9 lightweight resolver library with lwresd. The recommended replacements are:
The systemd-resolved daemon and nss-resolve API, provided by the systemd package
The unbound library API and daemon, provided by the unbound and unbound-libs packages
The getaddrinfo and related glibc library calls
The /etc/sysconfig/nfs file and legacy NFS service names have been deprecated
A future major Red Hat Enterprise Linux release will move the NFS configuration from the /etc/sysconfig/nfs file to /etc/nfs.conf. Red Hat Enterprise Linux 7 currently supports both of these files. Red Hat recommends that you use the new /etc/nfs.conf file to make NFS configuration in all versions of Red Hat Enterprise Linux compatible with automated configuration systems. Additionally, the following NFS service aliases will be removed and replaced by their upstream names:
nfs.service, replaced by nfs-server.service
nfs-secure.service, replaced by rpc-gssd.service
rpcgssd.service, replaced by rpc-gssd.service
nfs-idmap.service, replaced by nfs-idmapd.service
rpcidmapd.service, replaced by nfs-idmapd.service
nfs-lock.service, replaced by rpc-statd.service
nfslock.service, replaced by rpc-statd.service
The JSON export functionality has been removed from the nft utility
Previously, the nft utility provided an export feature, but the exported content could contain internal ruleset representation details, which were likely to change without further notice. For this reason, the deprecated export functionality has been removed from nft starting with RHEL 7.7. Future versions of nft, such as the one provided by RHEL 8, contain a high-level JSON API. However, this API is not available in RHEL 7.7.
The openvswitch-2.0.0-7 package in the RHEL 7 Optional repository has been deprecated
RHEL 7.5 introduced the openvswitch-2.0.0-7.el7 package in the RHEL 7 Optional repository as a dependency of the NetworkManager-ovs package. This dependency no longer exists and, as a result, openvswitch-2.0.0-7.el7 is now deprecated. Note that Red Hat does not support packages in the RHEL 7 Optional repository and that openvswitch-2.0.0-7.el7 will not be updated in the future. For this reason, do not use this package in production environments.
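As an illustration of the /etc/sysconfig/nfs to /etc/nfs.conf migration described above, the sketch below moves one common setting (the server thread count) to the new file and checks a service under its upstream name. The RPCNFSDCOUNT variable and the [nfsd] threads option are typical equivalents, but verify the mapping for your own settings against the nfs.conf(5) man page; the value 16 is only an example.

```bash
# Old style: thread count set in the legacy sysconfig file.
grep RPCNFSDCOUNT /etc/sysconfig/nfs        # e.g. RPCNFSDCOUNT=16

# New style: the same setting expressed in /etc/nfs.conf.
cat >> /etc/nfs.conf <<'EOF'
[nfsd]
threads=16
EOF

# Prefer the upstream service names so scripts survive the removal of the aliases.
systemctl status rpc-statd.service    # rather than nfs-lock.service or nfslock.service
```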
Deprecated PHP extensions The following PHP extensions have been deprecated: aspell mysql memcache Deprecated Apache HTTP Server modules The following modules of the Apache HTTP Server have been deprecated: mod_file_cache mod_nss mod_perl Apache Tomcat has been deprecated The Apache Tomcat server, a servlet container for the Java Servlet and JavaServer Pages (JSP) technologies, has been deprecated. Red Hat recommends that users requiring a servlet container use the JBoss Web Server. The DES algorithm is deprecated in IdM Due to security reasons, the Data Encryption Standard (DES) algorithm is deprecated in Identity Management (IdM). The MIT Kerberos libraries provided by the krb5-libs package do not support using the Data Encryption Standard (DES) in new deployments. Use DES only for compatibility reasons if your environment does not support any newer algorithm. Red Hat also recommends to avoid using RC4 ciphers over Kerberos. While DES is deprecated, the Server Message Block (SMB) protocol still uses RC4. However, the SMB protocol can also use the secure AES algorithms. For further details, see: MIT Kerberos Documentation - Retiring DES RFC6649: Deprecate DES, RC4-HMAC-EXP, and Other Weak Cryptographic Algorithms in Kerberos real(kind=16) type support has been removed from libquadmath library real(kind=16) type support has been removed from the libquadmath library in the compat-libgfortran-41 package in order to preserve ABI compatibility. Deprecated glibc features The following features of the GNU C library provided by the glibc packages have been deprecated: the librtkaio library Sun RPC and NIS interfaces Deprecated features of the GDB debugger The following features and capabilities of the GDB debugger have been deprecated: debugging Java programs built with the gcj compiler HP-UX XDB compatibility mode and the -xdb option Sun version of the stabs format Development headers and static libraries from valgrind-devel have been deprecated The valgrind-devel sub-package includes development files for developing custom Valgrind tools. These files do not have a guaranteed API, have to be linked statically, are unsupported, and thus have been deprecated. Red Hat recommends to use the other development files and header files for valgrind-aware programs from the valgrind-devel package such as valgrind.h , callgrind.h , drd.h , helgrind.h , and memcheck.h , which are stable and well supported. The nosegneg libraries for 32-bit Xen have been deprecated The glibc i686 packages contain an alternative glibc build, which avoids the use of the thread descriptor segment register with negative offsets ( nosegneg ). This alternative build is only used in the 32-bit version of the Xen Project hypervisor without hardware virtualization support, as an optimization to reduce the cost of full paravirtualization. This alternative build is deprecated. Ada, Go, and Objective C/C++ build capability in GCC has been deprecated Capability for building code in the Ada (GNAT), GCC Go, and Objective C/C++ languages using the GCC compiler has been deprecated. To build Go code, use the Go Toolset instead. Deprecated Kickstart commands and options The following Kickstart commands and options have been deprecated: upgrade btrfs part btrfs and partition btrfs part --fstype btrfs and partition --fstype btrfs logvol --fstype btrfs raid --fstype btrfs unsupported_hardware Where only specific options and values are listed, the base command and its other options are not deprecated. 
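For the deprecated Kickstart storage commands listed above, one straightforward replacement is to lay out the same partitions on a supported file system such as XFS. The snippet below writes a minimal storage section into a Kickstart file; the file name, sizes, volume names, and the choice of XFS are illustrative examples, not a recommended layout.

```bash
# Append a Kickstart storage section that avoids the deprecated Btrfs variants of part/logvol/raid.
cat >> ks.cfg <<'EOF'
part /boot --fstype=xfs --size=1024
part pv.01 --size=20480
volgroup vg0 pv.01
logvol / --vgname=vg0 --name=root --fstype=xfs --size=16384
EOF
```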
The env option in virt-who has become deprecated
With this update, the virt-who utility no longer uses the env option for hypervisor detection. As a consequence, Red Hat discourages the use of env in your virt-who configurations, as the option will not have the intended effect.
AGP graphics cards have been deprecated
Graphics cards using the Accelerated Graphics Port (AGP) bus have been deprecated and are not supported in RHEL 8. AGP graphics cards are rarely used in 64-bit machines and the bus has been replaced by PCI-Express.
The copy_file_range() call has been disabled on local file systems and in NFS
The copy_file_range() system call on local file systems contains multiple issues that are difficult to fix. To avoid file corruption, copy_file_range() support on local file systems has been disabled in RHEL 7.8. If an application uses the call in this case, copy_file_range() now returns an ENOSYS error. For the same reason, the server-side copy feature has been disabled in the NFS server. However, the NFS client still supports copy_file_range() when accessing a server that supports server-side copy.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/deprecated_functionality
Red Hat Ansible Inside Installation Guide
Red Hat Ansible Inside Installation Guide Red Hat Ansible Inside 1.3 Install and configure Red Hat Ansible Inside Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_inside/1.3/html/red_hat_ansible_inside_installation_guide/index
Chapter 3. Deploying applications with OpenShift Client
Chapter 3. Deploying applications with OpenShift Client You can use OpenShift Client (oc) for application deployment. Procedure Create a new OpenShift project: Add the ASP.NET Core application: Track the progress of the build: View the deployed application once the build is finished: The application is now accessible within the project. Optional: Make the project accessible externally: Obtain the shareable URL:
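As a small follow-on to the last two steps, the sketch below shows one way to read the route's hostname and smoke-test it with curl. It assumes the application name example-app used in this procedure and a plain-HTTP route; adjust the URL scheme if you enabled TLS on the route.

```bash
# Print only the hostname of the exposed route.
oc get route example-app -o jsonpath='{.spec.host}{"\n"}'

# Fetch the front page of the deployed application through the route.
curl -s "http://$(oc get route example-app -o jsonpath='{.spec.host}')" | head
```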
[ "oc new-project sample-project", "oc new-app --name= example-app 'dotnet:9.0-ubi8~https://github.com/redhat-developer/s2i-dotnetcore-ex#dotnet-9.0' --build-env DOTNET_STARTUP_PROJECT=app", "oc logs -f bc/ example-app", "oc logs -f dc/ example-app", "oc expose svc/ example-app", "oc get routes" ]
https://docs.redhat.com/en/documentation/net/9.0/html/getting_started_with_.net_on_openshift_container_platform/assembly_dotnet-deploying-apps_getting-started-with-dotnet-on-openshift
8.29. device-mapper-persistent-data
8.29. device-mapper-persistent-data 8.29.1. RHEA-2013:1696 - device-mapper-persistent-data enhancement update Updated device-mapper-persistent-data packages that add various enhancements are now available for Red Hat Enterprise Linux 6. The device-mapper-persistent-data packages provide device-mapper thin provisioning (thinp) tools. Bug Fix BZ# 814790 , BZ# 960284 , BZ# 1006059 , BZ# 1019217 This enhancement update adds important thin provisioning tools (repair, rmap, and metadata_size) as well as caching tools (check, dump, restore, and repair) to the device-mapper-persistent-data packages in Red Hat Enterprise Linux 6. Users of device-mapper-persistent-data are advised to upgrade to these updated packages, which add these enhancements.
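The tools named in this erratum are ordinary command-line utilities that operate on thin-pool and cache metadata. The sketch below shows typical invocations; the device paths are placeholders, the check tools are normally run against inactive metadata (or a metadata snapshot), and option spellings should be verified against the man pages shipped with your version of the package.

```bash
# Validate thin-pool metadata (device path is an example; the pool should not be active).
thin_check /dev/mapper/vg0-pool_tmeta

# Estimate the metadata space needed for a planned thin pool (sizes are examples).
thin_metadata_size --block-size=64k --pool-size=1t --max-thins=1000

# Validate cache metadata written by dm-cache.
cache_check /dev/mapper/vg0-cache_cmeta
```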
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/device-mapper-persistent-data
Release notes for Eclipse Temurin 11.0.19
Release notes for Eclipse Temurin 11.0.19 Red Hat build of OpenJDK 11 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.19/index
Chapter 1. RHACS Cloud Service service description
Chapter 1. RHACS Cloud Service service description 1.1. Introduction to RHACS Red Hat Advanced Cluster Security for Kubernetes (RHACS) is an enterprise-ready, Kubernetes-native container security solution that helps you build, deploy, and run cloud-native applications more securely. Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides Kubernetes-native security as a service. With RHACS Cloud Service, Red Hat maintains, upgrades, and manages your Central services. Central services include the user interface (UI), data storage, RHACS application programming interface (API), and image scanning capabilities. You deploy your Central service through the Red Hat Hybrid Cloud Console. When you create a new ACS instance, Red Hat creates your individual control plane for RHACS. RHACS Cloud Service allows you to secure self-managed clusters that communicate with a Central instance. The clusters you secure, called Secured Clusters, are managed by you, and not by Red Hat. Secured Cluster services include optional vulnerability scanning services, admission control services, and data collection services used for runtime monitoring and compliance. You install Secured Cluster services on any OpenShift or Kubernetes cluster you want to secure. 1.2. Architecture RHACS Cloud Service is hosted on Amazon Web Services (AWS) over two regions, eu-west-1 and us-east-1, and uses the network access points provided by the cloud provider. Each tenant from RHACS Cloud Service uses highly-available egress proxies and is spread over 3 availability zones. For more information about RHACS Cloud Service system architecture and components, see Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture . 1.3. Billing Customers can purchase a RHACS Cloud Service subscription on the Amazon Web Services (AWS) marketplace. The service cost is charged hourly per secured core, or vCPU of a node belonging to a secured cluster. Example 1.1. Subscription cost example If you have established a connection to two secured clusters, each with 5 identical nodes with 8 vCPUs (such as Amazon EC2 m7g.2xlarge), the total number of secured cores is 80 (2 x 5 x 8 = 80). 1.4. Security and compliance All RHACS Cloud Service data in the Central instance is encrypted in transit and at rest. The data is stored in secure storage with full replication and high availability together with regularly-scheduled backups. RHACS Cloud Service is available through cloud data centers that ensure optimal performance and the ability to meet data residency requirements. 1.4.1. Information security guidelines, roles, and responsibilities Red Hat's information security guidelines, aligned with the NIST Cybersecurity Framework , are approved by executive management. Red Hat maintains a dedicated team of globally-distributed certified information security professionals. See the following resources: FIRST: RH-ISIRT team TF-CSIRT: RH-ISIRT team Red Hat has strict internal policies and practices to protect our customers and their businesses. These policies and practices are confidential. In addition, we comply with all applicable laws and regulations, including those related to data privacy. Red Hat's information security roles and responsibilities are not managed by third parties. Red Hat maintains an ISO 27001 certification for our corporate information security management system (ISMS), which governs how all of our people work, corporate endpoint devices, and authentication and authorization practices. 
We have taken a standardized approach to this through the implementation of the Red Hat Enterprise Security Standard (ESS) to all infrastructure, products, services and technology that Red Hat employs. A copy of the ESS is available upon request. RHACS Cloud Service runs on an instance of OpenShift Dedicated hosted on Amazon Web Services (AWS). OpenShift Dedicated is compliant with ISO 27001, ISO 27017, ISO 27018, PCI DSS, SOC 2 Type 2, and HIPAA. Strong processes and security controls are aligned with industry standards to manage information security. RHACS Cloud Service follows the same security principles, guidelines, processes and controls defined for OpenShift Dedicated. These certifications demonstrate how our services platform, associated operations, and management practices align with core security requirements. We meet many of these requirements by following solid Secure Software Development Framework (SSDF) practices as defined by NIST, including build pipeline security. SSDF controls are implemented through our Secure Software Management Lifecycle (SSML) for all products and services. Red Hat's proven and experienced global site reliability engineering (SRE) team is available 24x7 and proactively manages the cluster life cycle, infrastructure configuration, scaling, maintenance, security patching, and incident response as it relates to the hosted components of RHACS Cloud Service. The Red Hat SRE team is responsible for managing HA, uptime, backups, restore, and security for the RHACS Cloud Service control plane. RHACS Cloud Service comes with a 99.95% availability SLA and 24x7 RH SRE support by phone or chat. You are responsible for use of the product, including implementation of policies, vulnerability management, and deployment of secured cluster components within your OpenShift Container Platform environments. The Red Hat SRE team manages the control plane that contains tenant data in line with the compliance frameworks noted previously, including: All Red Hat SREs access the data plane clusters through the backplane, which enables audited access to the cluster. Red Hat SRE only deploys images from the Red Hat registry. All content posted to the Red Hat registry goes through rigorous checks. These images are the same images available to self-managed customers. Each tenant has their own individual mTLS CA, which encrypts data in transit, enabling multi-tenant isolation. Additional isolation is provided by SELinux controls, namespaces, and network policies. Each tenant has their own instance of the RDS database. All Red Hat SREs and developers go through rigorous Secure Development Lifecycle training. For more information, see the following resources: Red Hat Site Reliability Engineering (SRE) services Red Hat OpenShift Dedicated An Overview of Red Hat's Secure Development Lifecycle (SDL) practices 1.4.2. Vulnerability management program Red Hat scans for vulnerabilities in our products during the build process and our dedicated Product Security team tracks and assesses newly-discovered vulnerabilities. Red Hat Information Security regularly scans running environments for vulnerabilities. Qualified critical and important Security Advisories (RHSAs) and urgent and selected high priority Bug Fix Advisories (RHBAs) are released as they become available. All other available fixes and qualified patches are released via periodic updates. All RHACS Cloud Service software impacted by critical or important severity flaws is updated as soon as the fix is available.
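Because RHSAs and the CVEs they address are published through Red Hat's public security data feeds, you can also track newly released fixes programmatically. The following is a minimal sketch only, assuming the publicly documented Red Hat Security Data API; the endpoint path, the severity and after query parameters, and the CVE and advisories response fields are assumptions to verify against that API's documentation, and jq is assumed to be installed.

# List critical-severity CVEs published since a given date and the advisories that address them
# (endpoint and field names are assumptions; adjust to the Security Data API documentation)
curl -s "https://access.redhat.com/hydra/rest/securitydata/cve.json?severity=critical&after=2024-01-01" \
  | jq -r '.[] | [.CVE, (.advisories // [] | join(","))] | @tsv'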
For more information about remediation of critical or high-priority issues, see Understanding Red Hat's Product Security Incident Response Plan . 1.4.3. Security exams and audits RHACS Cloud Service does not currently hold any external security certifications or attestations. The Red Hat Information Risk and Security Team has achieved ISO 27001:2013 certification for our Information Security Management System (ISMS). 1.4.4. Systems interoperability security RHACS Cloud Service supports integrations with registries, CI systems, notification systems, workflow systems like ServiceNow and Jira, and Security information and event management (SIEM) platforms. For more information about supported integrations, see the Integrating documentation. Custom integrations can be implemented using the API or generic webhooks. RHACS Cloud Service uses certificate-based architecture (mTLS) for both authentication and end-to-end encryption of all inflight traffic between the customer's site and Red Hat. It does not require a VPN. IP allowlists are not supported. Data transfer is encrypted using mTLS. File transfer, including Secure FTP, is not supported. 1.4.5. Malicious code prevention RHACS Cloud Service is deployed on Red Hat Enterprise Linux CoreOS (RHCOS). The user space in RHCOS is read-only. In addition, all RHACS Cloud Service instances are monitored in runtime by RHACS. Red Hat uses a commercially-available, enterprise-grade anti-virus solution for Windows and Mac platforms, which is centrally managed and logged. Anti-virus solutions on Linux-based platforms are not part of Red Hat's strategy, as they can introduce additional vulnerabilities. Instead, we harden and rely on the built-in tooling (for example, SELinux) to protect the platform. Red Hat uses SentinelOne and osquery for individual endpoint security, with updates made as they are available from the vendor. All third-party JavaScript libraries are downloaded and included in build images which are scanned for vulnerabilities before being published. 1.4.6. Systems development lifecycle security Red Hat follows secure development lifecycle practices. Red Hat Product Security practices are aligned with the Open Web Application Security Project (OWASP) and ISO12207:2017 wherever it is feasible. Red Hat covers OWASP project recommendations along with other secure software development practices to increase the general security posture of our products. OWASP project analysis is included in Red Hat's automated scanning, security testing, and threat models, as the OWASP project is built based on selected CWE weaknesses. Red Hat monitors weaknesses in our products to address issues before they are exploited and become vulnerabilities. For more information, see the following resources: Red Hat Software Development Life Cycle practices Security by design: Security principles and threat modeling Applications are scanned regularly and the container scan results of the product are available publicly. For example, on the Red Hat Ecosystem Catalog site, you can select a component image such as rhacs-main and click the Security tab to see the health index and the status of security updates. As part of Red Hat's policy, a support policy and maintenance plan is issued for any third-party components we depend on that go to end-of-life. 1.4.7. Software Bill of Materials Red Hat has published software bill of materials (SBOMs) files for core Red Hat offerings. 
An SBOM is a machine-readable, comprehensive inventory (manifest) of software components and dependencies with license and provenance information. SBOM files help establish reviews for procurement and audits of what is in a set of software applications and libraries. Combined with Vulnerability Exploitability eXchange (VEX), SBOMs help an organization address its vulnerability risk assessment process. Together they provide information on where a potential risk might exist (where the vulnerable artifact is included and the correlation between this artifact and components or the product), and its current status to known vulnerabilities or exploits. Red Hat, together with other vendors, is working to define the specific requirements for publishing useful SBOMs that can be correlated with Common Security Advisory Framework (CSAF)-VEX files, and inform consumers and partners about how to use this data. For now, SBOM files published by Red Hat, including SBOMs for RHACS Cloud Service, are considered to be beta versions for customer testing and are available at https://access.redhat.com/security/data/sbom/beta/spdx/ . For more detail on Red Hat's Security data, see The future of Red Hat security data . 1.4.8. Data centers and providers The following third-party providers are used by Red Hat in providing subscription support services: Flexential hosts the Raleigh Data Center, which is the primary data center used to support the Red Hat Customer Portal databases. Digital Realty hosts the Phoenix Data Center, which is the secondary backup data center supporting the Red Hat Customer Portal databases. Salesforce provides the engine behind the customer ticketing system. AWS is used to augment data center infrastructure capacity, some of which is used to support the Red Hat Customer Portal application. Akamai is used to host the Web Application Firewall and provide DDoS protection. Iron Mountain is used to handle the destruction of sensitive material. 1.5. Access control User accounts are managed with role-based access control (RBAC). See Managing RBAC in Red Hat Advanced Cluster Security for Kubernetes for more information. Red Hat site reliability engineers (SREs) have access to Central instances. Access is controlled with OpenShift RBAC. Credentials are instantly revoked upon termination. 1.5.1. Authentication provider When you create a Central instance using Red Hat Hybrid Cloud Console , authentication for the cluster administrator is configured as part of the process. Customers must manage all access to the Central instance as part of their integrated solution. For more information about the available authentication methods, see Understanding authentication providers . The default identity provider in RHACS Cloud Service is Red Hat Single Sign-On (SSO). Authorization rules are set up to provide administrator access to the user who created the RHACS Cloud Service and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be enabled temporarily by SREs. For more information about authentication using Red Hat SSO, see Default access to the ACS Console . 1.5.2. Password management Red Hat's password policy requires the use of a complex password. 
Passwords must contain at least 14 characters and at least three of the following character classes: Base 10 digits (0 to 9) Upper case characters (A to Z) Lower case characters (a to z) Punctuation, spaces, and other characters Most systems require two-factor authentication. Red Hat follows best password practices according to NIST guidelines . 1.5.3. Remote access Access for remote support and troubleshooting is strictly controlled through implementation of the following guidelines: Strong two-factor authentication for VPN access A segregated network with management and administrative networks requiring additional authentication through a bastion host All access and management is performed over encrypted sessions Our customer support team offers Bomgar as a remote access solution for troubleshooting. Bomgar sessions are optional, must be initiated by the customer, and can be monitored and controlled. To prevent information leakage, logs are shipped to SRE through our security information and event management (SIEM) application, Splunk. 1.6. Compliance RHACS Cloud Service is certified across key global standards, ensuring top-tier security, compliance, and data protection for your business. The following table outlines certifications for RHACS Cloud Service. Table 1.1. Security and control certifications for RHACS Cloud Service Compliance RHACS Cloud Service on Kubernetes ISO/IEC 27001:2022 Yes ISO/IEC 27017:2015 Yes ISO/IEC 27018:2019 Yes PCI DSS 4.0 Yes SOC 2 Type 2 Yes SOC 2 Type 3 Yes 1.7. Data protection Red Hat provides data protection by using various methods, such as logging, access control, and encryption. 1.7.1. Data storage media protection To protect our data and client data from risk of theft or destruction, Red Hat employs the following methods: access logging automated account termination procedures application of the principle of least privilege Data is encrypted in transit and at rest using strong data encryption following NIST guidelines and Federal Information Processing Standards (FIPS) where possible and practical. This includes backup systems. RHACS Cloud Service encrypts data at rest within the Amazon Relational Database Service (RDS) database by using AWS-managed Key Management Services (KMS) keys. All data between the application and the database, together with data exchange between the systems, are encrypted in transit. 1.7.1.1. Data retention and destruction Records, including those containing personal data, are retained as required by law. Records not required by law or a reasonable business need are securely removed. Secure data destruction requirements are included in operating procedures, using military grade tools. In addition, staff have access to secure document destruction facilities. 1.7.1.2. Encryption Red Hat uses AWS managed keys which are rotated by AWS each year. For information on the use of keys, see AWS KMS key management . For more information about RDS, see Amazon RDS Security . 1.7.1.3. Multi-tenancy RHACS Cloud Service isolates tenants by namespace on OpenShift Container Platform. SELinux provides additional isolation. Each customer has a unique RDS instance. 1.7.1.4. Data ownership Customer data is stored in an encrypted RDS database not available on the public internet. Only Site Reliability Engineers (SREs) have access to it, and the access is audited. Every RHACS Cloud Service system comes integrated with Red Hat external SSO. 
Authorization rules are set up to provide administrator access to the user who created the Cloud Service and to users who are marked as organization administrators in Red Hat SSO. The admin login is disabled for RHACS Cloud Service by default and can only be temporarily enabled by SREs. Red Hat collects information about the number of secured clusters connected to RHACS Cloud Service and the usage of features. Metadata generated by the application and stored in the RDS database is owned by the customer. Red Hat only accesses data for troubleshooting purposes and with customer permission. Red Hat access requires audited privilege escalation. Upon contract termination, Red Hat can perform a secure disk wipe upon request. However, we are unable to physically destroy media (cloud providers such as AWS do not provide this option). To secure data in case of a breach, you can perform the following actions: Disconnect all secured clusters from RHACS Cloud Service immediately using the cluster management page. Immediately disable access to the RHACS Cloud Service by using the Access Control page. Immediately delete your RHACS instance, which also deletes the RDS instance. Any AWS RDS (data store) specific access modifications would be implemented by the RHACS Cloud Service SRE engineers. 1.8. Metrics and Logging 1.8.1. Service metrics Service metrics are internal only. Red Hat provides and maintains the service at the agreed upon level. Service metrics are accessible only to authorized Red Hat personnel. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . 1.8.2. Customer metrics Core usage capacity metrics are available either through Subscription Watch or the Subscriptions page . 1.8.3. Service logging System logs for all components of the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are internal and available only to Red Hat personnel. Red Hat does not provide user access to component logs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . 1.9. Updates and Upgrades Red Hat makes a commercially reasonable effort to notify customers prior to updates and upgrades that impact service. The decision regarding the need for a Service update to the Central instance and its timing is the sole responsibility of Red Hat. Customers have no control over when a Central service update occurs. For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . Upgrades to the version of Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) are considered part of the service update. Upgrades are transparent to the customer and no connection to any update site is required. Customers are responsible for timely RHACS Secured Cluster services upgrades that are required to maintain compatibility with RHACS Cloud Service. Red Hat recommends enabling automatic upgrades for Secured Clusters that are connected to RHACS Cloud Service. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information about upgrade versions. 1.10. Availability Availability and disaster avoidance are extremely important aspects of any security platform. Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) provides numerous protections against failures at multiple levels. To account for possible cloud provider failures, Red Hat established multiple availability zones. 1.10.1. Backup and disaster recovery The RHACS Cloud Service Disaster Recovery strategy includes backups of the database and any customizations.
This also applies to customer data stored in the Central database. Recovery time varies based on the number of appliances and database sizes; however, because the appliances can be clustered and distributed, the RTO can be reduced upfront with proper architecture planning. All snapshots are created using the appropriate cloud provider snapshot APIs, encrypted and then uploaded to secure object storage, which for Amazon Web Services (AWS) is an S3 bucket. Red Hat does not commit to a Recovery Point Objective (RPO) or Recovery Time Objective (RTO). For more information, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . Site Reliability Engineering performs backups only as a precautionary measure. They are stored in the same region as the cluster. Customers should deploy multiple availability zone Secured Clusters with workloads that follow Kubernetes best practices to ensure high availability within a region. Disaster recovery plans are exercised annually at a minimum. A Business Continuity Management standard and guideline is in place so that the BC lifecycle is consistently followed throughout the organization. This policy includes a requirement for testing at least annually, or with major change of functional plans. Review sessions are required to be conducted after any plan exercise or activation, and plan updates are made as needed. Red Hat has generator backup systems. Our IT production systems are hosted in a Tier 3 data center facility that has recurring testing to ensure redundancy is operational. They are audited yearly to validate compliance. 1.11. Getting support for RHACS Cloud Service If you experience difficulty with a procedure described in this documentation, or with RHACS Cloud Service in general, visit the Red Hat Customer Portal . From the Customer Portal, you can perform the following actions: Search or browse through the Red Hat Knowledgebase of articles and solutions relating to Red Hat products. Submit a support case to Red Hat Support. Access other product documentation. To identify issues with your cluster, you can use Insights in RHACS Cloud Service. Insights provides details about issues and, if available, information on how to solve a problem. 1.12. Service removal You can delete RHACS Cloud Service using the default delete operations from the Red Hat Hybrid Cloud Console . Deleting the RHACS Cloud Service Central instance automatically removes all RHACS components. Deleting is not reversible. 1.13. Pricing For information about subscription fees, see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES . 1.14. Service Level Agreement For more information about the Service Level Agreements (SLAs) offered for Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), see PRODUCT APPENDIX 4 RED HAT ONLINE SERVICES .
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/rhacs_cloud_service/rhacs-cloud-service-service-description
33.9. Additional Resources
33.9. Additional Resources To learn more about printing on Red Hat Enterprise Linux, refer to the following resources. 33.9.1. Installed Documentation man lpr - The manual page for the lpr command that allows you to print files from the command line. man lprm - The manual page for the command line utility to remove print jobs from the print queue. man mpage - The manual page for the command line utility to print multiple pages on one sheet of paper. man cupsd - The manual page for the CUPS printer daemon. man cupsd.conf - The manual page for the CUPS printer daemon configuration file. man classes.conf - The manual page for the class configuration file for CUPS.
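The commands that these manual pages document cover most day-to-day printing tasks from the command line. The following is a minimal usage sketch; the queue name officejet, the file names, and the job number 123 are placeholder values for illustration.

# Print a file to the default queue, then to a specific queue
lpr report.txt
lpr -P officejet report.txt
# Remove a queued job by its job number (job numbers are listed by lpq)
lprm 123
# Lay out four pages per sheet and send the result to the printer
mpage -4 notes.txt | lpr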
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-printing-additional-resources
Chapter 2. Node [v1]
Chapter 2. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NodeSpec describes the attributes that a node is created with. status object NodeStatus is information about the current status of a node. 2.1.1. .spec Description NodeSpec describes the attributes that a node is created with. Type object Property Type Description configSource object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 externalID string Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966 podCIDR string PodCIDR represents the pod IP range assigned to the node. podCIDRs array (string) podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6. providerID string ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID> taints array If specified, the node's taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. unschedulable boolean Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration 2.1.2. .spec.configSource Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.3. .spec.configSource.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. 
resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.4. .spec.taints Description If specified, the node's taints. Type array 2.1.5. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required key effect Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Required. The taint key to be applied to a node. timeAdded Time TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 2.1.6. .status Description NodeStatus is information about the current status of a node. Type object Property Type Description addresses array List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). addresses[] object NodeAddress contains information for the node's address. allocatable object (Quantity) Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity. capacity object (Quantity) Capacity represents the total resources of a node. More info: https://kubernetes.io/docs/reference/node/node-status/#capacity conditions array Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition conditions[] object NodeCondition contains condition information for a node. config object NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. daemonEndpoints object NodeDaemonEndpoints lists ports opened by daemons running on the Node. features object NodeFeatures describes the set of features implemented by the CRI implementation. The features contained in the NodeFeatures should depend only on the cri implementation independent of runtime handlers. images array List of container images on this node images[] object Describe a container image nodeInfo object NodeSystemInfo is a set of ids/uuids to uniquely identify the node. 
phase string NodePhase is the recently observed lifecycle phase of the node. More info: https://kubernetes.io/docs/concepts/nodes/node/#phase The field is never populated, and now is deprecated. Possible enum values: - "Pending" means the node has been created/added by the system, but not configured. - "Running" means the node has been configured and has Kubernetes components running. - "Terminated" means the node has been removed from the cluster. runtimeHandlers array The available runtime handlers. runtimeHandlers[] object NodeRuntimeHandler is a set of runtime handler information. volumesAttached array List of volumes that are attached to the node. volumesAttached[] object AttachedVolume describes a volume attached to a node volumesInUse array (string) List of attachable volumes in use (mounted) by the node. 2.1.7. .status.addresses Description List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). Type array 2.1.8. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required type address Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 2.1.9. .status.conditions Description Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition Type array 2.1.10. .status.conditions[] Description NodeCondition contains condition information for a node. Type object Required type status Property Type Description lastHeartbeatTime Time Last time we got an update on a given condition. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of node condition. 2.1.11. .status.config Description NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. Type object Property Type Description active object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 assigned object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 error string Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. 
Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions. lastKnownGood object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 2.1.12. .status.config.active Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.13. .status.config.active.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.14. .status.config.assigned Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.15. .status.config.assigned.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. 
This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.16. .status.config.lastKnownGood Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.17. .status.config.lastKnownGood.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.18. .status.daemonEndpoints Description NodeDaemonEndpoints lists ports opened by daemons running on the Node. Type object Property Type Description kubeletEndpoint object DaemonEndpoint contains information about a single Daemon endpoint. 2.1.19. .status.daemonEndpoints.kubeletEndpoint Description DaemonEndpoint contains information about a single Daemon endpoint. Type object Required Port Property Type Description Port integer Port number of the given endpoint. 2.1.20. .status.features Description NodeFeatures describes the set of features implemented by the CRI implementation. The features contained in the NodeFeatures should depend only on the cri implementation independent of runtime handlers. Type object Property Type Description supplementalGroupsPolicy boolean SupplementalGroupsPolicy is set to true if the runtime supports SupplementalGroupsPolicy and ContainerUser. 2.1.21. .status.images Description List of container images on this node Type array 2.1.22. .status.images[] Description Describe a container image Type object Property Type Description names array (string) Names by which this image is known. e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"] sizeBytes integer The size of the image in bytes. 2.1.23. .status.nodeInfo Description NodeSystemInfo is a set of ids/uuids to uniquely identify the node. Type object Required machineID systemUUID bootID kernelVersion osImage containerRuntimeVersion kubeletVersion kubeProxyVersion operatingSystem architecture Property Type Description architecture string The Architecture reported by the node bootID string Boot ID reported by the node. containerRuntimeVersion string ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2). 
kernelVersion string Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64). kubeProxyVersion string Deprecated: KubeProxy Version reported by the node. kubeletVersion string Kubelet Version reported by the node. machineID string MachineID reported by the node. For unique machine identification in the cluster this field is preferred. Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html operatingSystem string The Operating System reported by the node osImage string OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)). systemUUID string SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid 2.1.24. .status.runtimeHandlers Description The available runtime handlers. Type array 2.1.25. .status.runtimeHandlers[] Description NodeRuntimeHandler is a set of runtime handler information. Type object Property Type Description features object NodeRuntimeHandlerFeatures is a set of features implemented by the runtime handler. name string Runtime handler name. Empty for the default runtime handler. 2.1.26. .status.runtimeHandlers[].features Description NodeRuntimeHandlerFeatures is a set of features implemented by the runtime handler. Type object Property Type Description recursiveReadOnlyMounts boolean RecursiveReadOnlyMounts is set to true if the runtime handler supports RecursiveReadOnlyMounts. userNamespaces boolean UserNamespaces is set to true if the runtime handler supports UserNamespaces, including for volumes. 2.1.27. .status.volumesAttached Description List of volumes that are attached to the node. Type array 2.1.28. .status.volumesAttached[] Description AttachedVolume describes a volume attached to a node Type object Required name devicePath Property Type Description devicePath string DevicePath represents the device path where the volume should be available name string Name of the attached volume 2.2. API endpoints The following API endpoints are available: /api/v1/nodes DELETE : delete collection of Node GET : list or watch objects of kind Node POST : create a Node /api/v1/watch/nodes GET : watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /api/v1/watch/nodes/{name} GET : watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 2.2.1. /api/v1/nodes HTTP method DELETE Description delete collection of Node Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Node Table 2.3. 
HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Node schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 2.2.2. /api/v1/watch/nodes HTTP method GET Description watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /api/v1/nodes/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the Node HTTP method DELETE Description delete a Node Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body Node schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 2.2.4. /api/v1/watch/nodes/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /api/v1/nodes/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description read status of the specified Node Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body Node schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty
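The endpoints above can be exercised with standard cluster tooling rather than hand-built HTTP requests. The following is a minimal sketch that patches spec.unschedulable and reads status fields through the oc client; <node_name> is a placeholder, and jq is assumed to be installed for the raw API call.

# Mark a node unschedulable (equivalent to a PATCH against /api/v1/nodes/{name})
oc patch node <node_name> --type merge -p '{"spec":{"unschedulable":true}}'
# Read a single status field with a JSONPath expression
oc get node <node_name> -o jsonpath='{.status.nodeInfo.kubeletVersion}{"\n"}'
# Call the read endpoint directly and summarize the node conditions
oc get --raw /api/v1/nodes/<node_name> | jq '.status.conditions[] | {type, status, reason}'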
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/node_apis/node-v1
Chapter 5. Installing a cluster on IBM Cloud with customizations
Chapter 5. Installing a cluster on IBM Cloud with customizations In OpenShift Container Platform version 4.17, you can install a customized cluster on infrastructure that the installation program provisions on IBM Cloud(R). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 5.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 5.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 5.1. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 5.6.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 5.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 gx3d-160x1792x8h100 mx2-8x64 mx2d-4x32 mx3d-4x40 ox2-8x64 ux2d-2x56 vx2d-4x56 5.6.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13 1 8 10 11 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Enables or disables FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . 
Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 5.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.9. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 5.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 5.12. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
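The chapter above introduces each installation command separately. The following shell sketch chains those same documented commands into one pass for reference; it is a minimal sketch under stated assumptions, not a supported script. The directory name ocp-ibmcloud, the cluster name example-cluster, the resource group example-rg, and the credreqs directory are placeholders, and the interactive prompts of create install-config still need to be answered by hand.

#!/usr/bin/env bash
# Minimal sketch of the customized IBM Cloud installation flow described in this chapter.
# Assumes openshift-install, oc, and ccoctl are on PATH and that registry credentials
# for the release image are already configured (or add the -a <pull_secret> option).
set -euo pipefail

export IC_API_KEY="<api_key>"
INSTALL_DIR=ocp-ibmcloud

# 1. Generate the installation configuration (interactive prompts), set
#    credentialsMode: Manual plus any other customizations, then keep a copy,
#    because later steps consume the original file.
./openshift-install create install-config --dir "${INSTALL_DIR}"
"${EDITOR:-vi}" "${INSTALL_DIR}/install-config.yaml"
mkdir -p config-backup
cp "${INSTALL_DIR}/install-config.yaml" config-backup/install-config.yaml

# 2. Render manifests. This consumes install-config.yaml, so the extract step below
#    reads the backup copy instead.
./openshift-install create manifests --dir "${INSTALL_DIR}"

# 3. Extract the CredentialsRequest objects for this release image.
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
oc adm release extract \
    --from="${RELEASE_IMAGE}" \
    --credentials-requests \
    --included \
    --install-config=config-backup/install-config.yaml \
    --to=credreqs

# 4. Create the IBM Cloud service IDs, policies, API keys, and secrets with ccoctl.
ccoctl ibmcloud create-service-id \
    --credentials-requests-dir=credreqs \
    --name=example-cluster \
    --output-dir="${INSTALL_DIR}" \
    --resource-group-name=example-rg

# 5. Deploy the cluster and log in with the generated kubeconfig.
./openshift-install create cluster --dir "${INSTALL_DIR}" --log-level=info
export KUBECONFIG="${INSTALL_DIR}/auth/kubeconfig"
oc whoami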
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IC_API_KEY=<api_key>", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_ibm_cloud/installing-ibm-cloud-customizations
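Section 5.8 above notes that a cluster restarted after the initial 24 hour certificate rotation may be waiting on manually approved node-bootstrapper certificate signing requests. The commands below are a small hedged sketch of that post-restart check using standard oc subcommands only; the installation directory name is a placeholder, and the blanket approval of pending CSRs is appropriate for a lab rather than a production cluster.

# Hedged sketch: basic health check and CSR approval after a cluster restart.
INSTALL_DIR=ocp-ibmcloud                       # placeholder: your installation directory
export KUBECONFIG="${INSTALL_DIR}/auth/kubeconfig"

oc get nodes
oc get clusteroperators

# List CSRs and approve the ones still pending; review them first outside a lab.
oc get csr
oc get csr | awk '/Pending/ {print $1}' | xargs -r oc adm certificate approve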
Appendix B. Business Central system properties
Appendix B. Business Central system properties The Business Central system properties listed in this section are passed to standalone*.xml files. Git directory Use the following properties to set the location and name for the Business Central Git directory: org.uberfire.nio.git.dir : Location of the Business Central Git directory. org.uberfire.nio.git.dirname : Name of the Business Central Git directory. Default value: .niogit . org.uberfire.nio.git.ketch : Enables or disables Git ketch. org.uberfire.nio.git.hooks : Location of the Git hooks directory. Git over HTTP Use the following properties to configure access to the Git repository over HTTP: org.uberfire.nio.git.proxy.ssh.over.http : Specifies whether SSH should use an HTTP proxy. Default value: false . http.proxyHost : Defines the host name of the HTTP proxy. Default value: null . http.proxyPort : Defines the host port (integer value) of the HTTP proxy. Default value: null . http.proxyUser : Defines the user name of the HTTP proxy. http.proxyPassword : Defines the user password of the HTTP proxy. org.uberfire.nio.git.http.enabled : Enables or disables the HTTP daemon. Default value: true . org.uberfire.nio.git.http.host : If the HTTP daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.http.hostname : If the HTTP daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.http.port : If the HTTP daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTP. The HTTP still relies on the servlet container. Default value: 8080 . Git over HTTPS Use the following properties to configure access to the Git repository over HTTPS: org.uberfire.nio.git.proxy.ssh.over.https : Specifies whether SSH uses an HTTPS proxy. Default value: false . https.proxyHost : Defines the host name of the HTTPS proxy. Default value: null . https.proxyPort : Defines the host port (integer value) of the HTTPS proxy. Default value: null . https.proxyUser : Defines the user name of the HTTPS proxy. https.proxyPassword : Defines the user password of the HTTPS proxy. user.dir : Location of the user directory. org.uberfire.nio.git.https.enabled : Enables or disables the HTTPS daemon. Default value: false org.uberfire.nio.git.https.host : If the HTTPS daemon is enabled, it uses this property as the host identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.https.hostname : If the HTTPS daemon is enabled, it uses this property as the host name identifier. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: localhost . org.uberfire.nio.git.https.port : If the HTTPS daemon is enabled, it uses this property as the port number. This is an informative property that is used to display how to access the Git repository over HTTPS. The HTTPS still relies on the servlet container. Default value: 8080 . 
JGit org.uberfire.nio.jgit.cache.instances : Defines the JGit cache size. org.uberfire.nio.jgit.cache.overflow.cleanup.size : Defines the JGit cache overflow cleanup size. org.uberfire.nio.jgit.remove.eldest.iterations : Enables or disables whether to remove eldest JGit iterations. org.uberfire.nio.jgit.cache.evict.threshold.duration : Defines the JGit evict threshold duration. org.uberfire.nio.jgit.cache.evict.threshold.time.unit : Defines the JGit evict threshold time unit. Git daemon Use the following properties to enable and configure the Git daemon: org.uberfire.nio.git.daemon.enabled : Enables or disables the Git daemon. Default value: true . org.uberfire.nio.git.daemon.host : If the Git daemon is enabled, it uses this property as the local host identifier. Default value: localhost . org.uberfire.nio.git.daemon.hostname : If the Git daemon is enabled, it uses this property as the local host name identifier. Default value: localhost org.uberfire.nio.git.daemon.port : If the Git daemon is enabled, it uses this property as the port number. Default value: 9418 . org.uberfire.nio.git.http.sslVerify : Enables or disables SSL certificate checking for Git repositories. Default value: true . Note If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information. Git SSH Use the following properties to enable and configure the Git SSH daemon: org.uberfire.nio.git.ssh.enabled : Enables or disables the SSH daemon. Default value: true . org.uberfire.nio.git.ssh.host : If the SSH daemon enabled, it uses this property as the local host identifier. Default value: localhost . org.uberfire.nio.git.ssh.hostname : If the SSH daemon is enabled, it uses this property as local host name identifier. Default value: localhost . org.uberfire.nio.git.ssh.port : If the SSH daemon is enabled, it uses this property as the port number. Default value: 8001 . Note If the default or assigned port is already in use, a new port is automatically selected. Ensure that the ports are available and check the log for more information. org.uberfire.nio.git.ssh.cert.dir : Location of the .security directory where local certificates are stored. Default value: Working directory. org.uberfire.nio.git.ssh.idle.timeout : Sets the SSH idle timeout. org.uberfire.nio.git.ssh.passphrase : Pass phrase used to access the public key store of your operating system when cloning git repositories with SCP style URLs. Example: [email protected]:user/repository.git . org.uberfire.nio.git.ssh.algorithm : Algorithm used by SSH. Default value: RSA . org.uberfire.nio.git.gc.limit : Sets the GC limit. org.uberfire.nio.git.ssh.ciphers : A comma-separated string of ciphers. The available ciphers are aes128-ctr , aes192-ctr , aes256-ctr , arcfour128 , arcfour256 , aes192-cbc , aes256-cbc . If the property is not used, all available ciphers are loaded. org.uberfire.nio.git.ssh.macs : A comma-separated string of message authentication codes (MACs). The available MACs are hmac-md5 , hmac-md5-96 , hmac-sha1 , hmac-sha1-96 , hmac-sha2-256 , hmac-sha2-512 . If the property is not used, all available MACs are loaded. Note If you plan to use RSA or any algorithm other than DSA, make sure you set up your application server to use the Bouncy Castle JCE library. 
KIE Server nodes and Process Automation Manager controller Use the following properties to configure the connections with the KIE Server nodes from the Process Automation Manager controller: org.kie.server.controller : The URL is used to connect to the Process Automation Manager controller. For example, ws://localhost:8080/business-central/websocket/controller . org.kie.server.user : User name used to connect to the KIE Server nodes from the Process Automation Manager controller. This property is only required when using this Business Central installation as a Process Automation Manager controller. org.kie.server.pwd : Password used to connect to the KIE Server nodes from the Process Automation Manager controller. This property is only required when using this Business Central installation as a Process Automation Manager controller. Maven and miscellaneous Use the following properties to configure Maven and other miscellaneous functions: kie.maven.offline.force : Forces Maven to behave as if offline. If true, disables online dependency resolution. Default value: false . Note Use this property for Business Central only. If you share a runtime environment with any other component, isolate the configuration and apply it only to Business Central. org.uberfire.gzip.enable : Enables or disables Gzip compression on the GzipFilter compression filter. Default value: true . org.kie.workbench.profile : Selects the Business Central profile. Possible values are FULL or PLANNER_AND_RULES . A prefix FULL_ sets the profile and hides the profile preferences from the administrator preferences. Default value: FULL org.appformer.m2repo.url : Business Central uses the default location of the Maven repository when looking for dependencies. It directs to the Maven repository inside Business Central, for example, http://localhost:8080/business-central/maven2 . Set this property before starting Business Central. Default value: File path to the inner m2 repository. appformer.ssh.keystore : Defines the custom SSH keystore to be used with Business Central by specifying a class name. If the property is not available, the default SSH keystore is used. appformer.ssh.keys.storage.folder : When using the default SSH keystore, this property defines the storage folder for the user's SSH public keys. If the property is not available, the keys are stored in the Business Central .security folder. appformer.experimental.features : Enables the experimental features framework. Default value: false . org.kie.demo : Enables an external clone of a demo application from GitHub. org.uberfire.metadata.index.dir : Place where the Lucene .index directory is stored. Default value: Working directory. org.uberfire.ldap.regex.role_mapper : Regex pattern used to map LDAP principal names to the application role name. Note that the variable role must be a part of the pattern as the application role name substitutes the variable role when matching a principle value and role name. org.uberfire.sys.repo.monitor.disabled : Disables the configuration monitor. Do not disable unless you are sure. Default value: false . org.uberfire.secure.key : Password used by password encryption. Default value: org.uberfire.admin . org.uberfire.secure.alg : Crypto algorithm used by password encryption. Default value: PBEWithMD5AndDES . org.uberfire.domain : Security-domain name used by uberfire. Default value: ApplicationRealm . org.guvnor.m2repo.dir : Place where the Maven repository folder is stored. Default value: <working-directory>/repositories/kie . 
org.guvnor.project.gav.check.disabled : Disables group ID, artifact ID, and version (GAV) checks. Default value: false . org.kie.build.disable-project-explorer : Disables automatic build of a selected project in Project Explorer. Default value: false . org.kie.builder.cache.size : Defines the cache size of the project builder. Default value: 20 . org.kie.library.assets_per_page : You can customize the number of assets per page in the project screen. Default value: 15 . org.kie.verification.disable-dtable-realtime-verification : Disables the real-time validation and verification of decision tables. Default value: false . Process Automation Manager controller Use the following properties to configure how to connect to the Process Automation Manager controller: org.kie.workbench.controller : The URL used to connect to the Process Automation Manager controller, for example, ws://localhost:8080/kie-server-controller/websocket/controller . org.kie.workbench.controller.user : The Process Automation Manager controller user. Default value: kieserver . org.kie.workbench.controller.pwd : The Process Automation Manager controller password. Default value: kieserver1! . org.kie.workbench.controller.token : The token string used to connect to the Process Automation Manager controller. Java Cryptography Extension KeyStore (JCEKS) Use the following properties to configure JCEKS: kie.keystore.keyStoreURL : The URL used to load a Java Cryptography Extension KeyStore (JCEKS). For example, file:///home/kie/keystores/keystore.jceks. kie.keystore.keyStorePwd : The password used for the JCEKS. kie.keystore.key.ctrl.alias : The alias of the key for the default REST Process Automation Manager controller. kie.keystore.key.ctrl.pwd : The password of the alias for the default REST Process Automation Manager controller. Rendering Use the following properties to switch between Business Central and KIE Server rendered forms: org.jbpm.wb.forms.renderer.ext : Switches the form rendering between Business Central and KIE Server. By default, the form rendering is performed by Business Central. Default value: false . org.jbpm.wb.forms.renderer.name : Enables you to switch between Business Central and KIE Server rendered forms. Default value: workbench .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/business-central-system-properties-ref_install-on-jws
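Because the entries in this appendix are ordinary Java system properties, a handful of them can be applied on the standalone.sh command line instead of by editing the standalone*.xml files. The sketch below is only an illustration: EAP_HOME and the property values are placeholders, and standalone-full.xml is an assumption about the profile your Business Central installation uses.

# Hedged sketch: overriding a few Business Central properties at server start.
EAP_HOME=/opt/jboss-eap                        # placeholder: EAP installation hosting Business Central

"${EAP_HOME}/bin/standalone.sh" \
    -c standalone-full.xml \
    -Dorg.uberfire.nio.git.dir=/data/business-central \
    -Dorg.uberfire.nio.git.ssh.port=8003 \
    -Dorg.guvnor.m2repo.dir=/data/business-central/repositories/kie \
    -Dorg.kie.verification.disable-dtable-realtime-verification=true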
Chapter 16. The Rapid Stock Market Quickstart
Chapter 16. The Rapid Stock Market Quickstart The Rapid Stock Market quickstart demonstrates how JBoss Data Grid's compatibility mode works with a Hot Rod client (to store data) and an HTTP client using REST (to retrieve data). This quickstart is only available in JBoss Data Grid's Remote Client-Server mode and does not use any containers. The Rapid Stock Market quickstart includes a server-side and a client-side application. 16.1. Build and Run the Rapid Stock Market Quickstart The Rapid Stock Market quickstart requires the following configuration for the server and client sides of the application. Procedure 16.1. Rapid Stock Market Quickstart Server-side Configuration Navigate to the Root Directory Open a command line and navigate to the root directory of this quickstart. Build a server module for the JBoss Data Grid Server by packaging a class that is common for the client and server in a jar file: Place the new jar file in a directory structure that is similar to the server module. Install the server module into the server. Copy the prepared module to the server: Add the new module as a dependency of the org.infinispan.commons module by adding the following into the modules/system/layers/base/org/infinispan/commons/main/module.xml file: Build the application: Configure JBoss Data Grid to use the appropriate configuration file. Copy the example configuration file for compatibility mode to a location where the JBoss Data Grid Server can locate and use it: Remove the security-domain and auth-method attributes from the rest-connector element to disable REST security. Start the JBoss Data Grid Server in compatibility mode: Procedure 16.2. Rapid Stock Market Quickstart Client-side Configuration In a new command line terminal window, start the client-side application: Use the instructions in the help menu for the client application.
[ "mvn clean package -Pprepare-server-module", "cp -r target/modules USD{JDG_SERVER_HOME}/", "<module name=\"org.infinispan.quickstart.compatibility.common\"/>", "mvn clean package", "cp USD{JDG_SERVER_HOME}/docs/examples/configs/standalone-compatibility-mode.xml USD{JDG_SERVER_HOME}/standalone/configuration", "USD{JDG_SERVER_HOME}/bin/standalone.sh -c standalone-compatibility-mode.xml", "mvn exec:java -Pclient" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-the_rapid_stock_market_quickstart
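Procedure 16.1 above mixes Maven builds, file copies, and one manual XML edit. The script below is a hedged sketch that chains the documented commands in order; JDG_SERVER_HOME and the quickstart checkout path are placeholders, and the sed line is only an illustration of the manual module.xml edit (it assumes the file already contains a dependencies block), so check the result before starting the server.

#!/usr/bin/env bash
# Hedged sketch of the server-side steps from Procedure 16.1.
set -euo pipefail

JDG_SERVER_HOME=/opt/jboss-datagrid-server                  # placeholder
QUICKSTART_DIR="$HOME/jdg-quickstarts/rapid-stock-market"   # placeholder
cd "${QUICKSTART_DIR}"

# Build the shared server module and copy it into the server.
mvn clean package -Pprepare-server-module
cp -r target/modules "${JDG_SERVER_HOME}/"

# Register the module as a dependency of org.infinispan.commons. The documented
# approach is a manual edit of module.xml; this sed expression is an illustrative
# shortcut that appends the dependency before the closing tag.
MODULE_XML="${JDG_SERVER_HOME}/modules/system/layers/base/org/infinispan/commons/main/module.xml"
sed -i 's|</dependencies>|    <module name="org.infinispan.quickstart.compatibility.common"/>\n</dependencies>|' "${MODULE_XML}"

# Build the application and put the compatibility-mode configuration in place.
mvn clean package
cp "${JDG_SERVER_HOME}/docs/examples/configs/standalone-compatibility-mode.xml" \
   "${JDG_SERVER_HOME}/standalone/configuration"

# REST security (the security-domain and auth-method attributes on rest-connector)
# still has to be removed by hand before this final step.
"${JDG_SERVER_HOME}/bin/standalone.sh" -c standalone-compatibility-mode.xml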
Chapter 15. Tips for undercloud and overcloud services
Chapter 15. Tips for undercloud and overcloud services This section provides advice on tuning and managing specific OpenStack services on the undercloud. 15.1. Tuning deployment performance Red Hat OpenStack Platform (RHOSP) director uses OpenStack Orchestration (heat) to conduct the main deployment and provisioning functions. You can use the --heat-config-vars-file option on the openstack overcloud deploy command to tune the following parameters: debug Run in debug mode. log_file Specify log file path. max_json_body_size Maximum raw byte size of JSON request body. num_engine_workers Set the number of workers. Heat uses a series of workers to execute deployment tasks. To calculate the default number of workers, the director heat configuration halves the total CPU thread count of the undercloud. In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value. For example, if your undercloud has a CPU with 16 threads, heat spawns 8 workers by default. The director configuration also uses a minimum and maximum cap by default. The minimum is 4 and the max is 24. rpc_response_timeout Seconds to wait for a response from a call. Example Create a file such as heat-overrides.yaml . Enter parameters as needed: Include the file in an openstack overcloud deploy command: 15.2. Changing the SSL/TLS cipher rules for HAProxy If you enabled SSL/TLS in the undercloud (see Section 4.2, "Undercloud configuration parameters" ), you might want to harden the SSL/TLS ciphers and rules that are used with the HAProxy configuration. This hardening helps to avoid SSL/TLS vulnerabilities, such as the POODLE vulnerability . Set the following hieradata using the hieradata_override undercloud configuration option: tripleo::haproxy::ssl_cipher_suite The cipher suite to use in HAProxy. tripleo::haproxy::ssl_options The SSL/TLS rules to use in HAProxy. For example, you might want to use the following cipher and rules: Cipher: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS Rules: no-sslv3 no-tls-tickets Create a hieradata override file ( haproxy-hiera-overrides.yaml ) with the following content: Note The cipher collection is one continuous line. Set the hieradata_override parameter in the undercloud.conf file to use the hieradata override file you created before you ran openstack undercloud install :
[ "rpc_response_timeout: 1200 num_engine_workers: 24 debug: true", "openstack overcloud deploy --heat-config-vars-file heat-overrides.yaml --answers-file templates/answers.yaml", "tripleo::haproxy::ssl_cipher_suite: ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS tripleo::haproxy::ssl_options: no-sslv3 no-tls-tickets", "[DEFAULT] hieradata_override = haproxy-hiera-overrides.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_tips-for-undercloud-and-overcloud-services
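Both tuning paths in this chapter come down to writing a small override file and pointing an existing command or setting at it. The sketch below shows the heat tuning path end to end with illustrative numbers rather than recommendations; the answers file path is the one used in the example above, and the HAProxy step simply restates where the hieradata file is wired in.

# Hedged sketch: apply the heat tuning parameters from section 15.1.
cat > heat-overrides.yaml <<'EOF'
num_engine_workers: 12
rpc_response_timeout: 1200
debug: false
EOF

openstack overcloud deploy \
    --heat-config-vars-file heat-overrides.yaml \
    --answers-file templates/answers.yaml

# For the HAProxy hardening in section 15.2, the cipher hieradata file is referenced
# from undercloud.conf (hieradata_override = haproxy-hiera-overrides.yaml) before
# rerunning the undercloud installation:
openstack undercloud install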
Chapter 7. Reverting your JBoss EAP server updates using the Management CLI
Chapter 7. Reverting your JBoss EAP server updates using the Management CLI You can revert updates applied to your JBoss EAP server using the Management CLI. To revert the changes applied to your JBoss EAP server, use the installer history command to view the versions of JBoss EAP installations on your server. Once you have confirmed the correct version of JBoss EAP you want to revert to, prepare a candidate server using the installer revert command. After preparing the candidate server, restart your JBoss EAP server to complete the revert process. For more information see how to view the history of JBoss EAP installations on your server . 7.1. Reverting your JBoss EAP server updates in a stand-alone server or a managed domain You can revert your JBoss EAP server installation in a stand-alone server or a managed domain using the JBoss EAP Management CLI. The following steps outline the phases of the revert process. Prepare revert: In this phase, the JBoss EAP installation is prepared for the revert on the target machine. The candidate server is prepared in the server temporal directory, which is the directory represented by the file system path jboss.domain.temp.dir in a managed domain or jboss.server.temp.dir in stand-alone server mode. Once this phase is completed, no further server preparations can be performed on the same candidate server. However, you can clean the installation manager cache, which allows you to prepare a different installation if needed. For more information, see Cleaning the installer . Apply revert: Once you have completed the revert process, restart your JBoss EAP server to apply the candidate server prepared to revert your installation. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Revert your JBoss EAP server: Note Use the installer history command to view the installation state you want to revert your installation to. Revert your JBoss EAP server updates in a stand-alone server: [standalone@localhost:9990 /] installer revert --revision=abcd1234 Revert your JBoss EAP server updates in a managed domain: [domain@localhost:9990 /] installer revert --host=target-host --revision=abcd1234 Note For more information about additional command options use the help command. Restart your JBoss EAP server to complete the revert process: Note You must ensure that no other processes are launched from the JBOSS_EAP/bin folder, such as JBOSS_EAP/bin/jconsole.sh and JBOSS_EAP/bin/appclient.sh , when restarting the server with the --perform-installation option. This precaution prevents conflicts in writing files that might be in use by other processes during the server's revert. Restart your JBoss EAP server in a stand-alone server: [standalone@localhost:9990 /] shutdown --perform-installation Restart your JBoss EAP server in a managed domain: [domain@localhost:9990 /] shutdown --host=target-host --perform-installation Additional resources JBoss EAP Management CLI overview . 7.2. Reverting your JBoss EAP server installation offline using the Management CLI The following example describes how to use the Management CLI to revert your JBoss EAP installation offline in a stand-alone server and a managed domain. This is useful in scenarios where the target server installation lacks access to external Maven repositories. You can use the Management CLI to revert your JBoss EAP server installation. To do so, you need to specify the location of the Maven repository that contains the required artifacts to revert your server. 
You can download the Maven repository for your update from the Red Hat Customer Portal . Prerequisite You have the Maven archive repository containing the required artifacts locally on your machine. Procedure Launch the Management CLI: EAP_HOME/bin/jboss-cli.sh Revert JBoss EAP installation offline: Revert JBoss EAP installation offline in a stand-alone server: [standalone@localhost:9990 /] installer revert --revision=abcd1234 --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository> Revert JBoss EAP offline in a managed domain: [domain@localhost:9990 /] installer revert --host=target-host --revision=abcd1234 --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository> Note For more information about additional command options use the help command. Restart your JBoss EAP server to complete the revert process: Note You must ensure that no other processes are launched from the JBOSS_EAP/bin folder, such as JBOSS_EAP/bin/jconsole.sh and JBOSS_EAP/bin/appclient.sh , when restarting the server with the --perform-installation option. This precaution prevents conflicts in writing files that might be in use by other processes during the server's revert. Restart your JBoss EAP server in a stand-alone server: [standalone@localhost:9990 /] shutdown --perform-installation Restart your JBoss EAP server in a managed domain: [domain@localhost:9990 /] shutdown --host=target-host --perform-installation
[ "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer revert --revision=abcd1234", "[domain@localhost:9990 /] installer revert --host=target-host --revision=abcd1234", "[standalone@localhost:9990 /] shutdown --perform-installation", "[domain@localhost:9990 /] shutdown --host=target-host --perform-installation", "EAP_HOME/bin/jboss-cli.sh", "[standalone@localhost:9990 /] installer revert --revision=abcd1234 --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository>", "[domain@localhost:9990 /] installer revert --host=target-host --revision=abcd1234 --maven-repo-files=<An absolute or a relative path pointing to the local archive file that contains a maven repository>", "[standalone@localhost:9990 /] shutdown --perform-installation", "[domain@localhost:9990 /] shutdown --host=target-host --perform-installation" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/assembly_reverting-a-jboss-eap-server-update_default
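The revert procedures above are shown interactively. The same three Management CLI operations can also be issued non-interactively through the --command option of jboss-cli.sh, which is convenient when the revert is scripted; the sketch below targets a standalone server, and EAP_HOME, the revision hash, and the Maven repository archive path are placeholders.

# Hedged sketch: scripted offline revert of a standalone JBoss EAP server.
EAP_HOME=/opt/jboss-eap-8.0                            # placeholder
REVISION=abcd1234                                      # placeholder: value from "installer history"
REPO_ARCHIVE=/tmp/jboss-eap-maven-repository.zip       # placeholder: downloaded repository archive

# Inspect the installation history, then prepare the candidate server.
"${EAP_HOME}/bin/jboss-cli.sh" --connect --command="installer history"
"${EAP_HOME}/bin/jboss-cli.sh" --connect \
    --command="installer revert --revision=${REVISION} --maven-repo-files=${REPO_ARCHIVE}"

# Apply the prepared candidate. Ensure nothing else (jconsole.sh, appclient.sh) is
# running out of ${EAP_HOME}/bin before this step.
"${EAP_HOME}/bin/jboss-cli.sh" --connect --command="shutdown --perform-installation"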
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint
Chapter 13. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create the RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_hybrid_and_multicloud_resources/Accessing-the-RADOS-Object-Gateway-S3-endpoint_rhodf
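Because the rook-ceph-rgw-ocs-storagecluster-cephobjectstore route already exists, reaching the S3 endpoint is mostly a matter of reading the route host and supplying credentials. The sketch below assumes the route lives in the openshift-storage namespace and that an access key and secret key are already available, for example from the secret created for an ObjectBucketClaim; the aws CLI is just one possible S3 client.

# Hedged sketch: point a generic S3 client at the default RGW route.
RGW_HOST=$(oc -n openshift-storage get route rook-ceph-rgw-ocs-storagecluster-cephobjectstore \
    -o jsonpath='{.spec.host}')

export AWS_ACCESS_KEY_ID="<access_key>"          # placeholder: your RGW or ObjectBucketClaim credentials
export AWS_SECRET_ACCESS_KEY="<secret_key>"      # placeholder

aws s3 ls --endpoint-url "https://${RGW_HOST}"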
Chapter 11. Configuring logging
Chapter 11. Configuring logging AMQ Interconnect contains internal logging modules that provide important information about each router. For each module, you can configure the logging level, the format of the log file, and the location to which the logs should be written. 11.1. Logging modules AMQ Interconnect logs are broken into different categories called logging modules . Each module provides important information about a particular aspect of AMQ Interconnect. DEFAULT The default module. This module applies defaults to all of the other logging modules. ROUTER This module provides information and statistics about the local router. This includes how the router connects to other routers in the network, and information about the remote destinations that are directly reachable from the router (link routes, waypoints, autolinks, and so on). ROUTER_HELLO This module provides information about the Hello protocol used by interior routers to exchange Hello messages, which include information about the router's ID and a list of its reachable neighbors (the other routers with which this router has bidirectional connectivity). ROUTER_LS This module provides information about link-state data between routers, including Router Advertisement (RA), Link State Request (LSR), and Link State Update (LSU) messages. Periodically, each router sends an LSR to the other routers and receives an LSU with the requested information. Exchanging the above information, each router can compute the hops in the topology, and the related costs. ROUTER_MA This module provides information about the exchange of mobile address information between routers, including Mobile Address Request (MAR) and Mobile Address Update (MAU) messages exchanged between routers. You can use this log to monitor the state of mobile addresses attached to each router. MESSAGE This module provides information about AMQP messages sent and received by the router, including information about the address, body, and link. You can use this log to find high-level information about messages on a particular router. SERVER This module provides information about how the router is listening for and connecting to other containers in the network (such as clients, routers, and brokers). This information includes the state of AMQP messages sent and received by the broker (open, begin, attach, transfer, flow, and so on), and the related content of those messages. AGENT This module provides information about configuration changes made to the router from either editing the router's configuration file or using qdmanage . CONTAINER This module provides information about the nodes related to the router. This includes only the AMQP relay node. ERROR This module provides detailed information about error conditions encountered during execution. POLICY This module provides information about policies that have been configured for the router. Additional resources For examples of these logging modules, see Section 16.2, "Troubleshooting using logs" . 11.2. Configuring default logging You can specify the types of events that should be logged, the format of the log entries, and where those entries should be sent. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add a log section to set the default logging properties: This example configures all logging modules to log events starting at the info level: module Specify DEFAULT . enable The logging level. 
You can specify any of the following levels (from lowest to highest): trace - provides the most information, but significantly affects system performance debug - useful for debugging, but affects system performance info - provides general information without affecting system performance notice - provides general information, but is less verbose than info warning - provides information about issues you should be aware of, but which are not errors error - error conditions that you should address critical - critical system issues that you must address immediately To specify multiple levels, use a comma-separated list. You can also use + to specify a level and all levels above it. For example, trace,debug,warning+ enables trace, debug, warning, error, and critical levels. For default logging, you should typically use the info+ or notice+ level. These levels will provide general information, warnings, and errors for all modules without affecting the performance of AMQ Interconnect. includeTimestamp Set this to yes to include the timestamp in all logs. For information about additional log attributes, see log in the qdrouterd.conf man page. If you want to configure non-default logging for any of the logging modules, add an additional log section for each module that should not follow the default. This example configures the ROUTER logging module to log debug events: Additional resources For more information about viewing and using logs, see Chapter 16, Troubleshooting AMQ Interconnect .
[ "log { module: DEFAULT enable: info+ includeTimestamp: yes }", "log { module: ROUTER enable: debug includeTimestamp: yes }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/configuring-logging-router-rhel
Chapter 15. Annotating encrypted RBD storage classes
Chapter 15. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy to any encrypted RBD storage classes that were created before updating to OpenShift Data Foundation 4.14. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated.
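A minimal sketch of how this annotation might be applied from the command line, assuming an existing encrypted RBD storage class named ocs-storagecluster-ceph-rbd-encrypted (the name is illustrative only and must be replaced with the name of your own storage class):

oc annotate storageclass ocs-storagecluster-ceph-rbd-encrypted cdi.kubevirt.io/clone-strategy=copy

You can then confirm that the annotation appears under metadata.annotations in the output of oc get storageclass ocs-storagecluster-ceph-rbd-encrypted -o yaml.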
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/managing_and_allocating_storage_resources/annotating-the-existing-encrypted-rbd-storageclasses_rhodf
4.4. Identifying Contended User-Space Locks
4.4. Identifying Contended User-Space Locks This section describes how to identify contended user-space locks throughout the system within a specific time period. The ability to identify contended user-space locks can help you investigate hangs that you suspect may be caused by futex contentions. Simply put, a futex contention occurs when multiple processes are trying to access the same region of memory. In some cases, this can result in a deadlock between the processes in contention, thereby appearing as an application hang. To identify these contentions, futexes.stp probes the futex system call. futexes.stp needs to be manually stopped; upon exit, it prints the following information: Name and ID of the process responsible for a contention The region of memory it contested How many times the region of memory was contended Average time of contention throughout the probe Example 4.17, "futexes.stp Sample Output" contains an excerpt from the output of futexes.stp upon exiting the script (after approximately 20 seconds). Example 4.17. futexes.stp Sample Output
[ "#! /usr/bin/env stap This script tries to identify contended user-space locks by hooking into the futex system call. global thread_thislock # short global thread_blocktime # global FUTEX_WAIT = 0 /*, FUTEX_WAKE = 1 */ global lock_waits # long-lived stats on (tid,lock) blockage elapsed time global process_names # long-lived pid-to-execname mapping probe syscall.futex { if (op != FUTEX_WAIT) next # don't care about WAKE event originator t = tid () process_names[pid()] = execname() thread_thislock[t] = USDuaddr thread_blocktime[t] = gettimeofday_us() } probe syscall.futex.return { t = tid() ts = thread_blocktime[t] if (ts) { elapsed = gettimeofday_us() - ts lock_waits[pid(), thread_thislock[t]] <<< elapsed delete thread_blocktime[t] delete thread_thislock[t] } } probe end { foreach ([pid+, lock] in lock_waits) printf (\"%s[%d] lock %p contended %d times, %d avg us\\n\", process_names[pid], pid, lock, @count(lock_waits[pid,lock]), @avg(lock_waits[pid,lock])) }", "[...] automount[2825] lock 0x00bc7784 contended 18 times, 999931 avg us synergyc[3686] lock 0x0861e96c contended 192 times, 101991 avg us synergyc[3758] lock 0x08d98744 contended 192 times, 101990 avg us synergyc[3938] lock 0x0982a8b4 contended 192 times, 101997 avg us [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/futexcontentionsect
Chapter 3. MTR 1.2.6
Chapter 3. MTR 1.2.6 3.1. Known issues The following known issues are in the MTR 1.2.6 release: Unable to migrate an application to MTR due to a SEVERE [org.jboss.windup.web.services.messaging.PackageDiscoveryMDB] error When uploading files for analysis, the server log would return a SEVERE [org.jboss.windup.web.services.messaging.PackageDiscoveryMDB] error. This error is caused by a java.lang.NullPointerException . (WINDUP-4189) For a complete list of all known issues, see the list of MTR 1.2.6 known issues in Jira. 3.2. Resolved issues MTR 1.2.6 has the following resolved issues: CVE-2024-1132: org.keycloak-keycloak-parent : keycloak path traversal in redirection validation A flaw was discovered in Keycloak, where it does not properly validate URLs included in a redirect. This flaw could allow an attacker to construct a malicious request to bypass validation, access other URLs and sensitive information within the domain, or conduct further attacks. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2024-1132) . CVE-2023-45857: Axios 1.5 exposes confidential data stored in cookies A flaw was discovered in Axios 1.5.1 that accidentally revealed the confidential XSRF-TOKEN , stored in cookies, by including it in the HTTP header X-XSRF-TOKEN for every request made to any host, thereby allowing attackers to view sensitive information. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2023-45857) . CVE-2024-28849: follow-redirects package clears authorization headers A flaw was discovered in the follow-redirects package, which clears authorization headers, but it fails to clear the proxy-authentication headers. This flaw could lead to credential leakage, which could have a high impact on data confidentiality. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2024-28849) . CVE-2024-29131: Out-of-bounds Write vulnerability in Apache Commons Configuration A vulnerability was found in Apache Commons-Configuration2, where a Stack Overflow Error can occur when adding a property in the AbstractListDelimiterHandler.flattenIterator() method. This issue could allow an attacker to corrupt memory or execute a denial of service (DoS) attack by crafting a malicious property that triggers an out-of-bounds write issue when processed by the vulnerable method. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2024-29131) . CVE-2024-29133: Out-of-bounds Write vulnerability in Apache Commons Configuration A vulnerability was found in Apache Commons-Configuration2, where a Stack Overflow Error occurs when calling the ListDelimiterHandler.flatten(Object, int) method with a cyclical object tree. This issue could allow an attacker to trigger an out-of-bounds write that could lead to memory corruption or cause a denial of service (DoS) attack. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2024-29133) . CVE-2024-29180: webpack-dev-middleware lack of URL validation may lead to a file leak A flaw was found in the webpack-dev-middleware package, where it failed to validate the supplied URL address sufficiently before returning local files. This flaw allows an attacker to craft URLs to return arbitrary local files from the developer's machine.
The lack of normalization before calling the middleware also allows the attacker to perform path traversal attacks on the target environment. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2024-29180) CVE-2023-4639: org.keycloak-keycloak-parent undertow Cookie Smuggling and Spoofing A flaw was found in Undertow, which incorrectly parses cookies with certain value-delimiting characters in incoming requests. This vulnerability has the potential to enable an attacker to construct a cookie value to intercept HttpOnly cookie values or spoof arbitrary additional cookie values, resulting in unauthorized data access or modification. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2023-4639) . CVE-2023-36479: com.google.guava-guava-parent improper addition of quotation marks to user inputs in Jetty CGI Servlet A flaw was found in Jetty's org.eclipse.jetty.servlets.CGI Servlet, which permits incorrect command execution in specific circumstances, such as requests with certain characters in requested filenames. This issue could allow an attacker to run permitted commands besides the ones requested. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2023-36479) . CVE-2023-26364: css-tools improper input validation causes denial of service A flaw was found in @adobe/css-tools , which could potentially lead to a minor denial of service (DoS) when parsing CSS. User interaction and privileges are not required to jeopardize an environment. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2023-26364) . CVE-2023-48631: css-tools : regular expression denial of service A flaw was found in @adobe/css-tools , which could lead to a regular expression denial of service (ReDoS) when attempting to parse CSS. Users are recommended to upgrade to MTR 1.2.6, which resolves this issue. For more details, see (CVE-2023-48631) . For a complete list of all issues resolved in this release, see the list of MTR 1.2.6 resolved issues in Jira.
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/release_notes/mtr_1_2_6
Hosted control planes
Hosted control planes OpenShift Container Platform 4.17 Using hosted control planes with OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hosted_control_planes/index