title | content | commands | url |
---|---|---|---|
Chapter 40. ResourceTemplate schema reference | Chapter 40. ResourceTemplate schema reference Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , KafkaNodePoolTemplate , KafkaUserTemplate , ZookeeperClusterTemplate Property: metadata. Description: Metadata applied to the resource. Type: MetadataTemplate | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ResourceTemplate-reference |
Preface | Preface In this guide, the terms upgrade, update, and migrate have the following meanings: Upgrading The process of advancing your Satellite Server and Capsule Server installations from a y-stream release to the next y-stream release, for example Satellite 6.10 to Satellite 6.11. For more information, see Chapter 1, Upgrading Overview . Updating The process of advancing your Satellite Server and Capsule Server installations from a z-stream release to the next z-stream release, for example Satellite 6.11.0 to Satellite 6.11.1. Migrating The process of moving an existing Satellite installation to a new instance. For more information, see Chapter 5, Migrating Satellite to a New Red Hat Enterprise Linux System . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/upgrading_and_updating_red_hat_satellite/pr01 |
25.4. Using Pre-Existing Keys and Certificates | 25.4. Using Pre-Existing Keys and Certificates If you already have an existing key and certificate (for example, if you are installing the secure server to replace another company's secure server product), you can probably use your existing key and certificate with the secure server. The following two situations provide instances where you are not able to use your existing key and certificate: If you are changing your IP address or domain name - Certificates are issued for a particular IP address and domain name pair. You must get a new certificate if you are changing your IP address or domain name. If you have a certificate from VeriSign and you are changing your server software - VeriSign is a widely used CA. If you already have a VeriSign certificate for another purpose, you may have been considering using your existing VeriSign certificate with your new secure server. However, you are not allowed to, because VeriSign issues certificates for one specific server software and IP address/domain name combination. If you change either of those parameters (for example, if you previously used a different secure server product), the VeriSign certificate you obtained to use with the previous configuration will not work with the new configuration. You must obtain a new certificate. If you have an existing key and certificate that you can use, you do not have to generate a new key and obtain a new certificate. However, you may need to move and rename the files which contain your key and certificate. Move your existing key file to: Move your existing certificate file to: After you have moved your key and certificate, skip to Section 25.9, "Testing The Certificate" . If you are upgrading from the Red Hat Secure Web Server, your old key ( httpsd.key ) and certificate ( httpsd.crt ) are located in /etc/httpd/conf/ . Move and rename your key and certificate so that the secure server can use them. Use the following two commands to move and rename your key and certificate files: Then, start your secure server with the command: You are prompted to enter your passphrase. After you type it in and press Enter , the server starts (see the consolidated sketch after this entry). | [
"/etc/httpd/conf/ssl.key/server.key",
"/etc/httpd/conf/ssl.crt/server.crt",
"mv /etc/httpd/conf/httpsd.key /etc/httpd/conf/ssl.key/server.key mv /etc/httpd/conf/httpsd.crt /etc/httpd/conf/ssl.crt/server.crt",
"/sbin/service httpd start"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/apache_http_secure_server_configuration-using_pre_existing_keys_and_certificates |
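The move and start steps described in the entry above can be run as one short sequence. This is a minimal sketch, assuming you are upgrading from the Red Hat Secure Web Server, the old files live in /etc/httpd/conf/, and the ssl.key and ssl.crt directories already exist; run it as root.

```bash
# Move and rename the old key and certificate so the secure server can use them.
mv /etc/httpd/conf/httpsd.key /etc/httpd/conf/ssl.key/server.key
mv /etc/httpd/conf/httpsd.crt /etc/httpd/conf/ssl.crt/server.crt

# Start the secure server; it prompts for the key's passphrase.
/sbin/service httpd start
```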
21.6. virt-rescue: The Rescue Shell | 21.6. virt-rescue: The Rescue Shell This section provides information about the rescue shell. 21.6.1. Introduction This section describes virt-rescue , which can be considered analogous to a rescue CD for virtual machines. It boots a guest virtual machine into a rescue shell so that maintenance can be performed to correct errors and the guest virtual machine can be repaired. There is some overlap between virt-rescue and guestfish. It is important to distinguish their differing uses. virt-rescue is for making interactive, ad-hoc changes using ordinary Linux file system tools. It is particularly suited to rescuing a guest virtual machine that has failed . virt-rescue cannot be scripted. In contrast, guestfish is particularly useful for making scripted, structured changes through a formal set of commands (the libguestfs API), although it can also be used interactively. 21.6.2. Running virt-rescue Before you use virt-rescue on a guest virtual machine, make sure the guest virtual machine is not running, otherwise disk corruption will occur. When you are sure the guest virtual machine is not live, enter: (where GuestName is the guest name as known to libvirt), or: (where the path can be any file, any logical volume, LUN, or so on) containing a guest virtual machine disk. You will first see output scroll past, as virt-rescue boots the rescue VM. In the end you will see: The shell prompt here is an ordinary bash shell, and a reduced set of ordinary Red Hat Enterprise Linux commands is available. For example, you can enter: The command will list disk partitions. To mount a file system, it is suggested that you mount it under /sysroot , which is an empty directory in the rescue machine for the user to mount anything you like. Note that the files under / are files from the rescue VM itself: When you are finished rescuing the guest virtual machine, exit the shell by entering exit or Ctrl+d . virt-rescue has many command-line options. The options most often used are: --ro : Operate in read-only mode on the guest virtual machine. No changes will be saved. You can use this to experiment with the guest virtual machine. As soon as you exit from the shell, all of your changes are discarded. --network : Enable network access from the rescue shell. Use this for example if you need to download RPM or other files into the guest virtual machine. | [
"virt-rescue -d GuestName",
"virt-rescue -a /path/to/disk/image",
"Welcome to virt-rescue, the libguestfs rescue shell. Note: The contents of / are the rescue appliance. You have to mount the guest virtual machine's partitions under /sysroot before you can examine them. bash: cannot set terminal process group (-1): Inappropriate ioctl for device bash: no job control in this shell ><rescue>",
"><rescue> fdisk -l /dev/vda",
"><rescue> mount /dev/vda1 /sysroot/ EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null) ><rescue> ls -l /sysroot/grub/ total 324 -rw-r--r--. 1 root root 63 Sep 16 18:14 device.map -rw-r--r--. 1 root root 13200 Sep 16 18:14 e2fs_stage1_5 -rw-r--r--. 1 root root 12512 Sep 16 18:14 fat_stage1_5 -rw-r--r--. 1 root root 11744 Sep 16 18:14 ffs_stage1_5 -rw-------. 1 root root 1503 Oct 15 11:19 grub.conf [...]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-virt_rescue_the_rescue_shell |
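A typical rescue session, combining the invocation and /sysroot mounting steps from the entry above, might look like the following sketch. GuestName and /dev/vda1 are placeholders taken from the examples in the entry; this is illustrative only.

```bash
# Make sure the guest virtual machine is shut down first, otherwise disk
# corruption can occur. Boot the rescue shell read-only to experiment safely
# (drop --ro to make persistent changes, add --network to download files):
virt-rescue --ro -d GuestName

# Inside the rescue shell:
#   ><rescue> fdisk -l /dev/vda            # list the guest's disk partitions
#   ><rescue> mount /dev/vda1 /sysroot/    # mount a guest file system under /sysroot
#   ><rescue> ls -l /sysroot/grub/         # inspect files on the guest
#   ><rescue> exit                         # leave the shell (or Ctrl+d)
```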
6.14. Migrating Virtual Machines Between Hosts | 6.14. Migrating Virtual Machines Between Hosts Live migration provides the ability to move a running virtual machine between physical hosts with no interruption to service. The virtual machine remains powered on and user applications continue to run while the virtual machine is relocated to a new physical host. In the background, the virtual machine's RAM is copied from the source host to the destination host. Storage and network connectivity are not altered. Note A virtual machine that is using a vGPU cannot be migrated to a different host. 6.14.1. Live Migration Prerequisites Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV You can use live migration to seamlessly move virtual machines to support a number of common maintenance tasks. Your Red Hat Virtualization environment must be correctly configured to support live migration well in advance of using it. At a minimum, the following prerequisites must be met to enable successful live migration of virtual machines: The source and destination hosts are members of the same cluster, ensuring CPU compatibility between them. Note Live migrating virtual machines between different clusters is generally not recommended. The source and destination hosts' status is Up . The source and destination hosts have access to the same virtual networks and VLANs. The source and destination hosts have access to the data storage domain on which the virtual machine resides. The destination host has sufficient CPU capacity to support the virtual machine's requirements. The destination host has sufficient unused RAM to support the virtual machine's requirements. The migrating virtual machine does not have the cache!=none custom property set. Live migration is performed using the management network and involves transferring large amounts of data between hosts. Concurrent migrations have the potential to saturate the management network. For best performance, Red Hat recommends creating separate logical networks for management, storage, display, and virtual machine data to minimize the risk of network saturation. Configuring Virtual Machines with SR-IOV-Enabled vNICs to Reduce Network Outage during Migration Virtual machines with vNICs that are directly connected to a virtual function (VF) of an SR-IOV-enabled host NIC can be further configured to reduce network outage during live migration: Ensure that the destination host has an available VF. Set the Passthrough and Migratable options in the passthrough vNIC's profile. See Enabling Passthrough on a vNIC Profile in the Administration Guide . Enable hotplugging for the virtual machine's network interface. Ensure that the virtual machine has a backup VirtIO vNIC, in addition to the passthrough vNIC, to maintain the virtual machine's network connection during migration. Set the VirtIO vNIC's No Network Filter option before configuring the bond. See Explanation of Settings in the VM Interface Profile Window in the Administration Guide . Add both vNICs as slaves under an active-backup bond on the virtual machine, with the passthrough vNIC as the primary interface. The bond and vNIC profiles can have one of the following configurations: Recommended : The bond is not configured with fail_over_mac=active and the VF vNIC is the primary slave. 
Disable the VirtIO vNIC profile's MAC-spoofing filter to ensure that traffic passing through the VirtIO vNIC is not dropped because it uses the VF vNIC MAC address. See Applying Network Filtering in the RHEL 7 Virtualization Deployment and Administration Guide . The bond is configured with fail_over_mac=active . This failover policy ensures that the MAC address of the bond is always the MAC address of the active slave. During failover, the virtual machine's MAC address changes, with a slight disruption in traffic. 6.14.2. Optimizing Live Migration Live virtual machine migration can be a resource-intensive operation. The following two options can be set globally for every virtual machine in the environment, at the cluster level, or at the individual virtual machine level to optimize live migration. The Auto Converge migrations option allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. The Enable migration compression option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Both options are disabled globally by default. Configuring Auto-convergence and Migration Compression for Virtual Machine Migration Configure the optimization settings at the global level: Enable auto-convergence at the global level: Enable migration compression at the global level: Restart the ovirt-engine service to apply the changes: Configure the optimization settings at the cluster level: Click Compute Clusters and select a cluster. Click Edit . Click the Migration Policy tab. From the Auto Converge migrations list, select Inherit from global setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from global setting , Compress , or Don't Compress . Click OK . Configure the optimization settings at the virtual machine level: Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. From the Auto Converge migrations list, select Inherit from cluster setting , Auto Converge , or Don't Auto Converge . From the Enable migration compression list, select Inherit from cluster setting , Compress , or Don't Compress . Click OK . 6.14.3. Guest Agent Hooks Hooks are scripts that trigger activity within a virtual machine when key events occur: Before migration After migration Before hibernation After hibernation The hooks configuration base directory is /etc/ovirt-guest-agent/hooks.d on Linux systems and C:\Program Files\Redhat\RHEV\Drivers\Agent on Windows systems. Each event has a corresponding subdirectory: before_migration and after_migration , before_hibernation and after_hibernation . All files or symbolic links in that directory will be executed. The executing user on Linux systems is ovirtagent . If the script needs root permissions, the elevation must be executed by the creator of the hook script. 
The executing user on Windows systems is the System Service user. 6.14.4. Automatic Virtual Machine Migration Red Hat Virtualization Manager automatically initiates live migration of all virtual machines running on a host when the host is moved into maintenance mode. The destination host for each virtual machine is assessed as the virtual machine is migrated, in order to spread the load across the cluster. From version 4.3, all virtual machines defined with manual or automatic migration modes are migrated when the host is moved into maintenance mode. However, for high performance and/or pinned virtual machines, a Maintenance Host window is displayed, asking you to confirm the action because the performance on the target host may be less than the performance on the current host. The Manager automatically initiates live migration of virtual machines in order to maintain load-balancing or power-saving levels in line with scheduling policy. Specify the scheduling policy that best suits the needs of your environment. You can also disable automatic, or even manual, live migration of specific virtual machines where required. If your virtual machines are configured for high performance, and/or if they have been pinned (by setting Passthrough Host CPU, CPU Pinning, or NUMA Pinning), the migration mode is set to Allow manual migration only . However, this can be changed to Allow Manual and Automatic mode if required. Special care should be taken when changing the default migration setting so that it does not result in a virtual machine migrating to a host that does not support high performance or pinning. 6.14.5. Preventing Automatic Migration of a Virtual Machine Red Hat Virtualization Manager allows you to disable automatic migration of virtual machines. You can also disable manual migration of virtual machines by setting the virtual machine to run only on a specific host. The ability to disable automatic migration and require a virtual machine to run on a particular host is useful when using application high availability products, such as Red Hat High Availability or Cluster Suite. Preventing Automatic Migration of Virtual Machines Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Host tab. In the Start Running On section, select Any Host in Cluster or Specific Host(s) , which enables you to select multiple hosts. Warning Explicitly assigning a virtual machine to a specific host and disabling migration are mutually exclusive with Red Hat Virtualization high availability. Important If the virtual machine has host devices directly attached to it, and a different host is specified, the host devices from the host will be automatically removed from the virtual machine. Select Allow manual migration only or Do not allow migration from the Migration Options drop-down list. Optionally, select the Use custom migration downtime check box and specify a value in milliseconds. Click OK . 6.14.6. Manually Migrating Virtual Machines A running virtual machine can be live migrated to any host within its designated host cluster. Live migration of virtual machines does not cause any service interruption. Migrating virtual machines to a different host is especially useful if the load on a particular host is too high. For live migration prerequisites, see Section 6.14.1, "Live Migration Prerequisites" . For high performance virtual machines and/or virtual machines defined with Pass-Through Host CPU , CPU Pinning , or NUMA Pinning , the default migration mode is Manual . 
Select Select Host Automatically so that the virtual machine migrates to the host that offers the best performance. Note When you place a host into maintenance mode, the virtual machines running on that host are automatically migrated to other hosts in the same cluster. You do not need to manually migrate these virtual machines. Note Live migrating virtual machines between different clusters is generally not recommended. The currently only supported use case is documented at https://access.redhat.com/articles/1390733 . Manually Migrating Virtual Machines Click Compute Virtual Machines and select a running virtual machine. Click Migrate . Use the radio buttons to select whether to Select Host Automatically or to Select Destination Host , specifying the host using the drop-down list. Note When the Select Host Automatically option is selected, the system determines the host to which the virtual machine is migrated according to the load balancing and power management rules set up in the scheduling policy. Click OK . During migration, progress is shown in the Migration progress bar. Once migration is complete the Host column will update to display the host the virtual machine has been migrated to. 6.14.7. Setting Migration Priority Red Hat Virtualization Manager queues concurrent requests for migration of virtual machines off of a given host. The load balancing process runs every minute. Hosts already involved in a migration event are not included in the migration cycle until their migration event has completed. When there is a migration request in the queue and available hosts in the cluster to action it, a migration event is triggered in line with the load balancing policy for the cluster. You can influence the ordering of the migration queue by setting the priority of each virtual machine; for example, setting mission critical virtual machines to migrate before others. Migrations will be ordered by priority; virtual machines with the highest priority will be migrated first. Setting Migration Priority Click Compute Virtual Machines and select a virtual machine. Click Edit . Select the High Availability tab. Select Low , Medium , or High from the Priority drop-down list. Click OK . 6.14.8. Canceling Ongoing Virtual Machine Migrations A virtual machine migration is taking longer than you expected. You'd like to be sure where all virtual machines are running before you make any changes to your environment. Canceling Ongoing Virtual Machine Migrations Select the migrating virtual machine. It is displayed in Compute Virtual Machines with a status of Migrating from . Click More Actions ( ), then click Cancel Migration . The virtual machine status returns from Migrating from to Up . 6.14.9. Event and Log Notification upon Automatic Migration of Highly Available Virtual Servers When a virtual server is automatically migrated because of the high availability function, the details of an automatic migration are documented in the Events tab and in the engine log to aid in troubleshooting, as illustrated in the following examples: Example 6.4. Notification in the Events Tab of the Administration Portal Highly Available Virtual_Machine_Name failed. It will be restarted automatically. Virtual_Machine_Name was restarted on Host Host_Name Example 6.5. Notification in the Manager engine.log This log can be found on the Red Hat Virtualization Manager at /var/log/ovirt-engine/engine.log : Failed to start Highly Available VM. Attempting to restart. 
VM Name: Virtual_Machine_Name, VM Id: Virtual_Machine_ID_Number | [
"engine-config -s DefaultAutoConvergence=True",
"engine-config -s DefaultMigrationCompression=True",
"systemctl restart ovirt-engine.service"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Migrating_Virtual_Machines_Between_Hosts |
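The global auto-convergence and migration compression settings from section 6.14.2 are applied with the engine-config commands listed for this entry; the sketch below simply sequences them and restarts ovirt-engine so the new defaults take effect. It assumes you run it as root on the Red Hat Virtualization Manager machine.

```bash
# Enable both live-migration optimizations globally (cluster- and VM-level
# settings can still override them from the Administration Portal).
engine-config -s DefaultAutoConvergence=True        # let QEMU throttle vCPUs to force convergence
engine-config -s DefaultMigrationCompression=True   # enable XBZRLE migration compression

# Restart the engine service to apply the changes.
systemctl restart ovirt-engine.service
```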
Chapter 16. Troubleshooting Logging | Chapter 16. Troubleshooting Logging 16.1. Viewing OpenShift Logging status You can view the status of the Red Hat OpenShift Logging Operator and for a number of logging subsystem components. 16.1.1. Viewing the status of the Red Hat OpenShift Logging Operator You can view the status of your Red Hat OpenShift Logging Operator. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging To view the OpenShift Logging status: Get the OpenShift Logging status: USD oc get clusterlogging instance -o yaml Example output apiVersion: logging.openshift.io/v1 kind: ClusterLogging .... status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1 1 In the output, the cluster status fields appear in the status stanza. 2 Information on the Fluentd pods. 3 Information on the Elasticsearch pods, including Elasticsearch cluster health, green , yellow , or red . 4 Information on the Kibana pods. 16.1.1.1. Example condition messages The following are examples of some condition messages from the Status.Nodes section of the OpenShift Logging instance. A status message similar to the following indicates a node has exceeded the configured low watermark and no shard will be allocated to this node: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {} A status message similar to the following indicates a node has exceeded the configured high watermark and shards will be relocated to other nodes: Example output nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. 
reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {} A status message similar to the following indicates the Elasticsearch node selector in the CR does not match any nodes in the cluster: Example output Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: A status message similar to the following indicates that the requested PVC could not bind to PV: Example output Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable A status message similar to the following indicates that the Fluentd pods cannot be scheduled because the node selector did not match any nodes: Example output Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready: 16.1.2. Viewing the status of logging subsystem components You can view the status for a number of logging subsystem components. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging View the status of the logging subsystem for Red Hat OpenShift environment: USD oc describe deployment cluster-logging-operator Example output Name: cluster-logging-operator .... Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1---- View the status of the logging subsystem replica set: Get the name of a replica set: Example output USD oc get replicaset Example output NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m Get the status of the replica set: USD oc describe replicaset cluster-logging-operator-574b8987df Example output Name: cluster-logging-operator-574b8987df .... Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv---- 16.2. 
Viewing the status of the Elasticsearch log store You can view the status of the OpenShift Elasticsearch Operator and for a number of Elasticsearch components. 16.2.1. Viewing the status of the log store You can view the status of your log store. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. Procedure Change to the openshift-logging project. USD oc project openshift-logging To view the status: Get the name of the log store instance: USD oc get Elasticsearch Example output NAME AGE elasticsearch 5h9m Get the log store status: USD oc get Elasticsearch <Elasticsearch-instance> -o yaml For example: USD oc get Elasticsearch elasticsearch -n openshift-logging -o yaml The output includes information similar to the following: Example output status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: "" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all 1 In the output, the cluster status fields appear in the status stanza. 2 The status of the log store: The number of active primary shards. The number of active shards. The number of shards that are initializing. The number of log store data nodes. The total number of log store nodes. The number of pending tasks. The log store status: green , red , yellow . The number of unassigned shards. 3 Any status conditions, if present. The log store status indicates the reasons from the scheduler if a pod could not be placed. Any events related to the following conditions are shown: Container Waiting for both the log store and proxy containers. Container Terminated for both the log store and proxy containers. Pod unschedulable. Also, a condition is shown for a number of issues; see Example condition messages . 4 The log store nodes in the cluster, with upgradeStatus . 5 The log store client, data, and master pods in the cluster, listed under 'failed`, notReady , or ready state. 16.2.1.1. Example condition messages The following are examples of some condition messages from the Status section of the Elasticsearch instance. The following status message indicates that a node has exceeded the configured low watermark, and no shard will be allocated to this node. status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that a node has exceeded the configured high watermark, and shards will be relocated to other nodes. 
status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: "True" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {} The following status message indicates that the log store node selector in the CR does not match any nodes in the cluster: status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: "True" type: Unschedulable The following status message indicates that the log store CR uses a non-existent persistent volume claim (PVC). status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable The following status message indicates that your log store cluster does not have enough nodes to support the redundancy policy. status: clusterHealth: "" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: "True" type: InvalidRedundancy This status message indicates your cluster has too many control plane nodes: status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters The following status message indicates that Elasticsearch storage does not support the change you tried to make. For example: status: clusterHealth: green conditions: - lastTransitionTime: "2021-05-07T01:05:13Z" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored The reason and type fields specify the type of unsupported change: StorageClassNameChangeIgnored Unsupported change to the storage class name. StorageSizeChangeIgnored Unsupported change the storage size. StorageStructureChangeIgnored Unsupported change between ephemeral and persistent storage structures. Important If you try to configure the ClusterLogging custom resource (CR) to switch from ephemeral to persistent storage, the OpenShift Elasticsearch Operator creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the StorageStructureChangeIgnored status, you must revert the change to the ClusterLogging CR and delete the PVC. 16.2.2. Viewing the status of the log store components You can view the status for a number of the log store components. Elasticsearch indices You can view the status of the Elasticsearch indices. Get the name of an Elasticsearch pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of the indices: USD oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices Example output Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. 
green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0 Log store pods You can view the status of the pods that host the log store. Get the name of a pod: USD oc get pods --selector component=elasticsearch -o name Example output pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7 Get the status of a pod: USD oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw The output includes the following status information: Example output .... Status: Running .... Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 .... Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True .... Events: <none> Log storage pod deployment configuration You can view the status of the log store deployment configuration. Get the name of a deployment configuration: USD oc get deployment --selector component=elasticsearch -o name Example output deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3 Get the deployment configuration status: USD oc describe deployment elasticsearch-cdm-1gon-1 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable .... Events: <none> Log store replica set You can view the status of the log store replica set. Get the name of a replica set: USD oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d Get the status of the replica set: USD oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495 The output includes the following status information: Example output .... Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 .... Events: <none> 16.2.3. Elasticsearch cluster status The Grafana dashboard in the Observe section of the OpenShift Container Platform web console displays the status of the Elasticsearch cluster. 
To get the status of the OpenShift Elasticsearch cluster, visit the Grafana dashboard in the Observe section of the OpenShift Container Platform web console at <cluster_url>/monitoring/dashboards/grafana-dashboard-cluster-logging . Elasticsearch status fields eo_elasticsearch_cr_cluster_management_state Shows whether the Elasticsearch cluster is in a managed or unmanaged state. For example: eo_elasticsearch_cr_cluster_management_state{state="managed"} 1 eo_elasticsearch_cr_cluster_management_state{state="unmanaged"} 0 eo_elasticsearch_cr_restart_total Shows the number of times the Elasticsearch nodes have restarted for certificate restarts, rolling restarts, or scheduled restarts. For example: eo_elasticsearch_cr_restart_total{reason="cert_restart"} 1 eo_elasticsearch_cr_restart_total{reason="rolling_restart"} 1 eo_elasticsearch_cr_restart_total{reason="scheduled_restart"} 3 es_index_namespaces_total Shows the total number of Elasticsearch index namespaces. For example: Total number of Namespaces. es_index_namespaces_total 5 es_index_document_count Shows the number of records for each namespace. For example: es_index_document_count{namespace="namespace_1"} 25 es_index_document_count{namespace="namespace_2"} 10 es_index_document_count{namespace="namespace_3"} 5 The "Secret Elasticsearch fields are either missing or empty" message If Elasticsearch is missing the admin-cert , admin-key , logging-es.crt , or logging-es.key files, the dashboard shows a status message similar to the following example: message": "Secret \"elasticsearch\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]", "reason": "Missing Required Secrets", 16.3. Understanding logging subsystem alerts All of the logging collector alerts are listed on the Alerting UI of the OpenShift Container Platform web console. 16.3.1. Viewing logging collector alerts Alerts are shown in the OpenShift Container Platform web console, on the Alerts tab of the Alerting UI. Alerts are in one of the following states: Firing . The alert condition is true for the duration of the timeout. Click the Options menu at the end of the firing alert to view more information or silence the alert. Pending The alert condition is currently true, but the timeout has not been reached. Not Firing . The alert is not currently triggered. Procedure To view the logging subsystem and other OpenShift Container Platform alerts: In the OpenShift Container Platform console, click Observe Alerting . Click the Alerts tab. The alerts are listed, based on the filters selected. Additional resources For more information on the Alerting UI, see Managing alerts . 16.3.2. About logging collector alerts The following alerts are generated by the logging collector. You can view these alerts in the OpenShift Container Platform web console, on the Alerts page of the Alerting UI. Table 16.1. Fluentd Prometheus alerts Alert Message Description Severity FluentDHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is high, by default more than 10 in the 15 minutes. Warning FluentdNodeDown Prometheus could not scrape fluentd <instance> for more than 10m. Fluentd is reporting that Prometheus could not scrape a specific Fluentd instance. Critical FluentdQueueLengthIncreasing In the last 12h, fluentd <instance> buffer queue length constantly increased more than 1. Current value is <value>. Fluentd is reporting that the queue size is increasing. 
Critical FluentDVeryHighErrorRate <value> of records have resulted in an error by fluentd <instance>. The number of FluentD output errors is very high, by default more than 25 in the 15 minutes. Critical 16.3.3. About Elasticsearch alerting rules You can view these alerting rules in Prometheus. Table 16.2. Alerting rules Alert Description Severity ElasticsearchClusterNotHealthy The cluster health status has been RED for at least 2 minutes. The cluster does not accept writes, shards may be missing, or the master node hasn't been elected yet. Critical ElasticsearchClusterNotHealthy The cluster health status has been YELLOW for at least 20 minutes. Some shard replicas are not allocated. Warning ElasticsearchDiskSpaceRunningLow The cluster is expected to be out of disk space within the 6 hours. Critical ElasticsearchHighFileDescriptorUsage The cluster is predicted to be out of file descriptors within the hour. Warning ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is high. Alert ElasticsearchNodeDiskWatermarkReached The specified node has hit the low watermark due to low free disk space. Shards can not be allocated to this node anymore. You should consider adding more disk space to the node. Info ElasticsearchNodeDiskWatermarkReached The specified node has hit the high watermark due to low free disk space. Some shards will be re-allocated to different nodes if possible. Make sure more disk space is added to the node or drop old indices allocated to this node. Warning ElasticsearchNodeDiskWatermarkReached The specified node has hit the flood watermark due to low free disk space. Every index that has a shard allocated on this node is enforced a read-only block. The index block must be manually released when the disk use falls below the high watermark. Critical ElasticsearchJVMHeapUseHigh The JVM Heap usage on the specified node is too high. Alert ElasticsearchWriteRequestsRejectionJumps Elasticsearch is experiencing an increase in write rejections on the specified node. This node might not be keeping up with the indexing speed. Warning AggregatedLoggingSystemCPUHigh The CPU used by the system on the specified node is too high. Alert ElasticsearchProcessCPUHigh The CPU used by Elasticsearch on the specified node is too high. Alert 16.4. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging subsystem components. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift Logging. Note Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data. 16.4.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your logging subsystem, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. 
The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 16.4.2. Prerequisites The logging subsystem and Elasticsearch must be installed. 16.4.3. Collecting OpenShift Logging data You can use the oc adm must-gather CLI command to collect information about your logging subsystem. Procedure To collect logging subsystem information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the OpenShift Logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal . 16.5. Troubleshooting for Critical Alerts 16.5.1. Elasticsearch Cluster Health is Red At least one primary shard and its replicas are not allocated to a node. Troubleshooting Check the Elasticsearch cluster health and verify that the cluster status is red. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health List the nodes that have joined the cluster. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v List the Elasticsearch pods and compare them with the nodes in the command output from the step. oc -n openshift-logging get pods -l component=elasticsearch If some of the Elasticsearch nodes have not joined the cluster, perform the following steps. Confirm that Elasticsearch has an elected control plane node. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v Review the pod logs of the elected control plane node for issues. oc logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging Review the logs of nodes that have not joined the cluster for issues. oc logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging If all the nodes have joined the cluster, perform the following steps, check if the cluster is in the process of recovering. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true If there is no command output, the recovery process might be delayed or stalled by pending tasks. Check if there are pending tasks. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks If there are pending tasks, monitor their status. If their status changes and indicates that the cluster is recovering, continue waiting. The recovery time varies according to the size of the cluster and other factors. Otherwise, if the status of the pending tasks does not change, this indicates that the recovery has stalled. If it seems like the recovery has stalled, check if cluster.routing.allocation.enable is set to none . 
oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty If cluster.routing.allocation.enable is set to none , set it to all . oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{"persistent": {"cluster.routing.allocation.enable":"all"}}' Check which indices are still red. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v If any indices are still red, try to clear them by performing the following steps. Clear the cache. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty Increase the max allocation retries. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{"index.allocation.max_retries":10}' Delete all the scroll items. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE Increase the timeout. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{"index.unassigned.node_left.delayed_timeout":"10m"}' If the preceding steps do not clear the red indices, delete the indices individually. Identify the red index name. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v Delete the red index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE If there are no red indices and the cluster status is red, check for a continuous heavy processing load on a data node. Check if the Elasticsearch JVM Heap usage is high. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty In the command output, review the node_name.jvm.mem.heap_used_percent field to determine the JVM Heap usage. Check for high CPU utilization. Additional resources Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status . 16.5.2. Elasticsearch Cluster Health is Yellow Replica shards for at least one primary shard are not allocated to nodes. Troubleshooting Increase the node count by adjusting nodeCount in the ClusterLogging CR. Additional resources About the Cluster Logging custom resource Configuring persistent storage for the log store Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status . 16.5.3. Elasticsearch Node Disk Low Watermark Reached Elasticsearch does not allocate shards to nodes that reach the low watermark . Troubleshooting Identify the node on which Elasticsearch is deployed. oc -n openshift-logging get po -o wide Check if there are unassigned shards . oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards If there are unassigned shards, check the disk space on each node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done Check the nodes.node_name.fs field to determine the free disk space on that node. 
If the used disk percentage is above 85%, the node has exceeded the low watermark, and shards can no longer be allocated to this node. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource 16.5.4. Elasticsearch Node Disk High Watermark Reached Elasticsearch attempts to relocate shards away from a node that has reached the high watermark . Troubleshooting Identify the node on which Elasticsearch is deployed. oc -n openshift-logging get po -o wide Check the disk space on each node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done Check if the cluster is rebalancing. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards If the command output shows relocating shards, the High Watermark has been exceeded. The default value of the High Watermark is 90%. The shards relocate to a node with low disk usage that has not crossed any watermark threshold limits. To allocate shards to a particular node, free up some space. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource 16.5.5. 
Elasticsearch Node Disk Flood Watermark Reached Elasticsearch enforces a read-only index block on every index that has both of these conditions: One or more shards are allocated to the node. One or more disks exceed the flood stage . Troubleshooting Check the disk space of the Elasticsearch node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done Check the nodes.node_name.fs field to determine the free disk space on that node. If the used disk percentage is above 95%, it signifies that the node has crossed the flood watermark. Writing is blocked for shards allocated on this particular node. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Continue freeing up and monitoring the disk space until the used disk space drops below 90%. Then, unblock write to this particular node. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{"index.blocks.read_only_allow_delete": null}' Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource 16.5.6. Elasticsearch JVM Heap Use is High The Elasticsearch node JVM Heap memory used is above 75%. Troubleshooting Consider increasing the heap size . 16.5.7. Aggregated Logging System CPU is High System CPU usage on the node is high. Troubleshooting Check the CPU of the cluster node. Consider allocating more CPU resources to the node. 16.5.8. Elasticsearch Process CPU is High Elasticsearch process CPU usage on the node is high. Troubleshooting Check the CPU of the cluster node. Consider allocating more CPU resources to the node. 16.5.9. Elasticsearch Disk Space is Running Low The Elasticsearch Cluster is predicted to be out of disk space within the 6 hours based on current disk usage. Troubleshooting Get the disk space of the Elasticsearch node. for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done In the command output, check the nodes.node_name.fs field to determine the free disk space on that node. Try to increase the disk space on all nodes. If increasing the disk space is not possible, try adding a new data node to the cluster. If adding a new data node is problematic, decrease the total cluster redundancy policy. 
Check the current redundancyPolicy . oc -n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}' Note If you are using a ClusterLogging CR, enter: oc -n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}' If the cluster redundancyPolicy is higher than SingleRedundancy , set it to SingleRedundancy and save this change. If the preceding steps do not fix the issue, delete the old indices. Check the status of all indices on Elasticsearch. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices Identify an old index that can be deleted. Delete the index. oc exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE Additional resources Search for "redundancyPolicy" in the "Sample ClusterLogging custom resource (CR)" in About the Cluster Logging custom resource Search for "ElasticsearchDiskSpaceRunningLow" in About Elasticsearch alerting rules . Search for "Free up or increase disk space" in the Elasticsearch topic, Fix a red or yellow cluster status . 16.5.10. Elasticsearch FileDescriptor Usage is high Based on current usage trends, the predicted number of file descriptors on the node is insufficient. Troubleshooting Check and, if needed, configure the value of max_file_descriptors for each node, as described in the Elasticsearch File descriptors topic. Additional resources Search for "ElasticsearchHighFileDescriptorUsage" in About Elasticsearch alerting rules . Search for "File Descriptors In Use" in OpenShift Logging dashboards . | [
"oc project openshift-logging",
"oc get clusterlogging instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging . status: 1 collection: logs: fluentdStatus: daemonSet: fluentd 2 nodes: fluentd-2rhqp: ip-10-0-169-13.ec2.internal fluentd-6fgjh: ip-10-0-165-244.ec2.internal fluentd-6l2ff: ip-10-0-128-218.ec2.internal fluentd-54nx5: ip-10-0-139-30.ec2.internal fluentd-flpnn: ip-10-0-147-228.ec2.internal fluentd-n2frh: ip-10-0-157-45.ec2.internal pods: failed: [] notReady: [] ready: - fluentd-2rhqp - fluentd-54nx5 - fluentd-6fgjh - fluentd-6l2ff - fluentd-flpnn - fluentd-n2frh logstore: 3 elasticsearchStatus: - ShardAllocationEnabled: all cluster: activePrimaryShards: 5 activeShards: 5 initializingShards: 0 numDataNodes: 1 numNodes: 1 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterName: elasticsearch nodeConditions: elasticsearch-cdm-mkkdys93-1: nodeCount: 1 pods: client: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c data: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c master: failed: notReady: ready: - elasticsearch-cdm-mkkdys93-1-7f7c6-mjm7c visualization: 4 kibanaStatus: - deployment: kibana pods: failed: [] notReady: [] ready: - kibana-7fb4fd4cc9-f2nls replicaSets: - kibana-7fb4fd4cc9 replicas: 1",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-clientdatamaster-0-1 upgradeStatus: {}",
"nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: cluster-logging-operator upgradeStatus: {}",
"Elasticsearch Status: Shard Allocation Enabled: shard allocation unknown Cluster: Active Primary Shards: 0 Active Shards: 0 Initializing Shards: 0 Num Data Nodes: 0 Num Nodes: 0 Pending Tasks: 0 Relocating Shards: 0 Status: cluster health unknown Unassigned Shards: 0 Cluster Name: elasticsearch Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: 0/5 nodes are available: 5 node(s) didn't match node selector. Reason: Unschedulable Status: True Type: Unschedulable elasticsearch-cdm-mkkdys93-2: Node Count: 2 Pods: Client: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Data: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready: Master: Failed: Not Ready: elasticsearch-cdm-mkkdys93-1-75dd69dccd-f7f49 elasticsearch-cdm-mkkdys93-2-67c64f5f4c-n58vl Ready:",
"Node Conditions: elasticsearch-cdm-mkkdys93-1: Last Transition Time: 2019-06-26T03:37:32Z Message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) Reason: Unschedulable Status: True Type: Unschedulable",
"Status: Collection: Logs: Fluentd Status: Daemon Set: fluentd Nodes: Pods: Failed: Not Ready: Ready:",
"oc project openshift-logging",
"oc describe deployment cluster-logging-operator",
"Name: cluster-logging-operator . Conditions: Type Status Reason ---- ------ ------ Available True MinimumReplicasAvailable Progressing True NewReplicaSetAvailable . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 62m deployment-controller Scaled up replica set cluster-logging-operator-574b8987df to 1----",
"oc get replicaset",
"NAME DESIRED CURRENT READY AGE cluster-logging-operator-574b8987df 1 1 1 159m elasticsearch-cdm-uhr537yu-1-6869694fb 1 1 1 157m elasticsearch-cdm-uhr537yu-2-857b6d676f 1 1 1 156m elasticsearch-cdm-uhr537yu-3-5b6fdd8cfd 1 1 1 155m kibana-5bd5544f87 1 1 1 157m",
"oc describe replicaset cluster-logging-operator-574b8987df",
"Name: cluster-logging-operator-574b8987df . Replicas: 1 current / 1 desired Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed . Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 66m replicaset-controller Created pod: cluster-logging-operator-574b8987df-qjhqv----",
"oc project openshift-logging",
"oc get Elasticsearch",
"NAME AGE elasticsearch 5h9m",
"oc get Elasticsearch <Elasticsearch-instance> -o yaml",
"oc get Elasticsearch elasticsearch -n openshift-logging -o yaml",
"status: 1 cluster: 2 activePrimaryShards: 30 activeShards: 60 initializingShards: 0 numDataNodes: 3 numNodes: 3 pendingTasks: 0 relocatingShards: 0 status: green unassignedShards: 0 clusterHealth: \"\" conditions: [] 3 nodes: 4 - deploymentName: elasticsearch-cdm-zjf34ved-1 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-2 upgradeStatus: {} - deploymentName: elasticsearch-cdm-zjf34ved-3 upgradeStatus: {} pods: 5 client: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt data: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt master: failed: [] notReady: [] ready: - elasticsearch-cdm-zjf34ved-1-6d7fbf844f-sn422 - elasticsearch-cdm-zjf34ved-2-dfbd988bc-qkzjz - elasticsearch-cdm-zjf34ved-3-c8f566f7c-t7zkt shardAllocationEnabled: all",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T15:57:22Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be not be allocated on this node. reason: Disk Watermark Low status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-03-15T16:04:45Z message: Disk storage usage for node is 27.5gb (36.74%). Shards will be relocated from this node. reason: Disk Watermark High status: \"True\" type: NodeStorage deploymentName: example-elasticsearch-cdm-0-1 upgradeStatus: {}",
"status: nodes: - conditions: - lastTransitionTime: 2019-04-10T02:26:24Z message: '0/8 nodes are available: 8 node(s) didn''t match node selector.' reason: Unschedulable status: \"True\" type: Unschedulable",
"status: nodes: - conditions: - last Transition Time: 2019-04-10T05:55:51Z message: pod has unbound immediate PersistentVolumeClaims (repeated 5 times) reason: Unschedulable status: True type: Unschedulable",
"status: clusterHealth: \"\" conditions: - lastTransitionTime: 2019-04-17T20:01:31Z message: Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles reason: Invalid Settings status: \"True\" type: InvalidRedundancy",
"status: clusterHealth: green conditions: - lastTransitionTime: '2019-04-17T20:12:34Z' message: >- Invalid master nodes count. Please ensure there are no more than 3 total nodes with master roles reason: Invalid Settings status: 'True' type: InvalidMasters",
"status: clusterHealth: green conditions: - lastTransitionTime: \"2021-05-07T01:05:13Z\" message: Changing the storage structure for a custom resource is not supported reason: StorageStructureChangeIgnored status: 'True' type: StorageStructureChangeIgnored",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc exec elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -- indices",
"Defaulting container name to elasticsearch. Use 'oc describe pod/elasticsearch-cdm-4vjor49p-2-6d4d7db474-q2w7z -n openshift-logging' to see all of the containers in this pod. green open infra-000002 S4QANnf1QP6NgCegfnrnbQ 3 1 119926 0 157 78 green open audit-000001 8_EQx77iQCSTzFOXtxRqFw 3 1 0 0 0 0 green open .security iDjscH7aSUGhIdq0LheLBQ 1 1 5 0 0 0 green open .kibana_-377444158_kubeadmin yBywZ9GfSrKebz5gWBZbjw 3 1 1 0 0 0 green open infra-000001 z6Dpe__ORgiopEpW6Yl44A 3 1 871000 0 874 436 green open app-000001 hIrazQCeSISewG3c2VIvsQ 3 1 2453 0 3 1 green open .kibana_1 JCitcBMSQxKOvIq6iQW6wg 1 1 0 0 0 0 green open .kibana_-1595131456_user1 gIYFIEGRRe-ka0W3okS-mQ 3 1 1 0 0 0",
"oc get pods --selector component=elasticsearch -o name",
"pod/elasticsearch-cdm-1godmszn-1-6f8495-vp4lw pod/elasticsearch-cdm-1godmszn-2-5769cf-9ms2n pod/elasticsearch-cdm-1godmszn-3-f66f7d-zqkz7",
"oc describe pod elasticsearch-cdm-1godmszn-1-6f8495-vp4lw",
". Status: Running . Containers: elasticsearch: Container ID: cri-o://b7d44e0a9ea486e27f47763f5bb4c39dfd2 State: Running Started: Mon, 08 Jun 2020 10:17:56 -0400 Ready: True Restart Count: 0 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . proxy: Container ID: cri-o://3f77032abaddbb1652c116278652908dc01860320b8a4e741d06894b2f8f9aa1 State: Running Started: Mon, 08 Jun 2020 10:18:38 -0400 Ready: True Restart Count: 0 . Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True . Events: <none>",
"oc get deployment --selector component=elasticsearch -o name",
"deployment.extensions/elasticsearch-cdm-1gon-1 deployment.extensions/elasticsearch-cdm-1gon-2 deployment.extensions/elasticsearch-cdm-1gon-3",
"oc describe deployment elasticsearch-cdm-1gon-1",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Conditions: Type Status Reason ---- ------ ------ Progressing Unknown DeploymentPaused Available True MinimumReplicasAvailable . Events: <none>",
"oc get replicaSet --selector component=elasticsearch -o name replicaset.extensions/elasticsearch-cdm-1gon-1-6f8495 replicaset.extensions/elasticsearch-cdm-1gon-2-5769cf replicaset.extensions/elasticsearch-cdm-1gon-3-f66f7d",
"oc describe replicaSet elasticsearch-cdm-1gon-1-6f8495",
". Containers: elasticsearch: Image: registry.redhat.io/openshift-logging/elasticsearch6-rhel8@sha256:4265742c7cdd85359140e2d7d703e4311b6497eec7676957f455d6908e7b1c25 Readiness: exec [/usr/share/elasticsearch/probe/readiness.sh] delay=10s timeout=30s period=5s #success=1 #failure=3 . Events: <none>",
"eo_elasticsearch_cr_cluster_management_state{state=\"managed\"} 1 eo_elasticsearch_cr_cluster_management_state{state=\"unmanaged\"} 0",
"eo_elasticsearch_cr_restart_total{reason=\"cert_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"rolling_restart\"} 1 eo_elasticsearch_cr_restart_total{reason=\"scheduled_restart\"} 3",
"Total number of Namespaces. es_index_namespaces_total 5",
"es_index_document_count{namespace=\"namespace_1\"} 25 es_index_document_count{namespace=\"namespace_2\"} 10 es_index_document_count{namespace=\"namespace_3\"} 5",
"message\": \"Secret \\\"elasticsearch\\\" fields are either missing or empty: [admin-cert, admin-key, logging-es.crt, logging-es.key]\", \"reason\": \"Missing Required Secrets\",",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/nodes?v",
"-n openshift-logging get pods -l component=elasticsearch",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/master?v",
"logs <elasticsearch_master_pod_name> -c elasticsearch -n openshift-logging",
"logs <elasticsearch_node_name> -c elasticsearch -n openshift-logging",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/recovery?active_only=true",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- health |grep number_of_pending_tasks",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/settings?pretty -X PUT -d '{\"persistent\": {\"cluster.routing.allocation.enable\":\"all\"}}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_cache/clear?pretty",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.allocation.max_retries\":10}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_search/scroll/_all -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name>/_settings?pretty -X PUT -d '{\"index.unassigned.node_left.delayed_timeout\":\"10m\"}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cat/indices?v",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_red_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_nodes/stats?pretty",
"-n openshift-logging get po -o wide",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep unassigned_shards",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"-n openshift-logging get po -o wide",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_cluster/health?pretty | grep relocating_shards",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=_all/_settings?pretty -X PUT -d '{\"index.blocks.read_only_allow_delete\": null}'",
"for pod in `oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'`; do echo USDpod; oc -n openshift-logging exec -c elasticsearch USDpod -- df -h /elasticsearch/persistent; done",
"-n openshift-logging get es elasticsearch -o jsonpath='{.spec.redundancyPolicy}'",
"-n openshift-logging get cl -o jsonpath='{.items[*].spec.logStore.elasticsearch.redundancyPolicy}'",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- indices",
"exec -n openshift-logging -c elasticsearch <elasticsearch_pod_name> -- es_util --query=<elasticsearch_index_name> -X DELETE"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/troubleshooting-logging |
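The watermark checks in the preceding troubleshooting record can be run in one pass over all Elasticsearch data nodes. The following shell sketch is not part of the original procedure: it assumes the default openshift-logging namespace, the component=elasticsearch pod label, and the /elasticsearch/persistent mount point used by the commands above, and it compares the df output against the default 85%, 90%, and 95% watermark thresholds quoted in the text.
# Sketch: report each Elasticsearch data node's disk usage against the default watermarks.
for pod in $(oc -n openshift-logging get po -l component=elasticsearch -o jsonpath='{.items[*].metadata.name}'); do
  # Take the second line of the df output and strip the % sign from the "Use%" column.
  used=$(oc -n openshift-logging exec -c elasticsearch "$pod" -- df -h /elasticsearch/persistent | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
  echo "$pod: ${used}% of /elasticsearch/persistent used"
  if [ "$used" -ge 95 ]; then
    echo "  above the flood watermark (95%): writes are blocked on this node"
  elif [ "$used" -ge 90 ]; then
    echo "  above the high watermark (90%): shards are relocated away from this node"
  elif [ "$used" -ge 85 ]; then
    echo "  above the low watermark (85%): no new shards are allocated to this node"
  fi
done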
Chapter 11. Migrating | Chapter 11. Migrating Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project. The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications. Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments. 11.1. Migrating with sidecars The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This endpoint points to the Gateway of a TempoStack instance deployed by using the <example> Tempo Operator. Create a service account for running your application. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar Create a cluster role for the permissions needed by some processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status. 
Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object. Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object. Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces. 11.2. Migrating without sidecars You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure OpenTelemetry Collector deployment. Create the project where the OpenTelemetry Collector will be deployed. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account for running the OpenTelemetry Collector instance. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Create a cluster role for setting the required permissions for the processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 Permissions for the pods and namespaces resources are required for the k8sattributesprocessor . 2 Permissions for infrastructures and infrastructures/status are required for resourcedetectionprocessor . Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the OpenTelemetry Collector instance. Note This collector will export traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and place here the correct endpoint. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-example-gateway:8090" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] Point your tracing endpoint to the OpenTelemetry Operator. 
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint. Example of exporting traces by using the jaegerexporter with Golang exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1 1 The URL points to the OpenTelemetry Collector API endpoint. | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/red_hat_build_of_opentelemetry/dist-tracing-otel-migrating |
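The sidecar migration steps above are described only in prose: remove the Jaeger injection annotation, add the OpenTelemetry injection annotation, and switch to the service account created for the sidecar. The following is a minimal oc sketch of those steps and is not taken from the original procedure; <namespace> and <deployment_name> are placeholders for your application, and otel-collector-sidecar is the service account from the record above. Setting an annotation to null in a JSON merge patch removes it.
# Sketch: swap the injected Jaeger agent for the OpenTelemetry sidecar on an existing Deployment.
oc -n <namespace> patch deployment <deployment_name> --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.opentelemetry.io/inject":"true","sidecar.jaegertracing.io/inject":null}}}}}'
# Run the application pods under the service account created for the sidecar collector.
oc -n <namespace> set serviceaccount deployment <deployment_name> otel-collector-sidecar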
Chapter 1. Overview | Chapter 1. Overview From the perspective of a Ceph client, interacting with the Ceph storage cluster is remarkably simple: Connect to the Cluster Create a Pool I/O Context This remarkably simple interface is how a Ceph client selects one of the storage strategies you define. Storage strategies are invisible to the Ceph client in all but storage capacity and performance. The diagram below shows the logical data flow starting from the client into the Red Hat Ceph Storage cluster. 1.1. What are storage strategies? A storage strategy is a method of storing data that serves a particular use case. For example, if you need to store volumes and images for a cloud platform like OpenStack, you might choose to store data on reasonably performant SAS drives with SSD-based journals. By contrast, if you need to store object data for an S3- or Swift-compliant gateway, you might choose to use something more economical, like SATA drives. Ceph can accommodate both scenarios in the same Ceph cluster, but you need a means of providing the SAS/SSD storage strategy to the cloud platform (for example, Glance and Cinder in OpenStack), and a means of providing SATA storage for your object store. Storage strategies include the storage media (hard drives, SSDs, and the rest), the CRUSH maps that set up performance and failure domains for the storage media, the number of placement groups, and the pool interface. Ceph supports multiple storage strategies. Use cases, cost/benefit performance tradeoffs and data durability are the primary considerations that drive storage strategies. Use Cases: Ceph provides massive storage capacity, and it supports numerous use cases. For example, the Ceph Block Device client is a leading storage backend for cloud platforms like OpenStack- providing limitless storage for volumes and images with high performance features like copy-on-write cloning. Likewise, Ceph can provide container-based storage for OpenShift environments. By contrast, the Ceph Object Gateway client is a leading storage backend for cloud platforms that provides RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, video and other data. Cost/Benefit of Performance: Faster is better. Bigger is better. High durability is better. However, there is a price for each superlative quality, and a corresponding cost/benefit trade off. Consider the following use cases from a performance perspective: SSDs can provide very fast storage for relatively small amounts of data and journaling. Storing a database or object index might benefit from a pool of very fast SSDs, but prove too expensive for other data. SAS drives with SSD journaling provide fast performance at an economical price for volumes and images. SATA drives without SSD journaling provide cheap storage with lower overall performance. When you create a CRUSH hierarchy of OSDs, you need to consider the use case and an acceptable cost/performance trade off. Durability: In large scale clusters, hardware failure is an expectation, not an exception. However, data loss and service interruption remain unacceptable. For this reason, data durability is very important. Ceph addresses data durability with multiple deep copies of an object or with erasure coding and multiple coding chunks. Multiple copies or multiple coding chunks present an additional cost/benefit tradeoff: it's cheaper to store fewer copies or coding chunks, but it might lead to the inability to service write requests in a degraded state. 
Generally, one object with two additional copies (that is, size = 3 ) or two coding chunks might allow a cluster to service writes in a degraded state while the cluster recovers. The CRUSH algorithm aids this process by ensuring that Ceph stores additional copies or coding chunks in different locations within the cluster. This ensures that the failure of a single storage device or node doesn't lead to a loss of all of the copies or coding chunks necessary to preclude data loss. You can capture use cases, cost/benefit performance tradeoffs and data durability in a storage strategy and present it to a Ceph client as a storage pool. Important Ceph's object copies or coding chunks make RAID obsolete. Do not use RAID, because Ceph already handles data durability, a degraded RAID has a negative impact on performance, and recovering data using RAID is substantially slower than using deep copies or erasure coding chunks. 1.2. Configuring storage strategies Configuring storage strategies is about assigning Ceph OSDs to a CRUSH hierarchy, defining the number of placement groups for a pool, and creating a pool. The general steps are: Define a Storage Strategy: Storage strategies require you to analyze your use case, cost/benefit performance tradeoffs and data durability. Then, you create OSDs suitable for that use case. For example, you can create SSD-backed OSDs for a high performance pool; SAS drive/SSD journal-backed OSDs for high-performance block device volumes and images; or, SATA-backed OSDs for low cost storage. Ideally, each OSD for a use case should have the same hardware configuration so that you have a consistent performance profile. Define a CRUSH Hierarchy: Ceph rules select a node, usually the root , in a CRUSH hierarchy, and identify the appropriate OSDs for storing placement groups and the objects they contain. You must create a CRUSH hierarchy and a CRUSH rule for your storage strategy. CRUSH hierarchies get assigned directly to a pool by the CRUSH rule setting. Calculate Placement Groups: Ceph shards a pool into placement groups. You do not have to manually set the number of placement groups for your pool. PG autoscaler sets an appropriate number of placement groups for your pool that remains within a healthy maximum number of placement groups in the event that you assign multiple pools to the same CRUSH rule. Create a Pool: Finally, you must create a pool and determine whether it uses replicated or erasure-coded storage. You must set the number of placement groups for the pool, the rule for the pool and the durability, such as size or K+M coding chunks. Remember, the pool is the Ceph client's interface to the storage cluster, but the storage strategy is completely transparent to the Ceph client, except for capacity and performance. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/storage_strategies_guide/overview_strategy |
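As a rough illustration of turning a storage strategy into pools, the following ceph CLI sketch creates one replicated, SSD-backed pool with size 3 and one erasure-coded pool with k=4 and m=2 coding chunks. All rule, profile, and pool names, the device class, and the placement-group counts are example values that do not come from this document; in practice the PG autoscaler can manage the placement-group numbers as described above.
# Sketch: map two storage strategies to pools (example names and values only).
ceph osd crush rule create-replicated ssd-rule default host ssd          # CRUSH rule restricted to SSD-class OSDs
ceph osd pool create cloud-volumes 128 128 replicated ssd-rule           # fast pool for volumes and images
ceph osd pool set cloud-volumes size 3                                   # one object plus two additional copies
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
ceph osd pool create object-archive 128 128 erasure ec-4-2               # economical pool for object data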
Chapter 3. User tasks | Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.17 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: USD oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similar to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The page allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to enable additional users with this ability, project administrators can add the role using the following command: USD oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. About Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 
As a user with the proper permissions, you can install an Operator from OperatorHub by using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub by using the web console You can install and subscribe to an Operator from OperatorHub by using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page, configure your Operator installation: If you want to install a specific version of an Operator, select an Update channel and Version from the lists. You can browse the various versions of an Operator across any channels it might have, view the metadata for that channel and version, and select the exact version you want to install. Note The version selection defaults to the latest version for the channel selected. If the latest version for the channel is selected, the Automatic approval strategy is enabled by default. Otherwise, Manual approval is required when not installing the latest version for the selected channel. Installing an Operator with Manual approval causes all Operators installed within the namespace to function with the Manual approval strategy and all Operators are updated together. If you want to update Operators independently, install Operators into separate namespaces. Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. For clusters on cloud providers with token authentication enabled: If the cluster uses AWS Security Token Service ( STS Mode in the web console), enter the Amazon Resource Name (ARN) of the AWS IAM role of your service account in the role ARN field. To create the role's ARN, follow the procedure described in Preparing AWS account . 
If the cluster uses Microsoft Entra Workload ID ( Workload Identity / Federated Identity Mode in the web console), add the client ID, tenant ID, and subscription ID in the appropriate fields. If the cluster uses Google Cloud Platform Workload Identity ( GCP Workload Identity / Federated Identity Mode in the web console), add the project number, pool ID, provider ID, and service account email in the appropriate fields. For Update approval , select either the Automatic or Manual approval strategy. Important If the web console shows that the cluster uses AWS STS, Microsoft Entra Workload ID, or GCP Workload Identity, you must set Update approval to Manual . Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update. Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster: If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. Verification After the upgrade status of the subscription is Up to date , select Operators Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should eventually resolve to Succeeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to Succeeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. When the Operator is installed, the metadata indicates which channel and version are installed. Note The Channel and Version dropdown menus are still available for viewing other version metadata in this catalog context. 3.2.4. Installing from OperatorHub by using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub by using the CLI. Use the oc command to create or update a Subscription object. For SingleNamespace install mode, you must also ensure an appropriate Operator group exists in the related namespace. An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. Tip In most cases, the web console method of this procedure is preferred because it automates tasks in the background, such as handling the creation of OperatorGroup and Subscription objects automatically when choosing SingleNamespace mode. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You have installed the OpenShift CLI ( oc ). Procedure View the list of Operators available to the cluster from OperatorHub: USD oc get packagemanifests -n openshift-marketplace Example 3.1. 
Example output NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m # ... couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m # ... etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m # ... Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: USD oc describe packagemanifests <operator_name> -n openshift-marketplace Example 3.2. Example output # ... Kind: PackageManifest # ... Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces # ... Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 # ... Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4 1 Indicates which install modes are supported. 2 3 Example channel names. 4 The channel selected by default if one is not specified. Tip You can print an Operator's version and channel information in YAML format by running the following command: USD oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml If more than one catalog is installed in a namespace, run the following command to look up the available versions and channels of an Operator from a specific catalog: USD oc get packagemanifest \ --selector=catalog=<catalogsource_name> \ --field-selector metadata.name=<operator_name> \ -n <catalog_namespace> -o yaml Important If you do not specify the Operator's catalog, running the oc get packagemanifest and oc describe packagemanifest commands might return a package from an unexpected catalog if the following conditions are met: Multiple catalogs are installed in the same namespace. The catalogs contain the same Operators or Operators with the same name. If the Operator you intend to install supports the AllNamespaces install mode, and you choose to use this mode, skip this step, because the openshift-operators namespace already has an appropriate Operator group in place by default, called global-operators . If the Operator you intend to install supports the SingleNamespace install mode, and you choose to use this mode, you must ensure an appropriate Operator group exists in the related namespace. If one does not exist, you can create one by following these steps: Important You can only have one Operator group per namespace. For more information, see "Operator groups". Create an OperatorGroup object YAML file, for example operatorgroup.yaml , for SingleNamespace install mode: Example OperatorGroup object for SingleNamespace install mode apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2 1 2 For SingleNamespace install mode, use the same <namespace> value for both the metadata.namespace and spec.targetNamespaces fields.
Create the OperatorGroup object: USD oc apply -f operatorgroup.yaml Create a Subscription object to subscribe a namespace to an Operator: Create a YAML file for the Subscription object, for example subscription.yaml : Note If you want to subscribe to a specific version of an Operator, set the startingCSV field to the desired version and set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For details, see the following "Example Subscription object with a specific starting Operator version". Example 3.3. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: "-v=10" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: "Exists" resources: 11 requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" nodeSelector: 12 foo: bar 1 For default AllNamespaces install mode usage, specify the openshift-operators namespace. Alternatively, you can specify a custom global namespace, if you have created one. For SingleNamespace install mode usage, specify the relevant single namespace. 2 Name of the channel to subscribe to. 3 Name of the Operator to subscribe to. 4 Name of the catalog source that provides the Operator. 5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. 6 The env parameter defines a list of environment variables that must exist in all containers in the pod created by OLM. 7 The envFrom parameter defines a list of sources to populate environment variables in the container. 8 The volumes parameter defines a list of volumes that must exist on the pod created by OLM. 9 The volumeMounts parameter defines a list of volume mounts that must exist in all containers in the pod created by OLM. If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator. 10 The tolerations parameter defines a list of tolerations for the pod created by OLM. 11 The resources parameter defines resource constraints for all the containers in the pod created by OLM. 12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM. Example 3.4. Example Subscription object with a specific starting Operator version apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2 1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog. This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. 
For clusters on cloud providers with token authentication enabled, such as Amazon Web Services (AWS) Security Token Service (STS), Microsoft Entra Workload ID, or Google Cloud Platform Workload Identity, configure your Subscription object by following these steps: Ensure the Subscription object is set to manual update approvals: Example 3.5. Example Subscription object with manual update approvals kind: Subscription # ... spec: installPlanApproval: Manual 1 1 Subscriptions with automatic approvals for updates are not recommended because there might be permission changes to make before updating. Subscriptions with manual approvals for updates ensure that administrators have the opportunity to verify the permissions of the later version, take any necessary steps, and then update. Include the relevant cloud provider-specific fields in the Subscription object's config section: If the cluster is in AWS STS mode, include the following fields: Example 3.6. Example Subscription object with AWS STS variables kind: Subscription # ... spec: config: env: - name: ROLEARN value: "<role_arn>" 1 1 Include the role ARN details. If the cluster is in Workload ID mode, include the following fields: Example 3.7. Example Subscription object with Workload ID variables kind: Subscription # ... spec: config: env: - name: CLIENTID value: "<client_id>" 1 - name: TENANTID value: "<tenant_id>" 2 - name: SUBSCRIPTIONID value: "<subscription_id>" 3 1 Include the client ID. 2 Include the tenant ID. 3 Include the subscription ID. If the cluster is in GCP Workload Identity mode, include the following fields: Example 3.8. Example Subscription object with GCP Workload Identity variables kind: Subscription # ... spec: config: env: - name: AUDIENCE value: "<audience_url>" 1 - name: SERVICE_ACCOUNT_EMAIL value: "<service_account_email>" 2 where: <audience> Created in GCP by the administrator when they set up GCP Workload Identity, the AUDIENCE value must be a preformatted URL in the following format: //iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id> <service_account_email> The SERVICE_ACCOUNT_EMAIL value is a GCP service account email that is impersonated during Operator operation, for example: <service_account_name>@<project_id>.iam.gserviceaccount.com Create the Subscription object by running the following command: USD oc apply -f subscription.yaml If you set the installPlanApproval field to Manual , manually approve the pending install plan to complete the Operator installation. For more information, see "Manually approving a pending Operator update". At this point, OLM is now aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verification Check the status of the Subscription object for your installed Operator by running the following command: USD oc describe subscription <subscription_name> -n <namespace> If you created an Operator group for SingleNamespace install mode, check the status of the OperatorGroup object by running the following command: USD oc describe operatorgroup <operatorgroup_name> -n <namespace> Additional resources Operator groups Channel names Additional resources Manually approving a pending Operator update | [
"oc get csv",
"oc policy add-role-to-user edit <user> -n <target_project>",
"oc get packagemanifests -n openshift-marketplace",
"NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m",
"oc describe packagemanifests <operator_name> -n openshift-marketplace",
"Kind: PackageManifest Install Modes: 1 Supported: true Type: OwnNamespace Supported: true Type: SingleNamespace Supported: false Type: MultiNamespace Supported: true Type: AllNamespaces Entries: Name: example-operator.v3.7.11 Version: 3.7.11 Name: example-operator.v3.7.10 Version: 3.7.10 Name: stable-3.7 2 Entries: Name: example-operator.v3.8.5 Version: 3.8.5 Name: example-operator.v3.8.4 Version: 3.8.4 Name: stable-3.8 3 Default Channel: stable-3.8 4",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o yaml",
"oc get packagemanifest --selector=catalog=<catalogsource_name> --field-selector metadata.name=<operator_name> -n <catalog_namespace> -o yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> 1 spec: targetNamespaces: - <namespace> 2",
"oc apply -f operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: <namespace_per_install_mode> 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: <catalog_name> 4 sourceNamespace: <catalog_source_namespace> 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-operator spec: channel: stable-3.7 installPlanApproval: Manual 1 name: example-operator source: custom-operators sourceNamespace: openshift-marketplace startingCSV: example-operator.v3.7.10 2",
"kind: Subscription spec: installPlanApproval: Manual 1",
"kind: Subscription spec: config: env: - name: ROLEARN value: \"<role_arn>\" 1",
"kind: Subscription spec: config: env: - name: CLIENTID value: \"<client_id>\" 1 - name: TENANTID value: \"<tenant_id>\" 2 - name: SUBSCRIPTIONID value: \"<subscription_id>\" 3",
"kind: Subscription spec: config: env: - name: AUDIENCE value: \"<audience_url>\" 1 - name: SERVICE_ACCOUNT_EMAIL value: \"<service_account_email>\" 2",
"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>",
"<service_account_name>@<project_id>.iam.gserviceaccount.com",
"oc apply -f subscription.yaml",
"oc describe subscription <subscription_name> -n <namespace>",
"oc describe operatorgroup <operatorgroup_name> -n <namespace>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operators/user-tasks |
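As a consolidated illustration of the token-authentication configuration described above, the following is a minimal sketch that combines manual install plan approval with the AWS STS ROLEARN variable in a single Subscription object. The namespace, channel, catalog, and role ARN values are placeholders, not prescribed names:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-namespace
spec:
  channel: stable
  installPlanApproval: Manual   # verify the permissions of the new version before approving each update
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  config:
    env:
    - name: ROLEARN             # AWS STS mode; substitute the Workload ID or GCP variables shown above where applicable
      value: "arn:aws:iam::123456789012:role/example-role"

The same config.env pattern carries the Microsoft Entra Workload ID and GCP Workload Identity variables described earlier.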
12.6.2. Deleting a Storage Pool Using virt-manager | 12.6.2. Deleting a Storage Pool Using virt-manager This procedure demonstrates how to delete a storage pool. To avoid any issues with other guests using the same pool, it is best to stop the storage pool and release any resources in use by it. To do this, select the storage pool you want to stop and click the red X icon at the bottom of the Storage window. Figure 12.28. Stop Icon Delete the storage pool by clicking the Trash can icon. This icon is only enabled if you stop the storage pool first. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/del-stor-pool-nfs |
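The same cleanup can also be performed from the command line with virsh, which may be preferable on headless hosts. This is a minimal sketch that assumes a storage pool named guest_images:

# Stop (destroy) the running storage pool and release its resources
virsh pool-destroy guest_images
# Remove the persistent pool definition
virsh pool-undefine guest_images

Note that destroying a pool only deactivates it; the data on the underlying storage is not erased.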
Getting started | Getting started Red Hat OpenShift Service on AWS 4 Setting up clusters and accounts Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/getting_started/index |
Chapter 2. Understanding Operators | Chapter 2. Understanding Operators 2.1. What are Operators? Conceptually, Operators take human operational knowledge and encode it into software that is more easily shared with consumers. Operators are pieces of software that ease the operational complexity of running another piece of software. They act like an extension of the software vendor's engineering team, monitoring a Kubernetes environment (such as OpenShift Dedicated) and using its current state to make decisions in real time. Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time. More technically, Operators are a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. To be able to make the most of Kubernetes, you require a set of cohesive APIs to extend in order to service and manage your apps that run on Kubernetes. Think of Operators as the runtime that manages this type of app on Kubernetes. 2.1.1. Why use Operators? Operators provide: Repeatability of installation and upgrade. Constant health checks of every system component. Over-the-air (OTA) updates for OpenShift components and ISV content. A place to encapsulate knowledge from field engineers and spread it to all users, not just one or two. Why deploy on Kubernetes? Kubernetes (and by extension, OpenShift Dedicated) contains all of the primitives needed to build complex distributed systems - secret handling, load balancing, service discovery, autoscaling - that work across on-premises and cloud providers. Why manage your app with Kubernetes APIs and kubectl tooling? These APIs are feature rich, have clients for all platforms and plug into the cluster's access control/auditing. An Operator uses the Kubernetes extension mechanism, custom resource definitions (CRDs), so your custom object, for example MongoDB , looks and acts just like the built-in, native Kubernetes objects. How do Operators compare with service brokers? A service broker is a step towards programmatic discovery and deployment of an app. However, because it is not a long running process, it cannot execute Day 2 operations like upgrade, failover, or scaling. Customizations and parameterization of tunables are provided at install time, versus an Operator that is constantly watching the current state of your cluster. Off-cluster services are a good match for a service broker, although Operators exist for these as well. 2.1.2. Operator Framework The Operator Framework is a family of tools and capabilities to deliver on the customer experience described above. It is not just about writing code; testing, delivering, and updating Operators is just as important. The Operator Framework components consist of open source tools to tackle these problems: Operator SDK The Operator SDK assists Operator authors in bootstrapping, building, testing, and packaging their own Operator based on their expertise without requiring knowledge of Kubernetes API complexities. Operator Lifecycle Manager Operator Lifecycle Manager (OLM) controls the installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. It is deployed by default in OpenShift Dedicated 4. 
Operator Registry The Operator Registry stores cluster service versions (CSVs) and custom resource definitions (CRDs) for creation in a cluster and stores Operator metadata about packages and channels. It runs in a Kubernetes or OpenShift cluster to provide this Operator catalog data to OLM. OperatorHub OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift Dedicated. These tools are designed to be composable, so you can use any that are useful to you. 2.1.3. Operator maturity model The level of sophistication of the management logic encapsulated within an Operator can vary. This logic is also in general highly dependent on the type of the service represented by the Operator. One can however generalize the scale of the maturity of the encapsulated operations of an Operator for certain set of capabilities that most Operators can include. To this end, the following Operator maturity model defines five phases of maturity for generic Day 2 operations of an Operator: Figure 2.1. Operator maturity model The above model also shows how these capabilities can best be developed through the Helm, Go, and Ansible capabilities of the Operator SDK. 2.2. Operator Framework packaging format This guide outlines the packaging format for Operators supported by Operator Lifecycle Manager (OLM) in OpenShift Dedicated. 2.2.1. Bundle format The bundle format for Operators is a packaging format introduced by the Operator Framework. To improve scalability and to better enable upstream users hosting their own catalogs, the bundle format specification simplifies the distribution of Operator metadata. An Operator bundle represents a single version of an Operator. On-disk bundle manifests are containerized and shipped as a bundle image , which is a non-runnable container image that stores the Kubernetes manifests and Operator metadata. Storage and distribution of the bundle image is then managed using existing container tools like podman and docker and container registries such as Quay. Operator metadata can include: Information that identifies the Operator, for example its name and version. Additional information that drives the UI, for example its icon and some example custom resources (CRs). Required and provided APIs. Related images. When loading manifests into the Operator Registry database, the following requirements are validated: The bundle must have at least one channel defined in the annotations. Every bundle has exactly one cluster service version (CSV). If a CSV owns a custom resource definition (CRD), that CRD must exist in the bundle. 2.2.1.1. Manifests Bundle manifests refer to a set of Kubernetes manifests that define the deployment and RBAC model of the Operator. A bundle includes one CSV per directory and typically the CRDs that define the owned APIs of the CSV in its /manifests directory. 
Example bundle format layout etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml Additionally supported objects The following object types can also be optionally included in the /manifests directory of a bundle: Supported optional object types ClusterRole ClusterRoleBinding ConfigMap ConsoleCLIDownload ConsoleLink ConsoleQuickStart ConsoleYamlSample PodDisruptionBudget PriorityClass PrometheusRule Role RoleBinding Secret Service ServiceAccount ServiceMonitor VerticalPodAutoscaler When these optional objects are included in a bundle, Operator Lifecycle Manager (OLM) can create them from the bundle and manage their lifecycle along with the CSV: Lifecycle for optional objects When the CSV is deleted, OLM deletes the optional object. When the CSV is upgraded: If the name of the optional object is the same, OLM updates it in place. If the name of the optional object has changed between versions, OLM deletes and recreates it. 2.2.1.2. Annotations A bundle also includes an annotations.yaml file in its /metadata directory. This file defines higher level aggregate data that helps describe the format and package information about how the bundle should be added into an index of bundles: Example annotations.yaml annotations: operators.operatorframework.io.bundle.mediatype.v1: "registry+v1" 1 operators.operatorframework.io.bundle.manifests.v1: "manifests/" 2 operators.operatorframework.io.bundle.metadata.v1: "metadata/" 3 operators.operatorframework.io.bundle.package.v1: "test-operator" 4 operators.operatorframework.io.bundle.channels.v1: "beta,stable" 5 operators.operatorframework.io.bundle.channel.default.v1: "stable" 6 1 The media type or format of the Operator bundle. The registry+v1 format means it contains a CSV and its associated Kubernetes objects. 2 The path in the image to the directory that contains the Operator manifests. This label is reserved for future use and currently defaults to manifests/ . The value manifests.v1 implies that the bundle contains Operator manifests. 3 The path in the image to the directory that contains metadata files about the bundle. This label is reserved for future use and currently defaults to metadata/ . The value metadata.v1 implies that this bundle has Operator metadata. 4 The package name of the bundle. 5 The list of channels the bundle is subscribing to when added into an Operator Registry. 6 The default channel an Operator should be subscribed to when installed from a registry. Note In case of a mismatch, the annotations.yaml file is authoritative because the on-cluster Operator Registry that relies on these annotations only has access to this file. 2.2.1.3. Dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . 
olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 Additional resources Operator Lifecycle Manager dependency resolution 2.2.1.4. About the opm CLI The opm CLI tool is provided by the Operator Framework for use with the Operator bundle format. This tool allows you to create and maintain catalogs of Operators from a list of Operator bundles that are similar to software repositories. The result is a container image which can be stored in a container registry and then installed on a cluster. A catalog contains a database of pointers to Operator manifest content that can be queried through an included API that is served when the container image is run. On OpenShift Dedicated, Operator Lifecycle Manager (OLM) can reference the image in a catalog source, defined by a CatalogSource object, which polls the image at regular intervals to enable frequent updates to installed Operators on the cluster. See CLI tools for steps on installing the opm CLI. 2.2.2. Highlights File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade paths Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. 
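As a minimal sketch of the composition workflow described above (the directory names are illustrative), two existing file-based catalogs can be combined and then checked for duplicates:

# Combine two existing file-based catalogs into a new composite catalog
mkdir catalogC
cp -r catalogA catalogB catalogC/
# opm validate fails if the combined catalog contains duplicate packages or bundles
opm validate catalogC

If validation reports duplicates, the overlapping package must be removed from one of the source catalogs before the composite catalog is published.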
Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade paths. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Dedicated releases, the major benefit is that catalog maintainers have this capability as well. Important As of OpenShift Dedicated 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Dedicated 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs . 2.2.2.1. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog can also be included in a parent catalog by copying it into the parent catalog's root directory. 2.2.2.2. 
Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 2.2.2.2.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. This includes its name, description, default channel, and icon. Example 2.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 2.2.2.2.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade paths for those bundles. If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 2.2. olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are no longer installable by users with the spec.startingCSV property of Subscription objects. You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces fields. Ensure that the replaces field points to the immediate former version of the Operator version in question. 2.2.2.2.3. olm.bundle schema Example 2.3.
olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 2.2.2.2.4. olm.deprecations schema The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog. When this schema is defined, the OpenShift Dedicated web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub. An olm.deprecations schema entry contains one or more of the following reference types, which indicates the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object. Table 2.1. Deprecation reference types Type Scope Status condition olm.package Represents the entire package PackageDeprecated olm.channel Represents one channel ChannelDeprecated olm.bundle Represents one bundle version BundleDeprecated Each reference type has their own requirements, as detailed in the following example. Example 2.4. Example olm.deprecations schema with each reference type schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support. 1 Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field. 2 The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema. 3 All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob. 4 The name field for the olm.channel schema is required. 5 The name field for the olm.bundle schema is required. Note The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle. Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package's index.yaml file: Example directory structure for a catalog with deprecations my-catalog └── my-operator ├── index.yaml └── deprecations.yaml Additional resources Updating or filtering a file-based catalog image 2.2.2.3. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. 
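For example, a catalog maintainer could attach a property under their own prefix; the following is a hypothetical sketch (the example.com/support-tier type is not defined by OLM, and the field names under value are invented for illustration):

properties:
- type: example.com/support-tier   # hypothetical custom property type
  value:
    tier: gold
    contact: support@example.com

Properties that OLM does not recognize are not interpreted by OLM itself and are typically consumed by external tooling that processes the catalog.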
OLM defines a handful of property types, again using the reserved olm.* prefix. 2.2.2.3.1. olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 2.5. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 2.2.2.3.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 2.6. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.3.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. Example 2.7. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 2.2.2.3.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 2.8. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 2.2.2.4. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. 
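For a single package, a minimal sketch of this rendering step (the image reference and directory names are illustrative) might look like the following; a fuller, scripted approach is shown next:

mkdir -p my-catalog/etcd-operator
# Render the Operator's own catalog image into a subdirectory of the composite catalog
opm render quay.io/etcd-operator/index:latest > my-catalog/etcd-operator/index.yaml
opm validate my-catalog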
There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 2.2.2.5. Guidelines Consider the following guidelines when maintaining file-based catalogs. 2.2.2.5.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade path from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade paths: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 2.2.2.5.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 2.2.2.6. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 2.2.2.7. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. 
Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 2.3. Operator Framework glossary of common terms This topic provides a glossary of common terms related to the Operator Framework, including Operator Lifecycle Manager (OLM) and the Operator SDK. 2.3.1. Bundle In the bundle format, a bundle is a collection of an Operator CSV, manifests, and metadata. Together, they form a unique version of an Operator that can be installed onto the cluster. 2.3.2. Bundle image In the bundle format, a bundle image is a container image that is built from Operator manifests and that contains one bundle. Bundle images are stored and distributed by Open Container Initiative (OCI) spec container registries, such as Quay.io or DockerHub. 2.3.3. Catalog source A catalog source represents a store of metadata that OLM can query to discover and install Operators and their dependencies. 2.3.4. Channel A channel defines a stream of updates for an Operator and is used to roll out updates for subscribers. The head points to the latest version of that channel. For example, a stable channel would have all stable versions of an Operator arranged from the earliest to the latest. An Operator can have several channels, and a subscription binding to a certain channel would only look for updates in that channel. 2.3.5. Channel head A channel head refers to the latest known update in a particular channel. 2.3.6. Cluster service version A cluster service version (CSV) is a YAML manifest created from Operator metadata that assists OLM in running the Operator in a cluster. It is the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its logo, description, and version. It is also a source of technical information that is required to run the Operator, like the RBAC rules it requires and which custom resources (CRs) it manages or depends on. 2.3.7. Dependency An Operator may have a dependency on another Operator being present in the cluster. For example, the Vault Operator has a dependency on the etcd Operator for its data persistence layer. OLM resolves dependencies by ensuring that all specified versions of Operators and CRDs are installed on the cluster during the installation phase. This dependency is resolved by finding and installing an Operator in a catalog that satisfies the required CRD API, and is not related to packages or bundles. 2.3.8. Extension Extensions enable cluster administrators to extend capabilities for users on their OpenShift Dedicated cluster. Extensions are managed by Operator Lifecycle Manager (OLM) v1. The ClusterExtension API streamlines management of installed extensions, which includes Operators via the registry+v1 bundle format, by consolidating user-facing APIs into a single object. Administrators and SREs can use the API to automate processes and define desired states by using GitOps principles. 2.3.9. Index image In the bundle format, an index image refers to an image of a database (a database snapshot) that contains information about Operator bundles including CSVs and CRDs of all versions. This index can host a history of Operators on a cluster and be maintained by adding or removing Operators using the opm CLI tool. 2.3.10. 
Install plan An install plan is a calculated list of resources to be created to automatically install or upgrade a CSV. 2.3.11. Multitenancy A tenant in OpenShift Dedicated is a user or group of users that share common access and privileges for a set of deployed workloads, typically represented by a namespace or project. You can use tenants to provide a level of isolation between different groups or teams. When a cluster is shared by multiple users or groups, it is considered a multitenant cluster. 2.3.12. Operator Operators are a method of packaging, deploying, and managing a Kubernetes application. A Kubernetes application is an app that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl or oc tooling. In Operator Lifecycle Manager (OLM) v1, the ClusterExtension API streamlines management of installed extensions, which includes Operators via the registry+v1 bundle format. 2.3.13. Operator group An Operator group configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their CR in a list of namespaces or cluster-wide. 2.3.14. Package In the bundle format, a package is a directory that encloses all released history of an Operator with each version. A released version of an Operator is described in a CSV manifest alongside the CRDs. 2.3.15. Registry A registry is a database that stores bundle images of Operators, each with all of its latest and historical versions in all channels. 2.3.16. Subscription A subscription keeps CSVs up to date by tracking a channel in a package. 2.3.17. Update graph An update graph links versions of CSVs together, similar to the update graph of any other packaged software. Operators can be installed sequentially, or certain versions can be skipped. The update graph is expected to grow only at the head with newer versions being added. Also known as update edges or update paths . 2.4. Operator Lifecycle Manager (OLM) 2.4.1. Operator Lifecycle Manager concepts and resources This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OpenShift Dedicated. 2.4.1.1. What is Operator Lifecycle Manager (OLM) Classic? Operator Lifecycle Manager (OLM) Classic helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OpenShift Dedicated clusters. It is part of the Operator Framework , an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Figure 2.2. OLM (Classic) workflow OLM runs by default in OpenShift Dedicated 4, which aids administrators with the dedicated-admin role in installing, upgrading, and granting access to Operators running on their cluster. The OpenShift Dedicated web console provides management screens for dedicated-admin administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster. For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it. 2.4.1.2. OLM resources The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM): Table 2.2. CRDs managed by OLM and Catalog Operators Resource Short name Description ClusterServiceVersion (CSV) csv Application metadata. For example: name, version, icon, required resources. 
CatalogSource catsrc A repository of CSVs, CRDs, and packages that define an application. Subscription sub Keeps CSVs up to date by tracking a channel in a package. InstallPlan ip Calculated list of resources to be created to automatically install or upgrade a CSV. OperatorGroup og Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. OperatorConditions - Creates a communication channel between OLM and an Operator it manages. Operators can write to the Status.Conditions array to communicate complex states to OLM. 2.4.1.2.1. Cluster service version A cluster service version (CSV) represents a specific version of a running Operator on an OpenShift Dedicated cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster. OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm , deb , or apk bundle. A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo. A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment. 2.4.1.2.2. Catalog source A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OpenShift Dedicated web console also displays the Operators provided by catalog sources. Tip Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration Cluster Settings Configuration OperatorHub page in the web console. The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API. Example 2.9. 
Example CatalogSource object \ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace 1 Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace. 2 Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to openshift-marketplace . The default Red Hat-provided catalog sources also use the openshift-marketplace namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace. 3 Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog's index image version as part of cluster upgrades. Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the "Image template for custom catalog sources" section for more details. 4 Display name for the catalog in the web console and CLI. 5 Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time. 6 Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs. 7 Source types include the following: grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API. grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases. configmap : OLM parses config map data and runs a pod that can serve the gRPC API over it. 8 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Dedicated release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 9 Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image , if defined. 10 Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image , if defined. 
Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ( "" ) assigns the pod the default priority. Other priority classes can be defined manually. 11 Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image , if defined. 12 Automatically check for new versions at a given interval to stay up-to-date. 13 Last observed state of the catalog connection. For example: READY : A connection is successfully established. CONNECTING : A connection is attempting to establish. TRANSIENT_FAILURE : A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again. See States of Connectivity in the gRPC documentation for more details. 14 Latest time the container registry storing the catalog image was polled to ensure the image is up-to-date. 15 Status information for the catalog's Operator Registry service. Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator: Example 2.10. Example Subscription object referencing a catalog source apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace Additional resources Understanding OperatorHub Red Hat-provided Operator catalogs Adding a catalog source to a cluster Catalog priority Viewing Operator catalog source status by using the CLI Catalog source pod scheduling 2.4.1.2.2.1. Image template for custom catalog sources Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OpenShift Dedicated 4. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources is updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Dedicated 4.17 to 4.18, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.17 to: registry.redhat.io/redhat/redhat-operator-index:v4.18 However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image. Starting in OpenShift Dedicated 4.9, cluster administrators can add the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template: kube_major_version kube_minor_version kube_patch_version Note You must specify the Kubernetes cluster version and not an OpenShift Dedicated cluster version, as the latter is not currently available for templating.
Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path. Important You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade. Example 2.11. Example catalog source with an image template apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.31 priority: -400 publisher: Example Org Note If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value. If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition. For an OpenShift Dedicated 4 cluster, which uses Kubernetes 1.31, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference: quay.io/example-org/example-catalog:v1.31 For future releases of OpenShift Dedicated, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OpenShift Dedicated version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OpenShift Dedicated version would then automatically update the catalog's index image as well. 2.4.1.2.2.2. Catalog health requirements Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster. For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A. As a result, OLM requires that all catalogs with a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace will fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator. As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the "Removing custom catalogs" or "Disabling the default OperatorHub catalog sources" sections for information about removing the unhealthy catalog. 2.4.1.2.3. 
Subscription A subscription , defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source. Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster. Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha , beta , or stable , helps determine which Operator stream should be installed from the catalog source. The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). In addition to being easily visible from the OpenShift Dedicated web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster. Additional resources Viewing Operator subscription status by using the CLI 2.4.1.2.4. Install plan An install plan , defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV). To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator. The install plan must then be approved according to one of the following approval strategies: If the subscription's spec.installPlanApproval field is set to Automatic , the install plan is approved automatically. If the subscription's spec.installPlanApproval field is set to Manual , the install plan must be manually approved by a cluster administrator or user with proper permissions. After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription. Example 2.12. Example InstallPlan object apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: ... 
catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- ... name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- ... name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- ... name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- ... name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created ... 2.4.1.2.5. Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. Additional resources Operator groups 2.4.1.2.6. Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. Additional resources Operator conditions 2.4.2. Operator Lifecycle Manager architecture This guide outlines the component architecture of Operator Lifecycle Manager (OLM) in OpenShift Dedicated. 2.4.2.1. Component responsibilities Operator Lifecycle Manager (OLM) is composed of two Operators: the OLM Operator and the Catalog Operator. The OLM and Catalog Operators are responsible for managing the custom resource definitions (CRDs) that are the basis for the OLM framework: Table 2.3. 
CRDs managed by OLM and Catalog Operators Resource Short name Owner Description ClusterServiceVersion (CSV) csv OLM Application metadata: name, version, icon, required resources, installation, and so on. InstallPlan ip Catalog Calculated list of resources to be created to automatically install or upgrade a CSV. CatalogSource catsrc Catalog A repository of CSVs, CRDs, and packages that define an application. Subscription sub Catalog Used to keep CSVs up to date by tracking a channel in a package. OperatorGroup og OLM Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide. Each of these Operators is also responsible for creating the following resources: Table 2.4. Resources created by OLM and Catalog Operators Resource Owner Deployments OLM ServiceAccounts (Cluster)Roles (Cluster)RoleBindings CustomResourceDefinitions (CRDs) Catalog ClusterServiceVersions 2.4.2.2. OLM Operator The OLM Operator is responsible for deploying applications defined by CSV resources after the required resources specified in the CSV are present in the cluster. The OLM Operator is not concerned with the creation of the required resources; you can choose to manually create these resources using the CLI or using the Catalog Operator. This separation of concern allows users incremental buy-in in terms of how much of the OLM framework they choose to leverage for their application. The OLM Operator uses the following workflow: Watch for cluster service versions (CSVs) in a namespace and check that requirements are met. If requirements are met, run the install strategy for the CSV. Note A CSV must be an active member of an Operator group for the install strategy to run. 2.4.2.3. Catalog Operator The Catalog Operator is responsible for resolving and installing cluster service versions (CSVs) and the required resources they specify. It is also responsible for watching catalog sources for updates to packages in channels and upgrading them, automatically if desired, to the latest available versions. To track a package in a channel, you can create a Subscription object configuring the desired package, channel, and the CatalogSource object you want to use for pulling updates. When updates are found, an appropriate InstallPlan object is written into the namespace on behalf of the user. The Catalog Operator uses the following workflow: Connect to each catalog source in the cluster. Watch for unresolved install plans created by a user, and if found: Find the CSV matching the name requested and add the CSV as a resolved resource. For each managed or required CRD, add the CRD as a resolved resource. For each required CRD, find the CSV that manages it. Watch for resolved install plans and create all of the discovered resources for it, if approved by a user or automatically. Watch for catalog sources and subscriptions and create install plans based on them. 2.4.2.4. Catalog Registry The Catalog Registry stores CSVs and CRDs for creation in a cluster and stores metadata about packages and channels. A package manifest is an entry in the Catalog Registry that associates a package identity with sets of CSVs. Within a package, channels point to a particular CSV. Because CSVs explicitly reference the CSV that they replace, a package manifest provides the Catalog Operator with all of the information that is required to update a CSV to the latest version in a channel, stepping through each intermediate version. 2.4.3. 
Operator Lifecycle Manager workflow This guide outlines the workflow of Operator Lifecycle Manager (OLM) in OpenShift Dedicated. 2.4.3.1. Operator installation and upgrade workflow in OLM In the Operator Lifecycle Manager (OLM) ecosystem, the following resources are used to resolve Operator installations and upgrades: ClusterServiceVersion (CSV) CatalogSource Subscription Operator metadata, defined in CSVs, can be stored in a collection called a catalog source. OLM uses catalog sources, which use the Operator Registry API , to query for available Operators as well as upgrades for installed Operators. Figure 2.3. Catalog source overview Within a catalog source, Operators are organized into packages and streams of updates called channels , which should be a familiar update pattern from OpenShift Dedicated or other software on a continuous release cycle like web browsers. Figure 2.4. Packages and channels in a Catalog source A user indicates a particular package and channel in a particular catalog source in a subscription , for example an etcd package and its alpha channel. If a subscription is made to a package that has not yet been installed in the namespace, the latest Operator for that package is installed. Note OLM deliberately avoids version comparisons, so the "latest" or "newest" Operator available from a given catalog channel package path does not necessarily need to be the highest version number. It should be thought of more as the head reference of a channel, similar to a Git repository. Each CSV has a replaces parameter that indicates which Operator it replaces. This builds a graph of CSVs that can be queried by OLM, and updates can be shared between channels. Channels can be thought of as entry points into the graph of updates: Figure 2.5. OLM graph of available channel updates Example channels in a package packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha For OLM to successfully query for updates, given a catalog source, package, channel, and CSV, a catalog must be able to return, unambiguously and deterministically, a single CSV that replaces the input CSV. 2.4.3.1.1. Example upgrade path For an example upgrade scenario, consider an installed Operator corresponding to CSV version 0.1.1 . OLM queries the catalog source and detects an upgrade in the subscribed channel with new CSV version 0.1.3 that replaces an older but not-installed CSV version 0.1.2 , which in turn replaces the older and installed CSV version 0.1.1 . OLM walks back from the channel head to previous versions via the replaces field specified in the CSVs to determine the upgrade path 0.1.3 → 0.1.2 → 0.1.1 ; the direction of the arrow indicates that the former replaces the latter. OLM upgrades the Operator one version at a time until it reaches the channel head. For this given scenario, OLM installs Operator version 0.1.2 to replace the existing Operator version 0.1.1 . Then, it installs Operator version 0.1.3 to replace the previously installed Operator version 0.1.2 . At this point, the installed Operator version 0.1.3 matches the channel head and the upgrade is completed. 2.4.3.1.2. Skipping upgrades The basic path for upgrades in OLM is: A catalog source is updated with one or more updates to an Operator. OLM traverses every version of the Operator until reaching the latest version the catalog source contains. However, sometimes this is not a safe operation to perform.
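To make the replaces relationships in this basic upgrade path concrete, the chain from the example upgrade path above might be declared in CSV metadata roughly as follows. This is a minimal sketch with illustrative package and CSV names; only the fields relevant to upgrade resolution are shown:

Example replaces chain across CSVs (sketch)

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.3    # channel head
spec:
  replaces: example-operator.v0.1.2
---
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.2
spec:
  replaces: example-operator.v0.1.1    # the version installed in the example

Starting at the channel head, OLM follows each replaces reference backwards until it reaches the installed CSV, which yields the 0.1.3 → 0.1.2 → 0.1.1 path described above.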
There will be cases where a published version of an Operator should never be installed on a cluster if it has not already, for example because a version introduces a serious vulnerability. In those cases, OLM must consider two cluster states and provide an update graph that supports both: The "bad" intermediate Operator has been seen by the cluster and installed. The "bad" intermediate Operator has not yet been installed onto the cluster. By shipping a new catalog and adding a skipped release, OLM is ensured that it can always get a single unique update regardless of the cluster state and whether it has seen the bad update yet. Example CSV with skipped release apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1 Consider the following example of Old CatalogSource and New CatalogSource . Figure 2.6. Skipping updates This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . If the bad update has not yet been installed, it will never be. 2.4.3.1.3. Replacing multiple Operators Creating New CatalogSource as described requires publishing CSVs that replace one Operator, but can skip several. This can be accomplished using the skipRange annotation: olm.skipRange: <semver_range> where <semver_range> has the version range format supported by the semver library . When searching catalogs for updates, if the head of a channel has a skipRange annotation and the currently installed Operator has a version field that falls in the range, OLM updates to the latest entry in the channel. The order of precedence is: Channel head in the source specified by sourceName on the subscription, if the other criteria for skipping are met. The Operator that replaces the current one, in the source specified by sourceName . Channel head in another source that is visible to the subscription, if the other criteria for skipping are met. The Operator that replaces the current one in any source visible to the subscription. Example CSV with skipRange apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2' 2.4.3.1.4. Z-stream support A z-stream , or patch release, must replace all z-stream releases for the same minor version. OLM does not consider major, minor, or patch versions, it just needs to build the correct graph in a catalog. In other words, OLM must be able to take a graph as in Old CatalogSource and, similar to before, generate a graph as in New CatalogSource : Figure 2.7. Replacing several Operators This graph maintains that: Any Operator found in Old CatalogSource has a single replacement in New CatalogSource . Any Operator found in New CatalogSource has a single replacement in New CatalogSource . Any z-stream release in Old CatalogSource will update to the latest z-stream release in New CatalogSource . Unavailable releases can be considered "virtual" graph nodes; their content does not need to exist, the registry just needs to respond as if the graph looks like this. 2.4.4. 
Operator Lifecycle Manager dependency resolution This guide outlines dependency resolution and custom resource definition (CRD) upgrade lifecycles with Operator Lifecycle Manager (OLM) in OpenShift Dedicated. 2.4.4.1. About dependency resolution Operator Lifecycle Manager (OLM) manages the dependency resolution and upgrade lifecycle of running Operators. In many ways, the problems OLM faces are similar to other system or language package managers, such as yum and rpm . However, there is one constraint that similar systems do not generally have that OLM does: because Operators are always running, OLM attempts to ensure that you are never left with a set of Operators that do not work with each other. As a result, OLM must never create the following scenarios: Install a set of Operators that require APIs that cannot be provided Update an Operator in a way that breaks another that depends upon it This is made possible with two types of data: Properties Typed metadata about the Operator that constitutes the public interface for it in the dependency resolver. Examples include the group/version/kind (GVK) of the APIs provided by the Operator and the semantic version (semver) of the Operator. Constraints or dependencies An Operator's requirements that should be satisfied by other Operators that might or might not have already been installed on the target cluster. These act as queries or filters over all available Operators and constrain the selection during dependency resolution and installation. Examples include requiring a specific API to be available on the cluster or expecting a particular Operator with a particular version to be installed. OLM converts these properties and constraints into a system of Boolean formulas and passes them to a SAT solver, a program that establishes Boolean satisfiability, which does the work of determining what Operators should be installed. 2.4.4.2. Operator properties All Operators in a catalog have the following properties: olm.package Includes the name of the package and the version of the Operator olm.gvk A single property for each provided API from the cluster service version (CSV) Additional properties can also be directly declared by an Operator author by including a properties.yaml file in the metadata/ directory of the Operator bundle. Example arbitrary property properties: - type: olm.kubeversion value: version: "1.16.0" 2.4.4.2.1. Arbitrary properties Operator authors can declare arbitrary properties in a properties.yaml file in the metadata/ directory of the Operator bundle. These properties are translated into a map data structure that is used as an input to the Operator Lifecycle Manager (OLM) resolver at runtime. These properties are opaque to the resolver as it does not understand the properties, but it can evaluate the generic constraints against those properties to determine if the constraints can be satisfied given the properties list. Example arbitrary properties properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource This structure can be used to construct a Common Expression Language (CEL) expression for generic constraints. Additional resources Common Expression Language (CEL) constraints 2.4.4.3. Operator dependencies The dependencies of an Operator are listed in a dependencies.yaml file in the metadata/ folder of a bundle. This file is optional and currently only used to specify explicit Operator-version dependencies. 
The dependency list contains a type field for each item to specify what kind of dependency this is. The following types of Operator dependencies are supported: olm.package This type indicates a dependency for a specific Operator version. The dependency information must include the package name and the version of the package in semver format. For example, you can specify an exact version such as 0.5.2 or a range of versions such as >0.5.1 . olm.gvk With this type, the author can specify a dependency with group/version/kind (GVK) information, similar to existing CRD and API-based usage in a CSV. This is a path to enable Operator authors to consolidate all dependencies, API or explicit versions, to be in the same place. olm.constraint This type declares generic constraints on arbitrary Operator properties. In the following example, dependencies are specified for a Prometheus Operator and etcd CRDs: Example dependencies.yaml file dependencies: - type: olm.package value: packageName: prometheus version: ">0.27.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 2.4.4.4. Generic constraints An olm.constraint property declares a dependency constraint of a particular type, differentiating non-constraint and constraint properties. Its value field is an object containing a failureMessage field holding a string-representation of the constraint message. This message is surfaced as an informative comment to users if the constraint is not satisfiable at runtime. The following keys denote the available constraint types: gvk Type whose value and interpretation is identical to the olm.gvk type package Type whose value and interpretation is identical to the olm.package type cel A Common Expression Language (CEL) expression evaluated at runtime by the Operator Lifecycle Manager (OLM) resolver over arbitrary bundle properties and cluster information all , any , not Conjunction, disjunction, and negation constraints, respectively, containing one or more concrete constraints, such as gvk or a nested compound constraint 2.4.4.4.1. Common Expression Language (CEL) constraints The cel constraint type supports Common Expression Language (CEL) as the expression language. The cel struct has a rule field which contains the CEL expression string that is evaluated against Operator properties at runtime to determine if the Operator satisfies the constraint. Example cel constraint type: olm.constraint value: failureMessage: 'require to have "certified"' cel: rule: 'properties.exists(p, p.type == "certified")' The CEL syntax supports a wide range of logical operators, such as AND and OR . As a result, a single CEL expression can have multiple rules for multiple conditions that are linked together by these logical operators. These rules are evaluated against a dataset of multiple different properties from a bundle or any given source, and the output is solved into a single bundle or Operator that satisfies all of those rules within a single constraint. Example cel constraint with multiple rules type: olm.constraint value: failureMessage: 'require to have "certified" and "stable" properties' cel: rule: 'properties.exists(p, p.type == "certified") && properties.exists(p, p.type == "stable")' 2.4.4.4.2. Compound constraints (all, any, not) Compound constraint types are evaluated following their logical definitions. The following is an example of a conjunctive constraint ( all ) of two packages and one GVK. 
That is, they must all be satisfied by installed bundles: Example all constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because... all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for... gvk: group: greens.example.com version: v1 kind: Green The following is an example of a disjunctive constraint ( any ) of three versions of the same GVK. That is, at least one must be satisfied by installed bundles: Example any constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because... any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue The following is an example of a negation constraint ( not ) of one version of a GVK. That is, this GVK cannot be provided by any bundle in the result set: Example not constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for... package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because... not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens The negation semantics might appear unclear in the not constraint context. To clarify, the negation is really instructing the resolver to remove any possible solution that includes a particular GVK, package at a version, or satisfies some child compound constraint from the result set. As a corollary, the not compound constraint should only be used within all or any constraints, because negating without first selecting a possible set of dependencies does not make sense. 2.4.4.4.3. Nested compound constraints A nested compound constraint, one that contains at least one child compound constraint along with zero or more simple constraints, is evaluated from the bottom up following the procedures for each previously described constraint type. The following is an example of a disjunction of conjunctions, where one, the other, or both can satisfy the constraint: Example nested compound constraint schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because... any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue Note The maximum raw size of an olm.constraint type is 64KB to limit resource exhaustion attacks. 2.4.4.5. Dependency preferences There can be many options that equally satisfy a dependency of an Operator. The dependency resolver in Operator Lifecycle Manager (OLM) determines which option best fits the requirements of the requested Operator. As an Operator author or user, it can be important to understand how these choices are made so that dependency resolution is clear. 2.4.4.5.1. Catalog priority On OpenShift Dedicated cluster, OLM reads catalog sources to know which Operators are available for installation. 
Example CatalogSource object apiVersion: "operators.coreos.com/v1alpha1" kind: "CatalogSource" metadata: name: "my-operators" namespace: "operators" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: "My Operators" priority: 100 1 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Dedicated release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . A CatalogSource object has a priority field, which is used by the resolver to know how to prefer options for a dependency. There are two rules that govern catalog preference: Options in higher-priority catalogs are preferred to options in lower-priority catalogs. Options in the same catalog as the dependent are preferred to any other catalogs. 2.4.4.5.2. Channel ordering An Operator package in a catalog is a collection of update channels that a user can subscribe to in an OpenShift Dedicated cluster. Channels can be used to provide a particular stream of updates for a minor release ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). It is likely that a dependency might be satisfied by Operators in the same package, but different channels. For example, version 1.2 of an Operator might exist in both the stable and fast channels. Each package has a default channel, which is always preferred to non-default channels. If no option in the default channel can satisfy a dependency, options are considered from the remaining channels in lexicographic order of the channel name. 2.4.4.5.3. Order within a channel There are almost always multiple options to satisfy a dependency within a single channel. For example, Operators in one package and channel provide the same set of APIs. When a user creates a subscription, they indicate which channel to receive updates from. This immediately reduces the search to just that one channel. But within the channel, it is likely that many Operators satisfy a dependency. Within a channel, newer Operators that are higher up in the update graph are preferred. If the head of a channel satisfies a dependency, it will be tried first. 2.4.4.5.4. Other constraints In addition to the constraints supplied by package dependencies, OLM includes additional constraints to represent the desired user state and enforce resolution invariants. 2.4.4.5.4.1. Subscription constraint A subscription constraint filters the set of Operators that can satisfy a subscription. Subscriptions are user-supplied constraints for the dependency resolver. They declare the intent to either install a new Operator if it is not already on the cluster, or to keep an existing Operator updated. 2.4.4.5.4.2. Package constraint Within a namespace, no two Operators may come from the same package. 2.4.4.5.5. Additional resources Catalog health requirements 2.4.4.6. CRD upgrades OLM upgrades a custom resource definition (CRD) immediately if it is owned by a singular cluster service version (CSV). If a CRD is owned by multiple CSVs, then the CRD is upgraded when it has satisfied all of the following backward compatible conditions: All existing serving versions in the current CRD are present in the new CRD. All existing instances, or custom resources, that are associated with the serving versions of the CRD are valid when validated against the validation schema of the new CRD. 
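To illustrate the first condition above, a new CRD revision that adds a version while keeping the existing serving version might look like the following. This is a minimal sketch that assumes a hypothetical crontabs.stable.example.com CRD; the validation schemas are elided:

Example new CRD revision that preserves an existing serving version (sketch)

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  names:
    kind: CronTab
    listKind: CronTabList
    plural: crontabs
    singular: crontab
  scope: Namespaced
  versions:
  - name: v1alpha1    # existing serving version, still present in the new CRD
    served: true
    storage: false
    schema: ...
  - name: v1          # newly added version
    served: true
    storage: true
    schema: ...

Because v1alpha1 remains present and served, the first condition is met; the second condition additionally requires that existing custom resources validate against the validation schema of the new CRD.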
Additional resources Adding a new CRD version Deprecating or removing a CRD version 2.4.4.7. Dependency best practices When specifying dependencies, there are best practices you should consider. Depend on APIs or a specific version range of Operators Operators can add or remove APIs at any time; always specify an olm.gvk dependency on any APIs your Operator requires. The exception to this is if you are specifying olm.package constraints instead. Set a minimum version The Kubernetes documentation on API changes describes what changes are allowed for Kubernetes-style Operators. These versioning conventions allow an Operator to update an API without bumping the API version, as long as the API is backwards-compatible. For Operator dependencies, this means that knowing the API version of a dependency might not be enough to ensure the dependent Operator works as intended. For example: TestOperator v1.0.0 provides v1alpha1 API version of the MyObject resource. TestOperator v1.0.1 adds a new field spec.newfield to MyObject , but still at v1alpha1. Your Operator might require the ability to write spec.newfield into the MyObject resource. An olm.gvk constraint alone is not enough for OLM to determine that you need TestOperator v1.0.1 and not TestOperator v1.0.0. Whenever possible, if a specific Operator that provides an API is known ahead of time, specify an additional olm.package constraint to set a minimum. Omit a maximum version or allow a very wide range Because Operators provide cluster-scoped resources such as API services and CRDs, an Operator that specifies a small window for a dependency might unnecessarily constrain updates for other consumers of that dependency. Whenever possible, do not set a maximum version. Alternatively, set a very wide semantic range to prevent conflicts with other Operators. For example, >1.0.0 <2.0.0 . Unlike with conventional package managers, Operator authors explicitly encode that updates are safe through channels in OLM. If an update is available for an existing subscription, it is assumed that the Operator author is indicating that it can update from the previous version. Setting a maximum version for a dependency overrides the update stream of the author by unnecessarily truncating it at a particular upper bound. Note Cluster administrators cannot override dependencies set by an Operator author. However, maximum versions can and should be set if there are known incompatibilities that must be avoided. Specific versions can be omitted with the version range syntax, for example > 1.0.0 !1.2.1 . Additional resources Kubernetes documentation: Changing the API 2.4.4.8. Dependency caveats When specifying dependencies, there are caveats you should consider. No compound constraints (AND) There is currently no method for specifying an AND relationship between constraints. In other words, there is no way to specify that one Operator depends on another Operator that both provides a given API and has version >1.1.0 . This means that when specifying a dependency such as: dependencies: - type: olm.package value: packageName: etcd version: ">3.1.0" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2 It would be possible for OLM to satisfy this with two Operators: one that provides EtcdCluster and one that has version >3.1.0 . Whether that happens, or whether an Operator is selected that satisfies both constraints, depends on the order in which potential options are visited.
Dependency preferences and ordering options are well-defined and can be reasoned about, but to exercise caution, Operators should stick to one mechanism or the other. Cross-namespace compatibility OLM performs dependency resolution at the namespace scope. It is possible to get into an update deadlock if updating an Operator in one namespace would be an issue for an Operator in another namespace, and vice-versa. 2.4.4.9. Example dependency resolution scenarios In the following examples, a provider is an Operator which "owns" a CRD or API service. Example: Deprecating dependent APIs A and B are APIs (CRDs): The provider of A depends on B. The provider of B has a subscription. The provider of B updates to provide C but deprecates B. This results in: B no longer has a provider. A no longer works. This is a case OLM prevents with its upgrade strategy. Example: Version deadlock A and B are APIs: The provider of A requires B. The provider of B requires A. The provider of A updates to (provide A2, require B2) and deprecate A. The provider of B updates to (provide B2, require A2) and deprecate B. If OLM attempts to update A without simultaneously updating B, or vice-versa, it is unable to progress to new versions of the Operators, even though a new compatible set can be found. This is another case OLM prevents with its upgrade strategy. 2.4.5. Operator groups This guide outlines the use of Operator groups with Operator Lifecycle Manager (OLM) in OpenShift Dedicated. 2.4.5.1. About Operator groups An Operator group , defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators. The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments. 2.4.5.2. Operator group membership An Operator is considered a member of an Operator group if the following conditions are true: The CSV of the Operator exists in the same namespace as the Operator group. The install modes in the CSV of the Operator support the set of namespaces targeted by the Operator group. An install mode in a CSV consists of an InstallModeType field and a boolean Supported field. The spec of a CSV can contain a set of install modes of four distinct InstallModeTypes : Table 2.5. Install modes and supported Operator groups InstallModeType Description OwnNamespace The Operator can be a member of an Operator group that selects its own namespace. SingleNamespace The Operator can be a member of an Operator group that selects one namespace. MultiNamespace The Operator can be a member of an Operator group that selects more than one namespace. AllNamespaces The Operator can be a member of an Operator group that selects all namespaces (target namespace set is the empty string "" ). Note If the spec of a CSV omits an entry of InstallModeType , then that type is considered unsupported unless support can be inferred by an existing entry that implicitly supports it. 2.4.5.3. 
Target namespace selection You can explicitly name the target namespace for an Operator group using the spec.targetNamespaces parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace You can alternatively specify a namespace using a label selector with the spec.selector parameter: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: "true" Important Listing multiple namespaces via spec.targetNamespaces or use of a label selector via spec.selector is not recommended, as the support for more than one target namespace in an Operator group will likely be removed in a future release. If both spec.targetNamespaces and spec.selector are defined, spec.selector is ignored. Alternatively, you can omit both spec.selector and spec.targetNamespaces to specify a global Operator group, which selects all namespaces: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace The resolved set of selected namespaces is shown in the status.namespaces parameter of an Operator group. The status.namespace of a global Operator group contains the empty string ( "" ), which signals to a consuming Operator that it should watch all namespaces. 2.4.5.4. Operator group CSV annotations Member CSVs of an Operator group have the following annotations: Annotation Description olm.operatorGroup=<group_name> Contains the name of the Operator group. olm.operatorNamespace=<group_namespace> Contains the namespace of the Operator group. olm.targetNamespaces=<target_namespaces> Contains a comma-delimited string that lists the target namespace selection of the Operator group. Note All annotations except olm.targetNamespaces are included with copied CSVs. Omitting the olm.targetNamespaces annotation on copied CSVs prevents the duplication of target namespaces between tenants. 2.4.5.5. Provided APIs annotation A group/version/kind (GVK) is a unique identifier for a Kubernetes API. Information about what GVKs are provided by an Operator group is shown in an olm.providedAPIs annotation. The value of the annotation is a string consisting of <kind>.<version>.<group> delimited with commas. The GVKs of CRDs and API services provided by all active member CSVs of an Operator group are included. Review the following example of an OperatorGroup object with a single active member CSV that provides the PackageManifest resource: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local ... spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local 2.4.5.6. Role-based access control When an Operator group is created, three cluster roles are generated.
Each contains a single aggregation rule with a cluster role selector set to match a label, as shown below: Cluster role Label to match olm.og.<operatorgroup_name>-admin-<hash_value> olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> olm.og.<operatorgroup_name>-edit-<hash_value> olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> olm.og.<operatorgroup_name>-view-<hash_value> olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> The following RBAC resources are generated when a CSV becomes an active member of an Operator group, as long as the CSV is watching all namespaces with the AllNamespaces install mode and is not in a failed state with reason InterOperatorGroupOwnerConflict : Cluster roles for each API resource from a CRD Cluster roles for each API resource from an API service Additional roles and role bindings Table 2.6. Cluster roles generated for each API resource from a CRD Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> <kind>.<group>-<version>-view-crdview Verbs on apiextensions.k8s.io customresourcedefinitions <crd-name> : get Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Table 2.7. Cluster roles generated for each API resource from an API service Cluster role Settings <kind>.<group>-<version>-admin Verbs on <kind> : * Aggregation labels: rbac.authorization.k8s.io/aggregate-to-admin: true olm.opgroup.permissions/aggregate-to-admin: <operatorgroup_name> <kind>.<group>-<version>-edit Verbs on <kind> : create update patch delete Aggregation labels: rbac.authorization.k8s.io/aggregate-to-edit: true olm.opgroup.permissions/aggregate-to-edit: <operatorgroup_name> <kind>.<group>-<version>-view Verbs on <kind> : get list watch Aggregation labels: rbac.authorization.k8s.io/aggregate-to-view: true olm.opgroup.permissions/aggregate-to-view: <operatorgroup_name> Additional roles and role bindings If the CSV defines exactly one target namespace that contains * , then a cluster role and corresponding cluster role binding are generated for each permission defined in the permissions field of the CSV. All resources generated are given the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels. If the CSV does not define exactly one target namespace that contains * , then all roles and role bindings in the Operator namespace with the olm.owner: <csv_name> and olm.owner.namespace: <csv_namespace> labels are copied into the target namespace. 2.4.5.7. Copied CSVs OLM creates copies of all active member CSVs of an Operator group in each of the target namespaces of that Operator group. The purpose of a copied CSV is to tell users of a target namespace that a specific Operator is configured to watch resources created there. Copied CSVs have a status reason Copied and are updated to match the status of their source CSV. 
The olm.targetNamespaces annotation is stripped from copied CSVs before they are created on the cluster. Omitting the target namespace selection avoids the duplication of target namespaces between tenants. Copied CSVs are deleted when their source CSV no longer exists or the Operator group that their source CSV belongs to no longer targets the namespace of the copied CSV. Note By default, the disableCopiedCSVs field is disabled. After enabling a disableCopiedCSVs field, the OLM deletes existing copied CSVs on a cluster. When a disableCopiedCSVs field is disabled, the OLM adds copied CSVs again. Disable the disableCopiedCSVs field: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF Enable the disableCopiedCSVs field: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF 2.4.5.8. Static Operator groups An Operator group is static if its spec.staticProvidedAPIs field is set to true . As a result, OLM does not modify the olm.providedAPIs annotation of an Operator group, which means that it can be set in advance. This is useful when a user wants to use an Operator group to prevent resource contention in a set of namespaces but does not have active member CSVs that provide the APIs for those resources. Below is an example of an Operator group that protects Prometheus resources in all namespaces with the something.cool.io/cluster-monitoring: "true" annotation: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: "true" 2.4.5.9. Operator group intersection Two Operator groups are said to have intersecting provided APIs if the intersection of their target namespace sets is not an empty set and the intersection of their provided API sets, defined by olm.providedAPIs annotations, is not an empty set. A potential issue is that Operator groups with intersecting provided APIs can compete for the same resources in the set of intersecting namespaces. Note When checking intersection rules, an Operator group namespace is always included as part of its selected target namespaces. Rules for intersection Each time an active member CSV synchronizes, OLM queries the cluster for the set of intersecting provided APIs between the Operator group of the CSV and all others. OLM then checks if that set is an empty set: If true and the CSV's provided APIs are a subset of the Operator group's: Continue transitioning. If true and the CSV's provided APIs are not a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the union of itself and the CSV's provided APIs. If false and the CSV's provided APIs are not a subset of the Operator group's: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason InterOperatorGroupOwnerConflict . 
If false and the CSV's provided APIs are a subset of the Operator group's: If the Operator group is static: Clean up any deployments that belong to the CSV. Transition the CSV to a failed state with status reason CannotModifyStaticOperatorGroupProvidedAPIs . If the Operator group is not static: Replace the Operator group's olm.providedAPIs annotation with the difference between itself and the CSV's provided APIs. Note Failure states caused by Operator groups are non-terminal. The following actions are performed each time an Operator group synchronizes: The set of provided APIs from active member CSVs is calculated from the cluster. Note that copied CSVs are ignored. The cluster set is compared to olm.providedAPIs , and if olm.providedAPIs contains any extra APIs, then those APIs are pruned. All CSVs that provide the same APIs across all namespaces are requeued. This notifies conflicting CSVs in intersecting groups that their conflict has possibly been resolved, either through resizing or through deletion of the conflicting CSV. 2.4.5.10. Limitations for multitenant Operator management OpenShift Dedicated provides limited support for simultaneously installing different versions of an Operator on the same cluster. Operator Lifecycle Manager (OLM) installs Operators multiple times in different namespaces. One constraint of this is that the Operator's API versions must be the same. Operators are control plane extensions due to their usage of CustomResourceDefinition objects (CRDs), which are global resources in Kubernetes. Different major versions of an Operator often have incompatible CRDs. This makes them incompatible to install simultaneously in different namespaces on a cluster. All tenants, or namespaces, share the same control plane of a cluster. Therefore, tenants in a multitenant cluster also share global CRDs, which limits the scenarios in which different instances of the same Operator can be used in parallel on the same cluster. The supported scenarios include the following: Operators of different versions that ship the exact same CRD definition (in case of versioned CRDs, the exact same set of versions) Operators of different versions that do not ship a CRD, and instead have their CRD available in a separate bundle on the OperatorHub All other scenarios are not supported, because the integrity of the cluster data cannot be guaranteed if there are multiple competing or overlapping CRDs from different Operator versions to be reconciled on the same cluster. Additional resources Operators in multitenant clusters 2.4.5.11. Troubleshooting Operator groups Membership An install plan's namespace must contain only one Operator group. When attempting to generate a cluster service version (CSV) in a namespace, an install plan considers an Operator group invalid in the following scenarios: No Operator groups exist in the install plan's namespace. Multiple Operator groups exist in the install plan's namespace. An incorrect or non-existent service account name is specified in the Operator group. If an install plan encounters an invalid Operator group, the CSV is not generated and the InstallPlan resource continues to install with a relevant message. For example, the following message is provided if more than one Operator group exists in the same namespace: attenuated service account query failed - more than one operator group(s) are managing this namespace count=2 where count= specifies the number of Operator groups in the namespace. 
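One way to confirm this situation is to list the Operator groups in the namespace that the install plan targets and compare the result with the count= value in the message. A hedged example, assuming the namespace is named my-namespace:

$ oc get operatorgroups -n my-namespace

If more than one OperatorGroup object is returned, remove or consolidate the extra Operator groups so that exactly one remains, then retry the installation.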
If the install modes of a CSV do not support the target namespace selection of the Operator group in its namespace, the CSV transitions to a failure state with the reason UnsupportedOperatorGroup . CSVs in a failed state for this reason transition to pending after either the target namespace selection of the Operator group changes to a supported configuration, or the install modes of the CSV are modified to support the target namespace selection. 2.4.6. Multitenancy and Operator colocation This guide outlines multitenancy and Operator colocation in Operator Lifecycle Manager (OLM). 2.4.6.1. Colocation of Operators in a namespace Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. This default behavior manifests in two ways: InstallPlan resources of pending updates include ClusterServiceVersion (CSV) resources of all other Operators that are in the same namespace. All Operators in the same namespace share the same update policy. For example, if one Operator is set to manual updates, all other Operators' update policies are also set to manual. These scenarios can lead to the following issues: It becomes hard to reason about install plans for Operator updates, because there are many more resources defined in them than just the updated Operator. It becomes impossible to have some Operators in a namespace update automatically while others are updated manually, which is a common desire for cluster administrators. These issues usually surface because, when installing Operators with the OpenShift Dedicated web console, the default behavior installs Operators that support the All namespaces install mode into the default openshift-operators global namespace. As an administrator with the dedicated-admin role, you can bypass this default behavior manually by using the following workflow: Create a project for the installation of the Operator. Create a custom global Operator group , which is an Operator group that watches all namespaces. Associating this Operator group with the namespace you just created makes the installation namespace a global namespace, which makes Operators installed there available in all namespaces. Install the desired Operator in the installation namespace. If the Operator has dependencies, the dependencies are automatically installed in the pre-created namespace. As a result, it is then valid for the dependency Operators to have the same update policy and shared install plans. For a detailed procedure, see "Installing global Operators in custom namespaces". Additional resources Installing global Operators in custom namespaces Operators in multitenant clusters 2.4.7. Operator conditions This guide outlines how Operator Lifecycle Manager (OLM) uses Operator conditions. 2.4.7.1. About Operator conditions As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise.
This information can then be used by OLM to better manage the lifecycle of the Operator. OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. There are a set of supported conditions that influence management of the Operator by OLM when present in the Spec.Conditions array of an OperatorCondition resource. Note By default, the Spec.Conditions array is not present in an OperatorCondition object until it is either added by a user or as a result of custom Operator logic. 2.4.7.2. Supported conditions Operator Lifecycle Manager (OLM) supports the following Operator conditions. 2.4.7.2.1. Upgradeable condition The Upgradeable Operator condition prevents an existing cluster service version (CSV) from being replaced by a newer version of the CSV. This condition is useful when: An Operator is about to start a critical process and should not be upgraded until the process is completed. An Operator is performing a migration of custom resources (CRs) that must be completed before the Operator is ready to be upgraded. Important Setting the Upgradeable Operator condition to the False value does not avoid pod disruption. If you must ensure your pods are not disrupted, see "Using pod disruption budgets to specify the number of pods that must be up" and "Graceful termination" in the "Additional resources" section. Example Upgradeable Operator condition apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: "False" 2 reason: "migration" message: "The Operator is performing a migration." lastTransitionTime: "2020-08-24T23:15:55Z" 1 Name of the condition. 2 A False value indicates the Operator is not ready to be upgraded. OLM prevents a CSV that replaces the existing CSV of the Operator from leaving the Pending phase. A False value does not block cluster upgrades. 2.4.7.3. Additional resources Managing Operator conditions Enabling Operator conditions 2.4.8. Operator Lifecycle Manager metrics 2.4.8.1. Exposed metrics Operator Lifecycle Manager (OLM) exposes certain OLM-specific resources for use by the Prometheus-based OpenShift Dedicated cluster monitoring stack. Table 2.8. Metrics exposed by OLM Name Description catalog_source_count Number of catalog sources. catalogsource_ready State of a catalog source. The value 1 indicates that the catalog source is in a READY state. The value of 0 indicates that the catalog source is not in a READY state. csv_abnormal When reconciling a cluster service version (CSV), present whenever a CSV version is in any state other than Succeeded , for example when it is not installed. Includes the name , namespace , phase , reason , and version labels. A Prometheus alert is created when this metric is present. csv_count Number of CSVs successfully registered. csv_succeeded When reconciling a CSV, represents whether a CSV version is in a Succeeded state (value 1 ) or not (value 0 ). Includes the name , namespace , and version labels. csv_upgrade_count Monotonic count of CSV upgrades. install_plan_count Number of install plans. installplan_warnings_total Monotonic count of warnings generated by resources, such as deprecated resources, included in an install plan. olm_resolution_duration_seconds The duration of a dependency resolution attempt. subscription_count Number of subscriptions. subscription_sync_total Monotonic count of subscription syncs. 
Includes the channel , installed CSV, and subscription name labels. 2.4.9. Webhook management in Operator Lifecycle Manager Webhooks allow Operator authors to intercept, modify, and accept or reject resources before they are saved to the object store and handled by the Operator controller. Operator Lifecycle Manager (OLM) can manage the lifecycle of these webhooks when they are shipped alongside your Operator. See Defining cluster service versions (CSVs) for details on how an Operator developer can define webhooks for their Operator, as well as considerations when running on OLM. 2.4.9.1. Additional resources Kubernetes documentation: Validating admission webhooks Mutating admission webhooks Conversion webhooks 2.5. Understanding OperatorHub 2.5.1. About OperatorHub OperatorHub is the web console interface in OpenShift Dedicated that cluster administrators use to discover and install Operators. With one click, an Operator can be pulled from its off-cluster source, installed and subscribed on the cluster, and made ready for engineering teams to self-service manage the product across deployment environments using Operator Lifecycle Manager (OLM). Cluster administrators can choose from catalogs grouped into the following categories: Category Description Red Hat Operators Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. Certified Operators Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. Red Hat Marketplace Certified software that can be purchased from Red Hat Marketplace . Community Operators Optionally-visible software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. Custom Operators Operators you add to the cluster yourself. If you have not added any custom Operators, the Custom category does not appear in the web console on your OperatorHub. Operators on OperatorHub are packaged to run on OLM. This includes a YAML file called a cluster service version (CSV) containing all of the CRDs, RBAC rules, deployments, and container images required to install and securely run the Operator. It also contains user-visible information like a description of its features and supported Kubernetes versions. The Operator SDK can be used to assist developers packaging their Operators for use on OLM and OperatorHub. If you have a commercial application that you want to make accessible to your customers, get it included using the certification workflow provided on the Red Hat Partner Connect portal at connect.redhat.com . 2.5.2. OperatorHub architecture The OperatorHub UI component is driven by the Marketplace Operator by default on OpenShift Dedicated in the openshift-marketplace namespace. 2.5.2.1. OperatorHub custom resource The Marketplace Operator manages an OperatorHub custom resource (CR) named cluster that manages the default CatalogSource objects provided with OperatorHub. 2.5.3. Additional resources Catalog source About the Operator SDK Defining cluster service versions (CSVs) Operator installation and upgrade workflow in OLM Red Hat Partner Connect Red Hat Marketplace 2.6. Red Hat-provided Operator catalogs Red Hat provides several Operator catalogs that are included with OpenShift Dedicated by default. Important As of OpenShift Dedicated 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. 
The default Red Hat-provided Operator catalogs for OpenShift Dedicated 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Managing custom catalogs , and Operator Framework packaging format . 2.6.1. About Operator catalogs An Operator catalog is a repository of metadata that Operator Lifecycle Manager (OLM) can query to discover and install Operators and their dependencies on a cluster. OLM always installs Operators from the latest version of a catalog. An index image, based on the Operator bundle format, is a containerized snapshot of a catalog. It is an immutable artifact that contains the database of pointers to a set of Operator manifest content. A catalog can reference an index image to source its content for OLM on the cluster. As catalogs are updated, the latest versions of Operators change, and older versions may be removed or altered. In addition, when OLM runs on an OpenShift Dedicated cluster in a restricted network environment, it is unable to access the catalogs directly from the internet to pull the latest content. As a cluster administrator, you can create your own custom index image, either based on a Red Hat-provided catalog or from scratch, which can be used to source the catalog content on the cluster. Creating and updating your own index image provides a method for customizing the set of Operators available on the cluster, while also avoiding the aforementioned restricted network environment issues. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Dedicated that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Dedicated versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. Note Support for the legacy package manifest format for Operators, including custom catalogs that were using the legacy format, is removed in OpenShift Dedicated 4.8 and later. When creating custom catalog images, versions of OpenShift Dedicated 4 required using the oc adm catalog build command, which was deprecated for several releases and is now removed. With the availability of Red Hat-provided index images starting in OpenShift Dedicated 4.6, catalog builders must use the opm index command to manage index images. Additional resources Managing custom catalogs Packaging format 2.6.2. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. 
certified-operators registry.redhat.io/redhat/certified-operator-index:v4 Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. During a cluster upgrade, the index image tag for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example during an upgrade from OpenShift Dedicated 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 2.7. Operators in multitenant clusters The default behavior for Operator Lifecycle Manager (OLM) aims to provide simplicity during Operator installation. However, this behavior can lack flexibility, especially in multitenant clusters. In order for multiple tenants on a OpenShift Dedicated cluster to use an Operator, the default behavior of OLM requires that administrators install the Operator in All namespaces mode, which can be considered to violate the principle of least privilege. Consider the following scenarios to determine which Operator installation workflow works best for your environment and requirements. Additional resources Common terms: Multitenant Limitations for multitenant Operator management 2.7.1. Default Operator install modes and behavior When installing Operators with the web console as an administrator, you typically have two choices for the install mode, depending on the Operator's capabilities: Single namespace Installs the Operator in the chosen single namespace, and makes all permissions that the Operator requests available in that namespace. All namespaces Installs the Operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. Makes all permissions that the Operator requests available in all namespaces. In some cases, an Operator author can define metadata to give the user a second option for that Operator's suggested namespace. This choice also means that users in the affected namespaces get access to the Operators APIs, which can leverage the custom resources (CRs) they own, depending on their role in the namespace: The namespace-admin and namespace-edit roles can read/write to the Operator APIs, meaning they can use them. The namespace-view role can read CR objects of that Operator. For Single namespace mode, because the Operator itself installs in the chosen namespace, its pod and service account are also located there. For All namespaces mode, the Operator's privileges are all automatically elevated to cluster roles, meaning the Operator has those permissions in all namespaces. Additional resources Adding Operators to a cluster Install modes types Setting a suggested namespace 2.7.2. Recommended solution for multitenant clusters While a Multinamespace install mode does exist, it is supported by very few Operators. 
As a middle ground solution between the standard All namespaces and Single namespace install modes, you can install multiple instances of the same Operator, one for each tenant, by using the following workflow: Create a namespace for the tenant Operator that is separate from the tenant's namespace. You can do this by creating a project. Create an Operator group for the tenant Operator scoped only to the tenant's namespace. Install the Operator in the tenant Operator namespace. As a result, the Operator resides in the tenant Operator namespace and watches the tenant namespace, but neither the Operator's pod nor its service account are visible or usable by the tenant. This solution provides better tenant separation, least privilege principle at the cost of resource usage, and additional orchestration to ensure the constraints are met. For a detailed procedure, see "Preparing for multiple instances of an Operator for multitenant clusters". Limitations and considerations This solution only works when the following constraints are met: All instances of the same Operator must be the same version. The Operator cannot have dependencies on other Operators. The Operator cannot ship a CRD conversion webhook. Important You cannot use different versions of the same Operator on the same cluster. Eventually, the installation of another instance of the Operator would be blocked when it meets the following conditions: The instance is not the newest version of the Operator. The instance ships an older revision of the CRDs that lack information or versions that newer revisions have that are already in use on the cluster. Additional resources Preparing for multiple instances of an Operator for multitenant clusters 2.7.3. Operator colocation and Operator groups Operator Lifecycle Manager (OLM) handles OLM-managed Operators that are installed in the same namespace, meaning their Subscription resources are colocated in the same namespace, as related Operators. Even if they are not actually related, OLM considers their states, such as their version and update policy, when any one of them is updated. For more information on Operator colocation and using Operator groups effectively, see Operator Lifecycle Manager (OLM) Multitenancy and Operator colocation . 2.8. CRDs 2.8.1. Managing resources from custom resource definitions This guide describes how developers can manage custom resources (CRs) that come from custom resource definitions (CRDs). 2.8.1.1. Custom resource definitions In the Kubernetes API, a resource is an endpoint that stores a collection of API objects of a certain kind. For example, the built-in Pods resource contains a collection of Pod objects. A custom resource definition (CRD) object defines a new, unique object type, called a kind , in the cluster and lets the Kubernetes API server handle its entire lifecycle. Custom resource (CR) objects are created from CRDs that have been added to the cluster by a cluster administrator, allowing all cluster users to add the new resource type into projects. Operators in particular make use of CRDs by packaging them with any required RBAC policy and other software-specific logic. 2.8.1.2. Creating custom resources from a file After a custom resource definition (CRD) has been added to the cluster, custom resources (CRs) can be created with the CLI from a file using the CR specification. Procedure Create a YAML file for the CR. In the following example definition, the cronSpec and image custom fields are set in a CR of Kind: CronTab . 
The Kind comes from the spec.kind field of the CRD object: Example YAML file for a CR apiVersion: "stable.example.com/v1" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: "* * * * /5" image: my-awesome-cron-image 1 Specify the group name and API version (name/version) from the CRD. 2 Specify the type in the CRD. 3 Specify a name for the object. 4 Specify the finalizers for the object, if any. Finalizers allow controllers to implement conditions that must be completed before the object can be deleted. 5 Specify conditions specific to the type of object. After you create the file, create the object: USD oc create -f <file_name>.yaml 2.8.1.3. Inspecting custom resources You can inspect custom resource (CR) objects that exist in your cluster using the CLI. Prerequisites A CR object exists in a namespace to which you have access. Procedure To get information on a specific kind of a CR, run: USD oc get <kind> For example: USD oc get crontab Example output NAME KIND my-new-cron-object CronTab.v1.stable.example.com Resource names are not case-sensitive, and you can use either the singular or plural forms defined in the CRD, as well as any short name. For example: USD oc get crontabs USD oc get crontab USD oc get ct You can also view the raw YAML data for a CR: USD oc get <kind> -o yaml For example: USD oc get ct -o yaml Example output apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: "" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: "285" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2 1 2 Custom data from the YAML that you used to create the object displays. | [
"etcd ├── manifests │ ├── etcdcluster.crd.yaml │ └── etcdoperator.clusterserviceversion.yaml │ └── secret.yaml │ └── configmap.yaml └── metadata └── annotations.yaml └── dependencies.yaml",
"annotations: operators.operatorframework.io.bundle.mediatype.v1: \"registry+v1\" 1 operators.operatorframework.io.bundle.manifests.v1: \"manifests/\" 2 operators.operatorframework.io.bundle.metadata.v1: \"metadata/\" 3 operators.operatorframework.io.bundle.package.v1: \"test-operator\" 4 operators.operatorframework.io.bundle.channels.v1: \"beta,stable\" 5 operators.operatorframework.io.bundle.channel.default.v1: \"stable\" 6",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"\\ufeffapiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog 1 namespace: openshift-marketplace 2 annotations: olm.catalogImageTemplate: 3 \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}\" spec: displayName: Example Catalog 4 image: quay.io/example-org/example-catalog:v1 5 priority: -400 6 publisher: Example Org sourceType: grpc 7 grpcPodConfig: securityContextConfig: <security_mode> 8 nodeSelector: 9 custom_label: <label> priorityClassName: system-cluster-critical 10 tolerations: 11 - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" updateStrategy: registryPoll: 12 interval: 30m0s status: connectionState: address: example-catalog.openshift-marketplace.svc:50051 lastConnect: 2021-08-26T18:14:31Z lastObservedState: READY 13 latestImageRegistryPoll: 2021-08-26T18:46:25Z 14 registryService: 15 createdAt: 2021-08-26T16:16:37Z port: 50051 protocol: grpc serviceName: example-catalog serviceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"registry.redhat.io/redhat/redhat-operator-index:v4.18",
"registry.redhat.io/redhat/redhat-operator-index:v4.18",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: generation: 1 name: example-catalog namespace: openshift-marketplace annotations: olm.catalogImageTemplate: \"quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}\" spec: displayName: Example Catalog image: quay.io/example-org/example-catalog:v1.31 priority: -400 publisher: Example Org",
"quay.io/example-org/example-catalog:v1.31",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: example-operator namespace: example-namespace spec: channel: stable name: example-operator source: example-catalog sourceNamespace: openshift-marketplace",
"apiVersion: operators.coreos.com/v1alpha1 kind: InstallPlan metadata: name: install-abcde namespace: operators spec: approval: Automatic approved: true clusterServiceVersionNames: - my-operator.v1.0.1 generation: 1 status: catalogSources: [] conditions: - lastTransitionTime: '2021-01-01T20:17:27Z' lastUpdateTime: '2021-01-01T20:17:27Z' status: 'True' type: Installed phase: Complete plan: - resolving: my-operator.v1.0.1 resource: group: operators.coreos.com kind: ClusterServiceVersion manifest: >- name: my-operator.v1.0.1 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1alpha1 status: Created - resolving: my-operator.v1.0.1 resource: group: apiextensions.k8s.io kind: CustomResourceDefinition manifest: >- name: webservers.web.servers.org sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1beta1 status: Created - resolving: my-operator.v1.0.1 resource: group: '' kind: ServiceAccount manifest: >- name: my-operator sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: Role manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created - resolving: my-operator.v1.0.1 resource: group: rbac.authorization.k8s.io kind: RoleBinding manifest: >- name: my-operator.v1.0.1-my-operator-6d7cbc6f57 sourceName: redhat-operators sourceNamespace: openshift-marketplace version: v1 status: Created",
"packageName: example channels: - name: alpha currentCSV: example.v0.1.2 - name: beta currentCSV: example.v0.1.3 defaultChannel: alpha",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: etcdoperator.v0.9.2 namespace: placeholder annotations: spec: displayName: etcd description: Etcd Operator replaces: etcdoperator.v0.9.0 skips: - etcdoperator.v0.9.1",
"olm.skipRange: <semver_range>",
"apiVersion: operators.coreos.com/v1alpha1 kind: ClusterServiceVersion metadata: name: elasticsearch-operator.v4.1.2 namespace: <namespace> annotations: olm.skipRange: '>=4.1.0 <4.1.2'",
"properties: - type: olm.kubeversion value: version: \"1.16.0\"",
"properties: - property: type: color value: red - property: type: shape value: square - property: type: olm.gvk value: group: olm.coreos.io version: v1alpha1 kind: myresource",
"dependencies: - type: olm.package value: packageName: prometheus version: \">0.27.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"type: olm.constraint value: failureMessage: 'require to have \"certified\"' cel: rule: 'properties.exists(p, p.type == \"certified\")'",
"type: olm.constraint value: failureMessage: 'require to have \"certified\" and \"stable\" properties' cel: rule: 'properties.exists(p, p.type == \"certified\") && properties.exists(p, p.type == \"stable\")'",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: All are required for Red because all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: GVK Green/v1 is needed for gvk: group: greens.example.com version: v1 kind: Green",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Any are required for Red because any: constraints: - gvk: group: blues.example.com version: v1beta1 kind: Blue - gvk: group: blues.example.com version: v1beta2 kind: Blue - gvk: group: blues.example.com version: v1 kind: Blue",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: all: constraints: - failureMessage: Package blue is needed for package: name: blue versionRange: '>=1.0.0' - failureMessage: Cannot be required for Red because not: constraints: - gvk: group: greens.example.com version: v1alpha1 kind: greens",
"schema: olm.bundle name: red.v1.0.0 properties: - type: olm.constraint value: failureMessage: Required for Red because any: constraints: - all: constraints: - package: name: blue versionRange: '>=1.0.0' - gvk: group: blues.example.com version: v1 kind: Blue - all: constraints: - package: name: blue versionRange: '<1.0.0' - gvk: group: blues.example.com version: v1beta1 kind: Blue",
"apiVersion: \"operators.coreos.com/v1alpha1\" kind: \"CatalogSource\" metadata: name: \"my-operators\" namespace: \"operators\" spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 1 image: example.com/my/operator-index:v1 displayName: \"My Operators\" priority: 100",
"dependencies: - type: olm.package value: packageName: etcd version: \">3.1.0\" - type: olm.gvk value: group: etcd.database.coreos.com kind: EtcdCluster version: v1beta2",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: targetNamespaces: - my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace spec: selector: cool.io/prod: \"true\"",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: my-group namespace: my-namespace",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: PackageManifest.v1alpha1.packages.apps.redhat.com name: olm-operators namespace: local spec: selector: {} serviceAccountName: metadata: creationTimestamp: null targetNamespaces: - local status: lastUpdated: 2019-02-19T16:18:28Z namespaces: - local",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: false EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OLMConfig metadata: name: cluster spec: features: disableCopiedCSVs: true EOF",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-monitoring namespace: cluster-monitoring annotations: olm.providedAPIs: Alertmanager.v1.monitoring.coreos.com,Prometheus.v1.monitoring.coreos.com,PrometheusRule.v1.monitoring.coreos.com,ServiceMonitor.v1.monitoring.coreos.com spec: staticProvidedAPIs: true selector: matchLabels: something.cool.io/cluster-monitoring: \"true\"",
"attenuated service account query failed - more than one operator group(s) are managing this namespace count=2",
"apiVersion: operators.coreos.com/v1 kind: OperatorCondition metadata: name: my-operator namespace: operators spec: conditions: - type: Upgradeable 1 status: \"False\" 2 reason: \"migration\" message: \"The Operator is performing a migration.\" lastTransitionTime: \"2020-08-24T23:15:55Z\"",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: \"stable.example.com/v1\" 1 kind: CronTab 2 metadata: name: my-new-cron-object 3 finalizers: 4 - finalizer.stable.example.com spec: 5 cronSpec: \"* * * * /5\" image: my-awesome-cron-image",
"oc create -f <file_name>.yaml",
"oc get <kind>",
"oc get crontab",
"NAME KIND my-new-cron-object CronTab.v1.stable.example.com",
"oc get crontabs",
"oc get crontab",
"oc get ct",
"oc get <kind> -o yaml",
"oc get ct -o yaml",
"apiVersion: v1 items: - apiVersion: stable.example.com/v1 kind: CronTab metadata: clusterName: \"\" creationTimestamp: 2017-05-31T12:56:35Z deletionGracePeriodSeconds: null deletionTimestamp: null name: my-new-cron-object namespace: default resourceVersion: \"285\" selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/my-new-cron-object uid: 9423255b-4600-11e7-af6a-28d2447dc82b spec: cronSpec: '* * * * /5' 1 image: my-awesome-cron-image 2"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/operators/understanding-operators |
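The catalog, package, and channel concepts described above can be inspected directly on a cluster. The following is a minimal sketch using standard oc commands; the catalog names returned are the Red Hat defaults, the package name is a placeholder, and the exact output depends on the cluster version.

oc get catalogsources -n openshift-marketplace
oc get packagemanifests -n openshift-marketplace
oc describe packagemanifest <operator_package_name> -n openshift-marketplace

The describe output lists the channels and default channel advertised for a package, which corresponds to the olm.package and olm.channel schemas shown in the catalog examples.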
6.3. About Knowledge References | 6.3. About Knowledge References After distributing the data over several databases, define the relationship between the distributed data using knowledge references , pointers to directory information held in different databases. The Directory Server provides the following types of knowledge references to help link the distributed data into a single directory tree: Referrals - The server returns a piece of information to the client application indicating that the client application needs to contact another server to fulfill the request. Chaining - The server contacts other servers on behalf of the client application and returns the combined results to the client application when the operation is finished. The following sections describe and compare these two types of knowledge references in more detail. 6.3.1. Using Referrals A referral is a piece of information returned by a server that informs a client application which server to contact to proceed with an operation request. This redirection mechanism occurs when a client application requests a directory entry that does not exist on the local server. Directory Server supports two types of referrals: Default referrals - The directory returns a default referral when a client application presents a DN for which the server does not have a matching suffix. Default referrals are stored in the configuration file of the server. One default referral can be set for the Directory Server and a separate default referral for each database. The default referral for each database is done through the suffix configuration information. When the suffix of the database is disabled, configure the directory service to return a default referral to client requests made to that suffix. For more information about suffixes, see Section 6.2.2, "About Suffixes" . For information on configuring suffixes, see the Red Hat Directory Server Administration Guide . Smart referrals - Smart referrals are stored on entries within the directory service itself. Smart referrals point to Directory Servers that have knowledge of the subtree whose DN matches the DN of the entry containing the smart referral. All referrals are returned in the format of an LDAP uniform resource locator, or LDAP URL. The following sections describe the structure of an LDAP referral, and then describe the two referral types supported by Directory Server. 6.3.1.1. The Structure of an LDAP Referral An LDAP referral contains information in the format of an LDAP URL. An LDAP URL contains the following information: The host name of the server to contact. The port number on the server that is configured to listen for LDAP requests. The base DN (for search operations) or target DN (for add, delete, and modify operations). For example, a client application searches dc=example,dc=com for entries with a surname value of Jensen . A referral returns the following LDAP URL to the client application: This referral instructs the client application to contact the host europe.example.com on port 389 and submit a search using the root suffix ou=people, l=europe,dc=example,dc=com . The LDAP client application determines how a referral is handled. Some client applications automatically retry the operation on the server to which they have been referred. Other client applications return the referral information to the user. Most LDAP client applications provided by Red Hat Directory Server (such as the command-line utilities) automatically follow the referral. 
The same bind credentials supplied on the initial directory request are used to access the server. Most client applications follow a limited number of referrals, or hops . The limit on the number of referrals that are followed reduces the time a client application spends trying to complete a directory lookup request and helps eliminate hung processes caused by circular referral patterns. 6.3.1.2. About Default Referrals Default referrals are returned to clients when the server or database that was contacted does not contain the requested data. Directory Server determines whether a default referral should be returned by comparing the DN of the requested directory object against the directory suffixes supported by the local server. If the DN does not match the supported suffixes, the Directory Server returns a default referral. For example, a directory client requests the following directory entry: uid=bjensen,ou=people,dc=example,dc=com However, the server only manages entries stored under the dc=europe,dc=example,dc=com suffix. The directory returns a referral to the client that indicates which server to contact for entries stored under the dc=example,dc=com suffix. The client then contacts the appropriate server and resubmits the original request. Configure the default referral to point to a Directory Server that has more information about the distribution of the directory service. Default referrals for the server are set by the nsslapd-referral attribute. Default referrals for each database in the directory installation are set by the nsslapd-referral attribute in the database entry in the configuration. These attribute values are stored in the dse.ldif file. For information on configuring default referrals, see the Red Hat Directory Server Administration Guide . 6.3.1.3. Smart Referrals The Directory Server can also use smart referrals . Smart referrals associate a directory entry or directory tree to a specific LDAP URL. This means that requests can be forwarded to any of the following: The same namespace contained on a different server. Different namespaces on a local server. Different namespaces on the same server. Unlike default referrals, smart referrals are stored within the directory service itself. For information on configuring and managing smart referrals, see the Red Hat Directory Server Administration Guide . For example, the directory service for the American office of the Example Corp. contains the ou=people,dc=example,dc=com directory branch point. Redirect all requests on this branch to the ou=people branch of the European office of Example Corp. by specifying a smart referral on the ou=people entry itself. The smart referral is ldap://europe.example.com:389/ou=people,dc=example,dc=com . Any requests made to the people branch of the American directory service are redirected to the European directory. This is illustrated below: Figure 6.7. Using Smart Referrals to Redirect Requests The same mechanism can be used to redirect queries to a different server that uses a different namespace. For example, an employee working in the Italian office of Example Corp. makes a request to the European directory service for the phone number of an Example Corp. employee in America. The directory service returns the referral ldap://europe.example.com:389/ou=US employees,dc=example,dc=com . Figure 6.8. 
Redirecting a Query to a Different Server and Namespace Finally, if multiple suffixes are served on the same server, queries can be redirected from one namespace to another namespace served on the same machine. For example, to redirect all queries on the local machine for o=example,c=us to dc=example,dc=com , then put the smart referral ldap:///dc=example,dc=com on the o=example,c=us entry. Figure 6.9. Redirecting a Query from One Namespace to Another Namespace on the Same Server Note The third slash in this LDAP URL indicates that the URL points to the same Directory Server. Creating a referral from one namespace to another works only for clients whose searches are based at that distinguished name. Other kinds of operations, such as searches below ou=people,o=example,c=US , are not performed correctly. For more information on LDAP URLS and on how to include smart URLs on Directory Server entries, see to the Red Hat Directory Server Administration Guide . 6.3.1.4. Tips for Designing Smart Referrals Even though smart referrals are easy to implement, consider the following points before using them: Keep the design simple. Deploying the directory service using a complex web of referrals makes administration difficult. Overusing smart referrals can also lead to circular referral patterns. For example, a referral points to an LDAP URL, which in turn points to another LDAP URL, and so on until a referral somewhere in the chain points back to the original server. This is illustrated below: Figure 6.10. A Circular Referral Pattern Redirect at major branchpoints. Limit referral usage to handle redirection at the suffix level of the directory tree. Smart referrals redirect lookup requests for leaf (non-branch) entries to different servers and DNs. As a result, it is tempting to use smart referrals as an aliasing mechanism, leading to a complex and difficult method to secure directory structure. Limiting referrals to the suffix or major branch points of the directory tree limits the number of referrals that have to be managed, subsequently reducing the directory's administrative overhead. Consider the security implications. Access control does not cross referral boundaries. Even if the server where the request originated allows access to an entry, when a smart referral sends a client request to another server, the client application may not be allowed access. In addition, the client's credentials need to be available on the server to which the client is referred for client authentication to occur. 6.3.2. Using Chaining Chaining is a method for relaying requests to another server. This method is implemented through database links. A database link, as described in Section 6.2, "Distributing the Directory Data" , contains no data. Instead, it redirects client application requests to remote servers that contain the data. During the chaining process, a server receives a request from a client application for data that the server does not contain. Using the database link, the server then contacts other servers on behalf of the client application and returns the results to the client application. Each database link is associated with a remote server holding data. Configure alternate remote servers containing replicas of the data for the database link to use in the event of a failure. For more information on configuring database links, see the Red Hat Directory Server Administration Guide . Database links provide the following features: Invisible access to remote data. 
Because the database link resolves client requests, data distribution is completely hidden from the client. Dynamic management. A part of the directory service can be added or removed from the system while the entire system remains available to client applications. The database link can temporarily return referrals to the application until entries have been redistributed across the directory service. This can also be implemented through the suffix itself, which can return a referral rather than forwarding a client application to the database. Access control. The database link impersonates the client application, providing the appropriate authorization identity to the remote server. User impersonation can be disabled on the remote servers when access control evaluation is not required. For more information on configuring database links, see the Red Hat Directory Server Administration Guide . 6.3.3. Deciding Between Referrals and Chaining Both methods of linking the directory partitions have advantages and disadvantages. The method, or combination of methods, to use depends upon the specific needs of the directory service. The major difference between the two knowledge references is the location of the intelligence that knows how to locate the distributed information. In a chained system, the intelligence is implemented in the servers. In a system that uses referrals, the intelligence is implemented in the client application. While chaining reduces client complexity, it does so at the cost of increased server complexity. Chained servers must work with remote servers and send the results to directory clients. With referrals, the client must handle locating the referral and collating search results. However, referrals offer more flexibility for the writers of client applications and allow developers to provide better feedback to users about the progress of a distributed directory operation. The following sections describe some of the more specific differences between referrals and chaining in greater detail. 6.3.3.1. Usage Differences Some client applications do not support referrals. Chaining allows client applications to communicate with a single server and still access the data stored on many servers. Sometimes referrals do not work when a company's network uses proxies. For example, a client application may have permissions to communicate with only one server inside a firewall. If that application is referred to a different server, it is not able to contact it successfully. A client must also be able to authenticate correctly when using referrals, which means that the servers to which clients are being referred need to contain the client's credentials. With chaining, client authentication takes place only once. Clients do not need to authenticate again on the servers to which their requests are chained. 6.3.3.2. Evaluating Access Controls Chaining evaluates access controls differently from referrals. With referrals, an entry for the client must exist on all of the target servers. With chaining, the client entry does not need to be on all of the target servers. Performing Search Requests Using Referrals The following diagram illustrates a client request to a server using referrals: Figure 6.11. Sending a Client Request to a Server Using Referrals In the illustration above, the client application performs the following steps: The client application first binds with Server A. 
Server A contains an entry for the client that provides a user name and password, so it returns a bind acceptance message. In order for the referral to work, the client entry must be present on server A. The client application sends the operation request to Server A. However, Server A does not contain the requested information. Instead, Server A returns a referral to the client application instructing it to contact Server B. The client application then sends a bind request to Server B. To bind successfully, Server B must also contain an entry for the client application. The bind is successful, and the client application can now resubmit its search operation to Server B. This approach requires Server B to have a replicated copy of the client's entry from Server A. Performing Search Requests Using Chaining The problem of replicating client entries across servers is resolved using chaining. On a chained system, the search request is forwarded multiple times until there is a response. Figure 6.12. Sending a Client Request to a Server Using Chaining In the illustration above, the following steps are performed: The client application binds with Server A, and Server A tries to confirm that the user name and password are correct. Server A does not contain an entry corresponding to the client application. Instead, it contains a database link to Server B, which contains the actual entry of the client. Server A sends a bind request to Server B. Server B sends an acceptance response to Server A. Server A then processes the client application's request using the database link. The database link contacts a remote data store located on Server B to process the search operation. In a chained system, the entry corresponding to the client application does not need to be located on the same server as the data the client requests. Figure 6.13. Authenticating a Client and Retrieving Data Using Different Servers In this illustration, the following steps are performed: The client application binds with Server A, and Server A tries to confirm that the user name and password are correct. Server A does not contain an entry corresponding to the client application. Instead, it contains a database link to Server B, which contains the actual entry of the client. Server A sends a bind request to Server B. Server B sends an acceptance response to Server A. Server A then processes the client application's request using another database link. The database link contacts a remote data store located on Server C to process the search operation. Unsupported Access Controls Database links do not support the following access controls: Controls that must access the content of the user entry are not supported when the user entry is located on a different server. This includes access controls based on groups, filters, and roles. Controls based on client IP addresses or DNS domains may be denied. This is because the database link impersonates the client when it contacts remote servers. If the remote database contains IP-based access controls, it evaluates them using the database link's domain rather than the original client domain. | [
"ldap://europe.example.com:389/ou=people, l=europe,dc=example,dc=com"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_the_directory_topology-about_knowledge_references |
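As a hedged illustration of the smart referral example above, the following ldapmodify sketch adds the referral object class and a ref attribute to the ou=people entry. The host names and bind credentials are placeholders, and the use of the -M (ManageDsaIT) option is an assumption to adapt to your deployment; see the Administration Guide for the authoritative procedure.

ldapmodify -D "cn=Directory Manager" -W -p 389 -h us.example.com -M << EOF
dn: ou=people,dc=example,dc=com
changetype: modify
add: objectclass
objectclass: referral
-
add: ref
ref: ldap://europe.example.com:389/ou=people,dc=example,dc=com
EOF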
21.3. Installation | 21.3. Installation To install libguestfs, guestfish, the libguestfs tools, and guestmount, enter the following command: To install every libguestfs-related package including the language bindings, enter the following command: | [
"yum install libguestfs libguestfs-tools",
"yum install '*guestf*'"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-guest_virtual_machine_disk_access_with_offline_tools-installation |
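After installing the packages, a quick sanity check can confirm that the libguestfs appliance works. The disk image path in the guestfish example below is a placeholder; the image is opened read-only so the check is non-destructive.

libguestfs-test-tool
guestfish --ro -a /path/to/disk.img
><fs> run
><fs> list-filesystems
><fs> exit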
Chapter 14. Performing overcloud post-installation tasks | Chapter 14. Performing overcloud post-installation tasks This chapter contains information about tasks to perform immediately after you create your overcloud. These tasks ensure your overcloud is ready to use. 14.1. Checking overcloud deployment status To check the deployment status of the overcloud, use the openstack overcloud status command. This command returns the result of all deployment steps. Procedure Source the stackrc file: Run the deployment status command: The output of this command displays the status of the overcloud: If your overcloud uses a different name, use the --stack argument to select an overcloud with a different name: Replace <overcloud_name> with the name of your overcloud. 14.2. Creating basic overcloud flavors Validation steps in this guide assume that your installation contains flavors. If you have not already created at least one flavor, complete the following steps to create a basic set of default flavors that have a range of storage and processing capabilities: Procedure Source the overcloudrc file: Run the openstack flavor create command to create a flavor. Use the following options to specify the hardware requirements for each flavor: --disk Defines the hard disk space for a virtual machine volume. --ram Defines the RAM required for a virtual machine. --vcpus Defines the quantity of virtual CPUs for a virtual machine. The following example creates the default overcloud flavors: Note Use USD openstack flavor create --help to learn more about the openstack flavor create command. 14.3. Creating a default tenant network The overcloud requires a default Tenant network so that virtual machines can communicate internally. Procedure Source the overcloudrc file: Create the default Tenant network: Create a subnet on the network: Confirm the created network: These commands create a basic Networking service (neutron) network named default . The overcloud automatically assigns IP addresses from this network to virtual machines using an internal DHCP mechanism. 14.4. Creating a default floating IP network To access your virtual machines from outside of the overcloud, you must configure an external network that provides floating IP addresses to your virtual machines. This procedure contains two examples. Use the example that best suits your environment: Native VLAN (flat network) Non-Native VLAN (VLAN network) Both of these examples involve creating a network with the name public . The overcloud requires this specific name for the default floating IP pool. This name is also important for the validation tests in Section 14.7, "Validating the overcloud" . By default, Openstack Networking (neutron) maps a physical network name called datacentre to the br-ex bridge on your host nodes. You connect the public overcloud network to the physical datacentre and this provides a gateway through the br-ex bridge. Prerequisites A dedicated interface or native VLAN for the floating IP network. Procedure Source the overcloudrc file: Create the public network: Create a flat network for a native VLAN connection: Create a vlan network for non-native VLAN connections: Use the --provider-segment option to define the VLAN that you want to use. In this example, the VLAN is 201 . Create a subnet with an allocation pool for floating IP addresses. In this example, the IP range is 10.1.1.51 to 10.1.1.250 : Ensure that this range does not conflict with other IP addresses in your external network. 14.5. 
Creating a default provider network A provider network is another type of external network connection that routes traffic from private tenant networks to external infrastructure network. The provider network is similar to a floating IP network but the provider network uses a logical router to connect private networks to the provider network. This procedure contains two examples. Use the example that best suits your environment: Native VLAN (flat network) Non-Native VLAN (VLAN network) By default, Openstack Networking (neutron) maps a physical network name called datacentre to the br-ex bridge on your host nodes. You connect the public overcloud network to the physical datacentre and this provides a gateway through the br-ex bridge. Procedure Source the overcloudrc file: Create the provider network: Create a flat network for a native VLAN connection: Create a vlan network for non-native VLAN connections: Use the --provider-segment option to define the VLAN that you want to use. In this example, the VLAN is 201 . These example commands create a shared network. It is also possible to specify a tenant instead of specifying --share so that only the tenant has access to the new network. + If you mark a provider network as external, only the operator may create ports on that network. Add a subnet to the provider network to provide DHCP services: Create a router so that other networks can route traffic through the provider network: Set the external gateway for the router to the provider network: Attach other networks to this router. For example, run the following command to attach a subnet subnet1 to the router: This command adds subnet1 to the routing table and allows traffic from virtual machines using subnet1 to route to the provider network. 14.6. Creating additional bridge mappings Floating IP networks can use any bridge, not just br-ex , provided that you map the additional bridge during deployment. Procedure To map a new bridge called br-floating to the floating physical network, include the NeutronBridgeMappings parameter in an environment file: With this method, you can create separate external networks after creating the overcloud. For example, to create a floating IP network that maps to the floating physical network, run the following commands: 14.7. Validating the overcloud The overcloud uses the OpenStack Integration Test Suite (tempest) tool set to conduct a series of integration tests. This section contains information about preparations for running the integration tests. For full instructions about how to use the OpenStack Integration Test Suite, see the OpenStack Integration Test Suite Guide . The Integration Test Suite requires a few post-installation steps to ensure successful tests. Procedure If you run this test from the undercloud, ensure that the undercloud host has access to the Internal API network on the overcloud. For example, add a temporary VLAN on the undercloud host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address: Run the integration tests as described in the OpenStack Integration Test Suite Guide . After completing the validation, remove any temporary connections to the overcloud Internal API. In this example, use the following commands to remove the previously created VLAN on the undercloud: 14.8. Protecting the overcloud from removal Set a custom policy for heat to protect your overcloud from being deleted. Procedure Create an environment file called prevent-stack-delete.yaml . 
Set the HeatApiPolicies parameter: Important The heat-deny-action is a default policy that you must include in your undercloud installation. Add the prevent-stack-delete.yaml environment file to the custom_env_files parameter in the undercloud.conf file: Run the undercloud installation command to refresh the configuration: This environment file prevents you from deleting any stacks in the overcloud, which means you cannot perform the following functions: Delete the overcloud Remove individual Compute or Ceph Storage nodes Replace Controller nodes To enable stack deletion, remove the prevent-stack-delete.yaml file from the custom_env_files parameter and run the openstack undercloud install command. | [
"source ~/stackrc",
"openstack overcloud status",
"+-----------+---------------------+---------------------+-------------------+ | Plan Name | Created | Updated | Deployment Status | +-----------+---------------------+---------------------+-------------------+ | overcloud | 2018-05-03 21:24:50 | 2018-05-03 21:27:59 | DEPLOY_SUCCESS | +-----------+---------------------+---------------------+-------------------+",
"openstack overcloud status --stack <overcloud_name>",
"source ~/overcloudrc",
"openstack flavor create m1.tiny --ram 512 --disk 0 --vcpus 1 openstack flavor create m1.smaller --ram 1024 --disk 0 --vcpus 1 openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1 openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2 openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4 openstack flavor create m1.xlarge --ram 8192 --disk 10 --vcpus 8",
"source ~/overcloudrc",
"(overcloud) USD openstack network create default",
"(overcloud) USD openstack subnet create default --network default --gateway 172.20.1.1 --subnet-range 172.20.0.0/16",
"(overcloud) USD openstack network list +-----------------------+-------------+--------------------------------------+ | id | name | subnets | +-----------------------+-------------+--------------------------------------+ | 95fadaa1-5dda-4777... | default | 7e060813-35c5-462c-a56a-1c6f8f4f332f | +-----------------------+-------------+--------------------------------------+",
"source ~/overcloudrc",
"(overcloud) USD openstack network create public --external --provider-network-type flat --provider-physical-network datacentre",
"(overcloud) USD openstack network create public --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201",
"(overcloud) USD openstack subnet create public --network public --dhcp --allocation-pool start=10.1.1.51,end=10.1.1.250 --gateway 10.1.1.1 --subnet-range 10.1.1.0/24",
"source ~/overcloudrc",
"(overcloud) USD openstack network create provider --external --provider-network-type flat --provider-physical-network datacentre --share",
"(overcloud) USD openstack network create provider --external --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201 --share",
"(overcloud) USD openstack subnet create provider-subnet --network provider --dhcp --allocation-pool start=10.9.101.50,end=10.9.101.100 --gateway 10.9.101.254 --subnet-range 10.9.101.0/24",
"(overcloud) USD openstack router create external",
"(overcloud) USD openstack router set --external-gateway provider external",
"(overcloud) USD openstack router add subnet external subnet1",
"parameter_defaults: NeutronBridgeMappings: \"datacentre:br-ex,floating:br-floating\"",
"source ~/overcloudrc (overcloud) USD openstack network create public --external --provider-physical-network floating --provider-network-type vlan --provider-segment 105 (overcloud) USD openstack subnet create public --network public --dhcp --allocation-pool start=10.1.2.51,end=10.1.2.250 --gateway 10.1.2.1 --subnet-range 10.1.2.0/24",
"source ~/stackrc (undercloud) USD sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal (undercloud) USD sudo ip l set dev vlan201 up; sudo ip addr add 172.16.0.201/24 dev vlan201",
"source ~/stackrc (undercloud) USD sudo ovs-vsctl del-port vlan201",
"parameter_defaults: HeatApiPolicies: heat-deny-action: key: 'actions:action' value: 'rule:deny_everybody' heat-protect-overcloud: key: 'stacks:delete' value: 'rule:deny_everybody'",
"custom_env_files = prevent-stack-delete.yaml",
"openstack undercloud install"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_performing-overcloud-post-installation-tasks |
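To confirm that the flavors, default tenant network, and public floating IP network created above work together, one hedged verification is to boot a test instance and attach a floating IP. The image and key pair names are placeholders that must already exist in the overcloud, and the floating IP address is the one returned by the create command.

source ~/overcloudrc
openstack server create --flavor m1.small --image <image_name> --key-name <keypair_name> --network default test-instance
openstack floating ip create public
openstack server add floating ip test-instance <allocated_floating_ip>
openstack server list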
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.11/making-open-source-more-inclusive |
7.91. java-1.8.0-openjdk | 7.91. java-1.8.0-openjdk 7.91.1. RHBA-2015:1427 - java-1.8.0-openjdk bug fix and enhancement update Updated java-1.8.0-openjdk packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The java-1.8.0-openjdk packages contain the latest version of the Open Java Development Kit (OpenJDK), OpenJDK 8. These packages provide a fully compliant implementation of Java SE 8. Bug Fixes BZ# 1154143 In Red Hat Enterprise Linux 6, the java-1.8.0-openjdk packages mistakenly included the SunEC provider, which does not function properly on this system. With this update, SunEC has been removed from the Red Hat Enterprise Linux 6 version of java-1.8.0-openjdk. BZ# 1155783 Prior to this update, the java-1.8.0-openjdk packages incorrectly provided "java-devel", which could lead to their inclusion in inappropriate builds. As a consequence, the "yum install java-devel" command in some cases installed java-1.8.0-openjdk-devel instead of the intended Java package. This update removes the providing configuration, and java-1.8.0-openjdk-devel can now be installed only by using the "yum install java-1.8.0-openjdk-devel" command. BZ# 1182011 Previously, the OpenJDK utility displayed characters containing the umlaut diacritical mark (such as ä, ö, or ü) and the eszett character (ß) in PostScript output incorrectly. A patch with support for umlaut and eszett characters has been applied, and OpenJDK now displays these characters correctly. BZ# 1189853 The java-1.8.0-openjdk package for Red Hat Enterprise Linux 6 did not provide the "java" virtual package. Consequently, when a package needed to use OpenJDK 8, it was necessary to require "java-1.8.0-openjdk" instead of commonly used "java". Now, it is sufficient to require "java" as expected. BZ# 1212592 OpenJDK used a copy of the system time zone data. This could cause a difference between OpenJDK time and the system time. Now, OpenJDK uses the system time zone data, and OpenJDK time and the system time are the same. Enhancement BZ# 1210007 Red Hat now provides debug builds of OpenJDK in optional channels. With installed debug builds and JVM or JDK switched to using them, it is possible to do detailed HotSpot debugging. The debug builds can be used via alternatives or direct execution, in the same way as regular Java builds. Note that debug builds are not suitable for use in production, as they operate at a slower rate. Users of java-1.8.0-openjdk are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. All running instances of OpenJDK Java must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-java-1.8.0-openjdk
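For the debug builds enhancement (BZ#1210007), switching is done through the alternatives mechanism mentioned above. The sketch below assumes the debug variant has already been installed from the optional channel; the exact debug package names depend on the channel layout, so treat them as placeholders.

yum install <java-1.8.0-openjdk-debug-packages>   # placeholder: exact package names vary
alternatives --config java                        # select the debug build from the list
java -version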
Chapter 7. Viewing and exporting logs | Chapter 7. Viewing and exporting logs Activity logs are gathered for all repositories and namespaces in Quay.io. Viewing usage logs of Quay.io can provide valuable insights and benefits for both operational and security purposes. Usage logs might reveal the following information: Resource Planning : Usage logs can provide data on the number of image pulls, pushes, and overall traffic to your registry. User Activity : Logs can help you track user activity, showing which users are accessing and interacting with images in the registry. This can be useful for auditing, understanding user behavior, and managing access controls. Usage Patterns : By studying usage patterns, you can gain insights into which images are popular, which versions are frequently used, and which images are rarely accessed. This information can help prioritize image maintenance and cleanup efforts. Security Auditing : Usage logs enable you to track who is accessing images and when. This is crucial for security auditing, compliance, and investigating any unauthorized or suspicious activity. Image Lifecycle Management : Logs can reveal which images are being pulled, pushed, and deleted. This information is essential for managing image lifecycles, including deprecating old images and ensuring that only authorized images are used. Compliance and Regulatory Requirements : Many industries have compliance requirements that mandate tracking and auditing of access to sensitive resources. Usage logs can help you demonstrate compliance with such regulations. Identifying Abnormal Behavior : Unusual or abnormal patterns in usage logs can indicate potential security breaches or malicious activity. Monitoring these logs can help you detect and respond to security incidents more effectively. Trend Analysis : Over time, usage logs can provide trends and insights into how your registry is being used. This can help you make informed decisions about resource allocation, access controls, and image management strategies. There are multiple ways of accessing log files: Viewing logs through the web UI. Exporting logs so that they can be saved externally. Accessing log entries using the API. To access logs, you must have administrative privileges for the selected repository or namespace. Note A maximum of 100 log results are available at a time via the API. To gather more results than that, you must use the log exporter feature described in this chapter. 7.1. Viewing logs using the UI Use the following procedure to view log entries for a repository or namespace using the web UI. Procedure Navigate to a repository or namespace for which you are an administrator. In the navigation pane, select Usage Logs . Optional. On the usage logs page: Set the date range for viewing log entries by adding dates to the From and to boxes. By default, the UI shows you the most recent week of log entries. Type a string into the Filter Logs box to display log entries that contain the specified keyword. For example, you can type delete to filter the logs to show deleted tags. Under Description , toggle the arrow of a log entry to see more, or less, text associated with a specific log entry. 7.2. Exporting repository logs You can obtain a larger number of log files and save them outside of Quay.io by using the Export Logs feature. This feature has the following benefits and constraints: You can choose a range of dates for the logs you want to gather from a repository.
You can request that the logs be sent to you as an email attachment or directed to a callback URL. To export logs, you must be an administrator of the repository or namespace. 30 days' worth of logs are retained for all users. Export logs only gathers log data that was previously produced. It does not stream logging data. When logs are gathered and made available to you, you should immediately copy that data if you want to save it. By default, the data expires after one hour. Use the following procedure to export logs. Procedure Select a repository for which you have administrator privileges. In the navigation pane, select Usage Logs . Optional. If you want to limit the export to specific dates, enter the range in the From and to boxes. Click the Export Logs button. An Export Usage Logs pop-up appears. Enter an email address or callback URL to receive the exported log. For the callback URL, you can use a URL to a specified domain, for example, <webhook.site>. Select Start Logs Export to start the process of gathering the selected log entries. Depending on the amount of logging data being gathered, this can take anywhere from a few minutes to several hours to complete. When the log export is completed, one of the following two events happens: An email is received, alerting you to the availability of your requested exported log entries. A successful status of your log export request from the webhook URL is returned. Additionally, a link to the exported data is made available for you to download the logs. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/about_quay_io/use-quay-view-export-logs
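The chapter above notes that log entries can also be retrieved through the API (limited to 100 results per request) and that exports can be delivered to a callback URL. A hedged sketch using curl follows; the endpoint paths, query parameters, and body field names are assumptions based on the Quay v1 API and should be verified against the Quay.io API reference before use.

# Fetch recent usage logs for a repository (an OAuth token with admin scope on the repository is assumed)
curl -s -H "Authorization: Bearer <token>" \
  "https://quay.io/api/v1/repository/<namespace>/<repo>/logs?starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>"
# Request an export delivered to a callback URL instead of an email address
curl -s -X POST -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
  -d '{"starttime": "<MM/DD/YYYY>", "endtime": "<MM/DD/YYYY>", "callback_url": "https://webhook.site/<id>"}' \
  "https://quay.io/api/v1/repository/<namespace>/<repo>/exportlogs"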
Chapter 144. KafkaNodePool schema reference | Chapter 144. KafkaNodePool schema reference Property Description spec The specification of the KafkaNodePool. KafkaNodePoolSpec status The status of the KafkaNodePool. KafkaNodePoolStatus | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaNodePool-reference |
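The schema reference above lists only the spec and status properties of KafkaNodePool. For orientation, a minimal manifest is sketched below and applied with kubectl; the pool name, cluster label, replica count, and storage sizing are illustrative assumptions, and the target Kafka cluster is assumed to have node pools enabled.

cat <<'EOF' | kubectl apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a                      # hypothetical pool name
  labels:
    strimzi.io/cluster: my-cluster  # must match an existing Kafka resource
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
EOF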
Chapter 19. Preparing for Installation | Chapter 19. Preparing for Installation 19.1. Preparing for a Network Installation Note Make sure no installation DVD (or any other type of DVD or CD) is in your hosting partition's drive if you are performing a network-based installation. Having a DVD or CD in the drive might cause unexpected errors. Ensure that you have boot media available as described in Chapter 20, Booting (IPL) the Installer . The Red Hat Enterprise Linux installation medium must be available for either a network installation (via NFS, FTP, HTTP, or HTTPS) or installation via local storage. Use the following steps if you are performing an NFS, FTP, HTTP, or HTTPS installation. The NFS, FTP, HTTP, or HTTPS server to be used for installation over the network must be a separate, network-accessible server. The separate server can be a virtual machine, LPAR, or any other system (such as a Linux on Power Systems or x86 system). It must provide the complete contents of the installation DVD-ROM. Note The public directory used to access the installation files over FTP, NFS, HTTP, or HTTPS is mapped to local storage on the network server. For example, the local directory /var/www/inst/rhel6.9 on the network server can be accessed as http://network.server.com/inst/rhel6.9 . In the following examples, the directory on the installation staging server that will contain the installation files will be specified as /location/of/disk/space . The directory that will be made publicly available via FTP, NFS, HTTP, or HTTPS will be specified as /publicly_available_directory . For example, /location/of/disk/space may be a directory you create called /var/isos . /publicly_available_directory might be /var/www/html/rhel6.9 , for an HTTP install. In the following, you will require an ISO image . An ISO image is a file containing an exact copy of the content of a DVD. To create an ISO image from a DVD use the following command: where dvd is your DVD drive device, name_of_image is the name you give to the resulting ISO image file, and path_to_image is the path to the location on your system where the resulting ISO image will be stored. To copy the files from the installation DVD to a Linux instance, which acts as an installation staging server, continue with either Section 19.1.1, "Preparing for FTP, HTTP, and HTTPS Installation" or Section 19.1.2, "Preparing for an NFS Installation" . 19.1.1. Preparing for FTP, HTTP, and HTTPS Installation Warning If your Apache web server or tftp FTP server configuration enables SSL security, make sure to only enable the TLSv1 protocol, and disable SSLv2 and SSLv3 . This is due to the POODLE SSL vulnerability (CVE-2014-3566). See https://access.redhat.com/solutions/1232413 for details about securing Apache , and https://access.redhat.com/solutions/1234773 for information about securing tftp . Extract the files from the ISO image of the installation DVD and place them in a directory that is shared over FTP, HTTP, or HTTPS. , make sure that the directory is shared via FTP, HTTP, or HTTPS, and verify client access. Test to see whether the directory is accessible from the server itself, and then from another machine on the same subnet to which you will be installing. 19.1.2. Preparing for an NFS Installation For NFS installation it is not necessary to extract all the files from the ISO image. It is sufficient to make the ISO image itself, the install.img file, and optionally the product.img file available on the network server via NFS. 
Transfer the ISO image to the NFS exported directory. On a Linux system, run: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and publicly_available_directory is a directory that is available over NFS or that you intend to make available over NFS. Use a SHA256 checksum program to verify that the ISO image that you copied is intact. Many SHA256 checksum programs are available for various operating systems. On a Linux system, run: where name_of_image is the name of the ISO image file. The SHA256 checksum program displays a string of 64 characters called a hash . Compare this hash to the hash displayed for this particular image on the Downloads page in the Red Hat Customer Portal (refer to Chapter 1, Obtaining Red Hat Enterprise Linux ). The two hashes should be identical. Copy the images/ directory from inside the ISO image to the same directory in which you stored the ISO image file itself. Enter the following commands: where path_to_image is the path to the ISO image file, name_of_image is the name of the ISO image file, and mount_point is a mount point on which to mount the image while you copy files from the image. For example: The ISO image file and an images/ directory are now present, side-by-side, in the same directory. Verify that the images/ directory contains at least the install.img file, without which installation cannot proceed. Optionally, the images/ directory should contain the product.img file, without which only the packages for a Minimal installation will be available during the package group selection stage (refer to Section 23.17, "Package Group Selection" ). Ensure that an entry for the publicly available directory exists in the /etc/exports file on the network server so that the directory is available via NFS. To export a directory read-only to a specific system, use: To export a directory read-only to all systems, use: On the network server, start the NFS daemon (on a Red Hat Enterprise Linux system, use /sbin/service nfs start ). If NFS is already running, reload the configuration file (on a Red Hat Enterprise Linux system use /sbin/service nfs reload ). Be sure to test the NFS share following the directions in the Red Hat Enterprise Linux Deployment Guide . Refer to your NFS documentation for details on starting and stopping the NFS server. Note anaconda has the ability to test the integrity of the installation media. It works with the DVD, hard drive ISO, and NFS ISO installation methods. We recommend that you test all installation media before starting the installation process, and before reporting any installation-related bugs (many of the bugs reported are actually due to improperly-burned DVDs). To use this test, type the following command at the boot: prompt: | [
"dd if=/dev/ dvd of=/ path_to_image / name_of_image .iso",
"mv / path_to_image / name_of_image .iso / publicly_available_directory /",
"sha256sum name_of_image .iso",
"mount -t iso9660 / path_to_image / name_of_image .iso / mount_point -o loop,ro cp -pr / mount_point /images / publicly_available_directory / umount / mount_point",
"mount -t iso9660 /var/isos/RHEL6.iso /mnt/tmp -o loop,ro cp -pr /mnt/tmp/images /var/isos/ umount /mnt/tmp",
"/publicly_available_directory client.ip.address (ro)",
"/publicly_available_directory * (ro)",
"linux mediacheck"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-preparing-s390 |
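The NFS procedure above says to add the export entry and reload the NFS service, but it does not show how to confirm that the share is actually visible to installation clients. A short verification sketch follows; the server host name is a placeholder taken from the example in the text.

# Re-export all entries in /etc/exports and list what is currently exported
exportfs -r
exportfs -v
# From the installation client, confirm the share is advertised by the server
showmount -e network.server.com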
Kafka configuration properties | Kafka configuration properties Red Hat Streams for Apache Kafka 2.9 Use configuration properties to configure Kafka components | [
"Further, when in `read_committed` the seekToEnd method will return the LSO .",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html-single/kafka_configuration_properties/index |
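The snippet above refers to consumer behavior under read_committed isolation, where seekToEnd returns the last stable offset (LSO). A minimal consumer configuration illustrating that property is sketched below; the bootstrap address, group ID, topic, and file path are placeholders, not defaults.

# Illustrative consumer.properties enabling transactional read isolation
cat > /tmp/consumer.properties <<'EOF'
bootstrap.servers=my-cluster-kafka-bootstrap:9092
group.id=example-group
# Only read messages from committed transactions; seekToEnd then returns the LSO
isolation.level=read_committed
EOF
bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 \
  --topic my-topic --consumer.config /tmp/consumer.properties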
Chapter 3. Red Hat Advanced Cluster Security Cloud Service architecture | Chapter 3. Red Hat Advanced Cluster Security Cloud Service architecture Discover Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) architecture and concepts. 3.1. Red Hat Advanced Cluster Security Cloud Service architecture overview Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) is a Red Hat managed Software-as-a-Service (SaaS) platform that lets you protect your Kubernetes and OpenShift Container Platform clusters and applications throughout the build, deploy, and runtime lifecycles. RHACS Cloud Service includes many built-in DevOps enforcement controls and security-focused best practices based on industry standards such as the Center for Internet Security (CIS) benchmarks and the National Institute of Standards Technology (NIST) guidelines. You can also integrate it with your existing DevOps tools and workflows to improve security and compliance. RHACS Cloud Service architecture The following graphic shows the architecture with the StackRox Scanner and Scanner V4. Installation of Scanner V4 is optional, but provides additional benefits. Central services include the user interface (UI), data storage, RHACS application programming interface (API), and image scanning capabilities. You deploy your Central service through the Red Hat Hybrid Cloud Console. When you create a new ACS instance, Red Hat creates your individual control plane for RHACS. RHACS Cloud Service allows you to secure self-managed clusters that communicate with a Central instance. The clusters you secure, called Secured Clusters, are managed by you, and not by Red Hat. Secured Cluster services include optional vulnerability scanning services, admission control services, and data collection services used for runtime monitoring and compliance. You install Secured Cluster services on any OpenShift or Kubernetes cluster you want to secure. 3.2. Central Red Hat manages Central, the control plane for RHACS Cloud Service. These services include the following components: Central : Central is the RHACS application management interface and services. It handles API interactions and user interface (RHACS Portal) access. Central DB : Central DB is the database for RHACS and handles all data persistence. It is currently based on PostgreSQL 13. Scanner V4 : Beginning with version 4.4, RHACS contains the Scanner V4 vulnerability scanner for scanning container images. Scanner V4 is built on ClairCore , which also powers the Clair scanner. Scanner V4 includes the Indexer, Matcher, and Scanner V4 DB components, which are used in scanning. StackRox Scanner : The StackRox Scanner is the default scanner in RHACS. The StackRox Scanner originates from a fork of the Clair v2 open source scanner. Scanner-DB : This database contains data for the StackRox Scanner. RHACS scanners analyze each image layer to determine the base operating system and identify programming language packages and packages that were installed by the operating system package manager. They match the findings against known vulnerabilities from various vulnerability sources. In addition, the StackRox Scanner identifies vulnerabilities in the node's operating system and platform. These capabilities are planned for Scanner V4 in a future release. 3.2.1. Vulnerability data sources Sources for vulnerabilities depend on the scanner that is used in your system. RHACS contains two scanners: StackRox Scanner and Scanner V4. 
StackRox Scanner is the default scanner and is deprecated beginning with release 4.6. Scanner V4 was introduced in release 4.4 and is the recommended image scanner. 3.2.1.1. StackRox Scanner sources StackRox Scanner uses the following vulnerability sources: Red Hat OVAL v2 Alpine Security Database Data tracked in Amazon Linux Security Center Debian Security Tracker Ubuntu CVE Tracker NVD : This is used for various purposes such as filling in information gaps when vendors do not provide information. For example, Alpine does not provide a description, CVSS score, severity, or published date. Note This product uses the NVD API but is not endorsed or certified by the NVD. Linux manual entries and NVD manual entries : The upstream StackRox project maintains a set of vulnerabilities that might not be discovered due to data formatting from other sources or absence of data. repository-to-cpe.json : Maps RPM repositories to their related CPEs, which is required for matching vulnerabilities for RHEL-based images. 3.2.1.2. Scanner V4 sources Scanner V4 uses the following vulnerability sources: Red Hat VEX Used with release 4.6 and later. This source provides vulnerability data in Vulnerability Exploitability eXchange(VEX) format. RHACS takes advantage of VEX benefits to significantly decrease the time needed for the initial loading of vulnerability data, and the space needed to store vulnerability data. RHACS might list a different number of vulnerabilities when you are scanning with a RHACS version that uses OVAL, such as RHACS version 4.5, and a version that uses VEX, such as version 4.6. For example, RHACS no longer displays vulnerabilities with a status of "under investigation," while these vulnerabilities were included with versions that used OVAL data. For more information about Red Hat security data, including information about the use of OVAL, Common Security Advisory Framework Version 2.0 (CSAF), and VEX, see The future of Red Hat security data . Red Hat CVE Map This is used in addition with VEX data for images which appear in the Red Hat Container Catalog . OSV This is used for language-related vulnerabilities, such as Go, Java, JavaScript, Python, and Ruby. This source might provide vulnerability IDs other than CVE IDs for vulnerabilities, such as a GitHub Security Advisory (GHSA) ID. Note RHACS uses the OSV database available at OSV.dev under Apache License 2.0 . NVD This is used for various purposes such as filling in information gaps when vendors do not provide information. For example, Alpine does not provide a description, CVSS score, severity, or published date. Note This product uses the NVD API but is not endorsed or certified by the NVD. Additional vulnerability sources Alpine Security Database Data tracked in Amazon Linux Security Center Debian Security Tracker Oracle OVAL Photon OVAL SUSE OVAL Ubuntu OVAL StackRox : The upstream StackRox project maintains a set of vulnerabilities that might not be discovered due to data formatting from other sources or absence of data. Scanner V4 Indexer sources Scanner V4 indexer uses the following files to index Red Hat containers: repository-to-cpe.json : Maps RPM repositories to their related CPEs, which is required for matching vulnerabilities for RHEL-based images. container-name-repos-map.json : This matches container names to their respective repositories. 3.3. Secured cluster services You install the secured cluster services on each cluster that you want to secure by using the Red Hat Advanced Cluster Security Cloud Service. 
Secured cluster services include the following components: Sensor : Sensor is the service responsible for analyzing and monitoring the cluster. Sensor listens to the OpenShift Container Platform or Kubernetes API and Collector events to report the current state of the cluster. Sensor also triggers deploy-time and runtime violations based on RHACS Cloud Service policies. In addition, Sensor is responsible for all cluster interactions, such as applying network policies, initiating reprocessing of RHACS Cloud Service policies, and interacting with the Admission controller. Admission controller : The Admission controller prevents users from creating workloads that violate security policies in RHACS Cloud Service. Collector : Collector analyzes and monitors container activity on cluster nodes. It collects container runtime and network activity information and sends the collected data to Sensor. StackRox Scanner : In Kubernetes, the secured cluster services include Scanner-slim as an optional component. However, on OpenShift Container Platform, RHACS Cloud Service installs a Scanner-slim version on each secured cluster to scan images in the OpenShift Container Platform integrated registry and optionally other registries. Scanner-DB : This database contains data for the StackRox Scanner. Scanner V4 : Scanner V4 components are installed on the secured cluster if enabled. Scanner V4 Indexer : The Scanner V4 Indexer performs image indexing, previously known as image analysis. Given an image and registry credentials, the Indexer pulls the image from the registry. It finds the base operating system, if it exists, and looks for packages. It stores and outputs an index report, which contains the findings for the given image. Scanner V4 DB : This component is installed if Scanner V4 is enabled. This database stores information for Scanner V4, including index reports. For best performance, configure a persistent volume claim (PVC) for Scanner V4 DB. Note When secured cluster services are installed on the same cluster as Central services and installed in the same namespace, secured cluster services do not deploy Scanner V4 components. Instead, it is assumed that Central services already include a deployment of Scanner V4. Additional resources External components 3.4. Data access and permissions Red Hat does not have access to the clusters on which you install the secured cluster services. Also, RHACS Cloud Service does not need permission to access the secured clusters. For example, you do not need to create new IAM policies, access roles, or API tokens. However, RHACS Cloud Service stores the data that secured cluster services send. All data is encrypted within RHACS Cloud Service. Encrypting the data within the RHACS Cloud Service platform helps to ensure the confidentiality and integrity of the data. When you install secured cluster services on a cluster, it generates data and transmits it to the RHACS Cloud Service. This data is kept secure within the RHACS Cloud Service platform, and only authorized SRE team members and systems can access this data. RHACS Cloud Service uses this data to monitor the security and compliance of your cluster and applications, and to provide valuable insights and analytics that can help you optimize your deployments. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/rhacs_cloud_service/rhacs-cloud-service-architecture |
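The architecture overview above lists the Central and secured cluster components but does not show how to inspect them on a cluster you have secured. A hedged check is sketched below; the stackrox namespace is the usual default for secured cluster services and is an assumption here, as is the presence of Scanner V4.

# List the secured cluster components (Sensor, Collector, Admission controller, Scanner slim)
oc -n stackrox get pods
# Confirm Scanner V4 components and the Scanner V4 DB PVC, if Scanner V4 is enabled
oc -n stackrox get deployments,pvc | grep -i scanner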
20.4. Known Issues | 20.4. Known Issues SELinux MLS policy is not supported with kernel version 4.14 SELinux Multi-Level Security (MLS) Policy denies unknown classes and permissions, and kernel version 4.14 in the kernel-alt packages recognizes the map permission, which is not defined in any policy. Consequently, every command on a system with active MLS policy and SELinux in enforcing mode terminates with the Segmentation fault error. A lot of SELinux denial warnings occurs on systems with active MLS policy and SELinux in permissive mode. The combination of SELinux MLS policy with kernel version 4.14 is not supported. kdump saves the vmcore only if mpt3sas is blacklisted When kdump kernel loads the mpt3sas driver, the kdump kernel crashes and fails to save the vmcore on certain POWER9 systems. To work around this problem, blacklist mpt3sas from the kdump kernel environment by appending the module_blacklist=mpt3sas string to the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file: Then restart the kdump service to pick up the changes to the configuration file by running the systemctl restart command as the root user: As a result, kdump is now able to save the vmcore on the POWER9 systems. (BZ#1496273) Recovering from OOM situation fails due to incorrect function of OOM-killer Recovering from an out-of-memory (OOM) situation does not work correctly on systems with large amounts of memory. Kernel's OOM-killer kills the process using the most memory and frees the memory to be used again. However, sometimes the OOM-killer does not wait long enough before killing a second process. Eventually, the OOM-killer kills all the processes on the system and logs this error: If this happens, the operating system must be rebooted. There is no available workaround. (BZ#1405748) HTM is disabled for guests running on IBM POWER systems The Hardware Transactional Memory (HTM) feature currently prevents migrating guest virtual machines from IBM POWER8 to IBM POWER9 hosts, and has therefore been disabled by default. As a consequence, guest virtual machines running on IBM POWER8 and IBM POWER9 hosts cannot use HTM, unless the feature is manually enabled. To do so, change the default pseries-rhel7.5 machine type of these guests to pseries-rhel7.4 . Note that guests configured this way cannot be migrated from an IBM POWER8 host to an IBM POWER9 host. (BZ#1525599) Migrating guests with huge pages from IBM POWER8 to IBM POWER9 fails IBM POWER8 hosts can only use 16MB and 16GB huge pages, but these huge-page sizes are not supported on IBM POWER9. As a consequence, migrating a guest from an IBM POWER8 host to an IBM POWER9 host fails if the guest is configured with static huge pages. To work around this problem, disable huge pages on the guest and reboot it prior to migration. (BZ#1538959) modprobe succeeds to load kernel modules with incorrect parameters When attempting to load a kernel module with an incorrect parameter using the modprobe command, the incorrect parameter is ignored, and the module loads as expected on Red Hat Enterprise Linux 7 for ARM and for IBM Power LE (POWER9). Note that this is a different behavior compared to Red Hat Enterprise Linux for traditional architectures, such as AMD64 and Intel 64, IBM Z and IBM Power Systems. On these systems, modprobe exits with an error, and the module with an incorrect parameter does not load in the described situation. On all architectures, an error message is recorded in the dmesg output. (BZ#1449439) | [
"KDUMP_COMMANDLINE_APPEND=\"irqpoll maxcpus=1 ... module_blacklist=mpt3sas\"",
"~]# systemctl restart kdump.service",
"Kernel panic - not syncing: Out of memory and no killable processes"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/power9-known-issues |
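The HTM known issue above says to change the guest machine type from pseries-rhel7.5 to pseries-rhel7.4 but gives no command. A hedged sketch with virsh follows; the exact machine type string (for example, pseries-rhel7.4.0) varies by host and should be taken from virsh capabilities rather than copied from here.

# Inspect the pseries machine types the host supports
virsh capabilities | grep pseries
# Edit the guest definition and set the older machine type in the <os> element, for example:
virsh edit <guest_name>
#   <type arch='ppc64le' machine='pseries-rhel7.4.0'>hvm</type>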
Chapter 28. Milestones | Chapter 28. Milestones A milestone is a special service task that can be configured in the case definition designer by adding the milestone node to the process designer palette. When creating a new case definition, a milestone configured as AdHoc Autostart is included on the design palette by default. Milestones that you add yourself are not set to AdHoc Autostart by default. Case management milestones generally occur at the end of a stage, but they can also be the result of achieving other milestones. A milestone always requires a condition to be defined in order to track progress. Milestones react to case file data when data is added to a case. A milestone represents a single point of achievement within the case instance. It can be used to flag certain events, which can be useful for Key Performance Indicator (KPI) tracking or identifying the tasks that are still to be completed. Milestones can be in any of the following states during case execution: Active : The condition has been defined on the milestone but it has not been met. Completed : The milestone condition has been met, the milestone has been achieved, and the case can proceed to the next task. Terminated : The milestone is no longer a part of the case process and is no longer required. While a milestone is available or completed, it can be triggered manually by a signal or automatically if AdHoc Autostart is configured when a case instance starts. Milestones can be triggered as many times as required; however, a milestone is achieved immediately when its condition is met. 28.1. Creating the Hardware spec ready milestone Create a HardwareSpecReady milestone that is reached when the required hardware specification document is completed. Procedure In the process designer, expand Milestone in the Object Library , drag a new milestone onto the canvas, and place it on the right side of the Place order end event. Click the new milestone and click the Properties icon in the upper-right corner. Input Hardware spec ready in the Name field. Expand Implementation/Execution and select AdHoc Autostart . Expand Data Assignments , click in the Assignments field, and add the following: Click the Source column drop-down, select Constant , and input org.kie.api.runtime.process.CaseData(data.get("hwSpec") != null) . Click OK . 28.2. Creating the Manager decision milestone This milestone is reached when the managerDecision variable has been given a response. Procedure In the process designer, expand Milestone in the Object Library and drag a new milestone onto the canvas below the HardwareSpecReady milestone. Click the new milestone and click the Properties icon in the upper-right corner. Input Manager decision in the Name field. Expand Implementation/Execution and select AdHoc Autostart . Expand Data Assignments , click in the Assignments field, and add the following: Click the Source column drop-down, select Constant , and input org.kie.api.runtime.process.CaseData(data.get("managerDecision") != null) . Click OK . 28.3. Creating the Order placed milestone This milestone is reached when the ordered variable, which is part of the Place order sub-process, has been given a response. Procedure In the process designer, expand Milestone in the Object Library and drag a new milestone onto the canvas below the Prepare hardware spec user task. Click the new milestone and click the Properties icon in the upper-right corner. Input Milestone 1: Order placed in the Name field. Expand Implementation/Execution and select AdHoc Autostart . 
Expand Data Assignments , click in the Assignments field, and add the following: Click the Source column drop-down, select Constant , and input org.kie.api.runtime.process.CaseData(data.get("ordered") == true) . This means that a case variable named ordered exists with the value true . Click OK . Click Milestone 1: Order placed and create a new script task. Click the new script task and click the Properties icon in the upper-right corner. Input Notify requestor in the Name field. Expand Implementation/Execution and input System.out.println("Notification::Order placed"); . Click the Notify requestor script task and create a signal end event. Click the signal event and in the upper-right corner click the Properties . icon. Expand Implementation/Execution , click the down arrow in the Signal field, and select New . Input Milestone 2: Order shipped . Click the down arrow in the Signal Scope field, select Process Instance . Click Save . Figure 28.1. Order placed milestone 28.4. Creating the Order shipped milestone The condition for this milestone is that a case file variable named shipped is true . AdHoc Autostart is not enabled for this milestone. Instead, it is triggered by a signal event when the order is ready to be sent. Procedure In the process designer, expand Milestone in the Object Library and drag a new milestone on the canvas below the Notify requestor script task. Click the new milestone and click the Properties icon in the upper-right corner. Input Milestone 2: Order shipped in the Name field. Expand Implementation/Execution and ensure that AdHoc Autostart is not selected. Expand Data Assignments , click in the Assignments field, and add the following: Click the Source column drop-down, select Constant , and input org.kie.api.runtime.process.CaseData(data.get("shipped") == true) . This means that a case variable named shipped exists with the value true . Click OK . Click Milestone 2: Order shipped and create a new script task. Click the new script task and click the Properties icon in the upper-right corner. Input Send to tracking system in the Name field. Expand Implementation/Execution and input System.out.println("Order added to tracking system"); . Click the Send to tracking system script task and create a signal end event. Click the signal event and in the upper-right corner click the Properties . icon. Expand Implementation/Execution , click the down arrow in the Signal field, and select New . Input Milestone 3: Delivered to customer . Click the down arrow in the Signal Scope field, select Process Instance . Click Save . Figure 28.2. Order shipped milestone 28.5. Creating the Delivered to customer milestone The condition for this milestone is that a case file variable named delivered is true . AdHoc Autostart is not enabled for this milestone. Instead, it is triggered by a signal event after the order has successfully shipped to the customer. Procedure In the process designer, expand Milestone in the Object Library and drag a new milestone on the canvas below the Send to tracking system script task. Click the new milestone and click the Properties icon in the upper-right corner. Input Milestone 3: Delivered to customer in the Name field. Expand Implementation/Execution and ensure that AdHoc Autostart is not selected. Expand Data Assignments , click in the Assignments field, and add the following: Click the Source column drop-down, select Constant , and input org.kie.api.runtime.process.CaseData(data.get("delivered") == true) . 
This means that a case variable named delivered exists with the value true . Click OK . Click Milestone 3: Delivered to customer and create a new user task. Click the new user task and click the Properties icon in the upper-right corner. Input Customer satisfaction survey in the Name field. Expand Implementation/Execution , click Add below the Actors menu, click Select New , and input owner . Input CustomerSurvey in the Task Name field. Select the Skippable check box and enter the following text in the Description field: Satisfaction survey for order #{CaseId} Click in the Assignments field and add the following: Click OK . Click the Customer satisfaction survey user task and create an end event. Click Save to confirm your changes. Figure 28.3. Delivered to customer milestone The IT Orders case can be closed after all milestone sequences are completed. However, due to the ad hoc nature of cases, the case could be reopened if, for example, the order was never received by the customer or the item is faulty. Tasks can be re-triggered or added to the case definition as required, even during run time. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/case-management-milestones-con
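Each milestone above is driven by a case file variable (ordered, shipped, delivered). At run time these variables are added to the case file, for example through the KIE Server REST API. The sketch below is an assumption about the endpoint shape — the container ID, case ID, and path segment names may differ — and should be checked against the KIE Server REST API documentation for your Process Automation Manager version.

# Add shipped=true to the case file of a running case, which completes Milestone 2: Order shipped
curl -u user:password -X POST -H "Content-Type: application/json" -d 'true' \
  "http://kie-server:8080/kie-server/services/rest/server/containers/itorders/caseinstances/IT-0000000001/caseFile/shipped"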
Appendix E. Permissions required to provision hosts | Appendix E. Permissions required to provision hosts The following list provides an overview of the permissions a non-admin user requires to provision hosts. Resource name Permissions Additional details Activation Keys view_activation_keys Ansible role view_ansible_roles Required if Ansible is used. Architecture view_architectures Compute profile view_compute_profiles Compute resource view_compute_resources, create_compute_resources, destroy_compute_resources, power_compute_resources Required to provision bare-metal hosts. view_compute_resources_vms, create_compute_resources_vms, destroy_compute_resources_vms, power_compute_resources_vms Required to provision virtual machines. Content Views view_content_views Domain view_domains Environment view_environments Host view_hosts, create_hosts, edit_hosts, destroy_hosts, build_hosts, power_hosts, play_roles_on_host view_discovered_hosts, submit_discovered_hosts, auto_provision_discovered_hosts, provision_discovered_hosts, edit_discovered_hosts, destroy_discovered_hosts Required if the Discovery service is enabled. Hostgroup view_hostgroups, create_hostgroups, edit_hostgroups, play_roles_on_hostgroup Image view_images Lifecycle environment view_lifecycle_environments Location view_locations Medium view_media Operatingsystem view_operatingsystems Organization view_organizations Parameter view_params, create_params, edit_params, destroy_params Product and Repositories view_products Provisioning template view_provisioning_templates Ptable view_ptables Capsule view_smart_proxies, view_smart_proxies_puppetca view_openscap_proxies Required if the OpenSCAP plugin is enabled. Subnet view_subnets Additional resources Creating a Role in Administering Red Hat Satellite Adding Permissions to a Role in Administering Red Hat Satellite Assigning Roles to a User in Administering Red Hat Satellite | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/permissions-required-to-provision-hosts_provisioning |
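The permission list above can be turned into a custom role with hammer, following the pattern used elsewhere in the Satellite guides. The role name below is a placeholder and the permission IDs are illustrative; list the available permissions first and substitute the real IDs for your system.

# Create the role and look up the IDs of the provisioning-related permissions
hammer role create --name "Provisioning operator"
hammer filter available-permissions | grep -E "view_hosts|create_hosts|edit_hosts|build_hosts|power_hosts"
# Attach the permissions to the role (IDs are illustrative)
hammer filter create --role "Provisioning operator" --permission-ids 101,102,103,104,105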
9.8. Time Zone Configuration | 9.8. Time Zone Configuration Set your time zone by selecting the city closest to your computer's physical location. Click on the map to zoom in to a particular geographical region of the world. Specify a time zone even if you plan to use NTP (Network Time Protocol) to maintain the accuracy of the system clock. From here, there are two ways for you to select your time zone: Using your mouse, click on the interactive map to select a specific city (represented by a yellow dot). A red X appears indicating your selection. You can also scroll through the list at the bottom of the screen to select your time zone. Using your mouse, click on a location to highlight your selection. If Red Hat Enterprise Linux is the only operating system on your computer, select System clock uses UTC . The system clock is a piece of hardware on your computer system. Red Hat Enterprise Linux uses the time zone setting to determine the offset between the local time and UTC on the system clock. This behavior is standard for systems that use UNIX, Linux, and similar operating systems. Click Next to proceed. Warning Do not enable the System clock uses UTC option if your machine also runs Microsoft Windows. Microsoft operating systems change the BIOS clock to match local time rather than UTC. This may cause unexpected behavior under Red Hat Enterprise Linux. Note To change your time zone configuration after you have completed the installation, use the Time and Date Properties Tool . Type the system-config-date command in a shell prompt to launch the Time and Date Properties Tool . If you are not root, it prompts you for the root password to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-timezone-x86
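The section above explains the System clock uses UTC option and mentions system-config-date for post-install changes. On a Red Hat Enterprise Linux 6 system the same settings can also be checked from a shell, as sketched below; the zone value shown in the comment is only an example.

# Time zone selected during installation
cat /etc/sysconfig/clock        # e.g. ZONE="Europe/Prague"
# Compare the hardware clock with system time (relevant when dual-booting with Windows)
hwclock --show
date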
Chapter 9. Managing users and roles | Chapter 9. Managing users and roles A User defines a set of details for individuals using the system. Users can be associated with organizations and environments, so that when they create new entities, the default settings are automatically used. Users can also have one or more roles attached, which grants them rights to view and manage organizations and environments. See Section 9.1, "Managing users" for more information on working with users. You can manage permissions of several users at once by organizing them into user groups. User groups themselves can be further grouped to create a hierarchy of permissions. For more information on creating user groups, see Section 9.4, "Creating and managing user groups" . Roles define a set of permissions and access levels. Each role contains one on more permission filters that specify the actions allowed for the role. Actions are grouped according to the Resource type . Once a role has been created, users and user groups can be associated with that role. This way, you can assign the same set of permissions to large groups of users. Satellite provides a set of predefined roles and also enables creating custom roles and permission filters as described in Section 9.5, "Creating and managing roles" . 9.1. Managing users As an administrator, you can create, modify and remove Satellite users. You can also configure access permissions for a user or a group of users by assigning them different roles . 9.1.1. Creating a user Use this procedure to create a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click Create User . Enter the account details for the new user. Click Submit to create the user. The user account details that you can specify include the following: On the User tab, select an authentication source from the Authorized by list: INTERNAL : to manage the user inside Satellite Server. EXTERNAL : to manage the user with external authentication. For more information, see Configuring External Authentication in Installing Satellite Server in a connected network environment . On the Organizations tab, select an organization for the user. Specify the default organization Satellite selects for the user after login from the Default on login list. Important If a user is not assigned to an organization, their access is limited. CLI procedure Create a user: The --auth-source-id 1 setting means that the user is authenticated internally, you can specify an external authentication source as an alternative. Add the --admin option to grant administrator privileges to the user. Specifying organization IDs is not required. You can modify the user details later by using the hammer user update command. Additional resources For more information about creating user accounts by using Hammer, enter hammer user create --help . 9.1.2. Assigning roles to a user Use this procedure to assign roles to a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Users . Click the username of the user to be assigned one or more roles. Note If a user account is not listed, check that you are currently viewing the correct organization. To list all the users in Satellite, click Default Organization and then Any Organization . Click the Locations tab, and select a location if none is assigned. Click the Organizations tab, and check that an organization is assigned. 
Click the Roles tab to display the list of available roles. Select the roles to assign from the Roles list. To grant all the available permissions, select the Administrator checkbox. Click Submit . To view the roles assigned to a user, click the Roles tab; the assigned roles are listed under Selected items . To remove an assigned role, click the role name in Selected items . CLI procedure To assign roles to a user, enter the following command: 9.1.3. Impersonating a different user account Administrators can impersonate other authenticated users for testing and troubleshooting purposes by temporarily logging on to the Satellite web UI as a different user. When impersonating another user, the administrator has permissions to access exactly what the impersonated user can access in the system, including the same menus. Audits are created to record the actions that the administrator performs while impersonating another user. However, all actions that an administrator performs while impersonating another user are recorded as having been performed by the impersonated user. Prerequisites Ensure that you are logged on to the Satellite web UI as a user with administrator privileges for Satellite. Procedure In the Satellite web UI, navigate to Administer > Users . To the right of the user that you want to impersonate, from the list in the Actions column, select Impersonate . When you want to stop the impersonation session, in the upper right of the main menu, click the impersonation icon. 9.1.4. Creating an API-only user You can create users that can interact only with the Satellite API. Prerequisites You have created a user and assigned roles to them. Note that this user must be authorized internally. For more information, see the following sections: Section 9.1.1, "Creating a user" Section 9.1.2, "Assigning roles to a user" Procedure Log in to your Satellite as admin. Navigate to Administer > Users and select a user. On the User tab, set a password. Do not save or communicate this password with others. You can create pseudo-random strings on your console: Create a Personal Access Token for the user. For more information, see Section 9.3.1, "Creating a Personal Access Token" . 9.2. Managing SSH keys Adding SSH keys to a user allows deployment of SSH keys during provisioning. For information on deploying SSH keys during provisioning, see Deploying SSH Keys during Provisioning in Provisioning hosts . For information on SSH keys and SSH key creation, see Using SSH-based Authentication in Red Hat Enterprise Linux 8 Configuring basic system settings . 9.2.1. Managing SSH keys for a user Use this procedure to add or remove SSH keys for a user. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that you are logged in to the Satellite web UI as an Admin user of Red Hat Satellite or a user with the create_ssh_key permission enabled for adding SSH key and destroy_ssh_key permission for removing a key. Procedure In the Satellite web UI, navigate to Administer > Users . From the Username column, click on the username of the required user. Click on the SSH Keys tab. To Add SSH key Prepare the content of the public SSH key in a clipboard. Click Add SSH Key . In the Key field, paste the public SSH key content from the clipboard. In the Name field, enter a name for the SSH key. Click Submit . To Remove SSH key Click Delete on the row of the SSH key to be deleted. Click OK in the confirmation prompt. 
CLI procedure To add an SSH key to a user, you must specify either the path to the public SSH key file, or the content of the public SSH key copied to the clipboard. If you have the public SSH key file, enter the following command: If you have the content of the public SSH key, enter the following command: To delete an SSH key from a user, enter the following command: To view an SSH key attached to a user, enter the following command: To list SSH keys attached to a user, enter the following command: 9.3. Managing Personal Access Tokens Personal Access Tokens allow you to authenticate API requests without using your password. You can set an expiration date for your Personal Access Token and you can revoke it if you decide it should expire before the expiration date. 9.3.1. Creating a Personal Access Token Use this procedure to create a Personal Access Token. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to create a Personal Access Token. On the Personal Access Tokens tab, click Add Personal Access Token . Enter a Name for you Personal Access Token. Optional: Select the Expires date to set an expiration date. If you do not set an expiration date, your Personal Access Token will never expire unless revoked. Click Submit. You now have the Personal Access Token available to you on the Personal Access Tokens tab. Important Ensure to store your Personal Access Token as you will not be able to access it again after you leave the page or create a new Personal Access Token. You can click Copy to clipboard to copy your Personal Access Token. Verification Make an API request to your Satellite Server and authenticate with your Personal Access Token: You should receive a response with status 200 , for example: If you go back to Personal Access Tokens tab, you can see the updated Last Used time to your Personal Access Token. 9.3.2. Revoking a Personal Access Token Use this procedure to revoke a Personal Access Token before its expiration date. Procedure In the Satellite web UI, navigate to Administer > Users . Select a user for which you want to revoke the Personal Access Token. On the Personal Access Tokens tab, locate the Personal Access Token you want to revoke. Click Revoke in the Actions column to the Personal Access Token you want to revoke. Verification Make an API request to your Satellite Server and try to authenticate with the revoked Personal Access Token: You receive the following error message: 9.4. Creating and managing user groups 9.4.1. User groups With Satellite, you can assign permissions to groups of users. You can also create user groups as collections of other user groups. If using an external authentication source, you can map Satellite user groups to external user groups as described in Configuring External User Groups in Installing Satellite Server in a connected network environment . User groups are defined in an organizational context, meaning that you must select an organization before you can access user groups. 9.4.2. Creating a user group Use this procedure to create a user group. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Create User group . On the User Group tab, specify the name of the new user group and select group members: Select the previously created user groups from the User Groups list. Select users from the Users list. On the Roles tab, select the roles you want to assign to the user group. Alternatively, select the Admin checkbox to assign all available permissions. 
Click Submit . CLI procedure To create a user group, enter the following command: 9.4.3. Removing a user group Use the following procedure to remove a user group from Satellite. Procedure In the Satellite web UI, navigate to Administer > User Groups . Click Delete to the right of the user group you want to delete. Click Confirm to delete the user group. 9.5. Creating and managing roles Satellite provides a set of predefined roles with permissions sufficient for standard tasks, as listed in Section 9.6, "Predefined roles available in Satellite" . It is also possible to configure custom roles, and assign one or more permission filters to them. Permission filters define the actions allowed for a certain resource type. Certain Satellite plugins create roles automatically. 9.5.1. Creating a role Use this procedure to create a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Create Role . Provide a Name for the role. Click Submit to save your new role. CLI procedure To create a role, enter the following command: To serve its purpose, a role must contain permissions. After creating a role, proceed to Section 9.5.3, "Adding permissions to a role" . 9.5.2. Cloning a role Use the Satellite web UI to clone a role. Procedure In the Satellite web UI, navigate to Administer > Roles and select Clone from the drop-down menu to the right of the required role. Provide a Name for the role. Click Submit to clone the role. Click the name of the cloned role and navigate to Filters . Edit the permissions as required. Click Submit to save your new role. 9.5.3. Adding permissions to a role Use this procedure to add permissions to a role. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Administer > Roles . Select Add Filter from the drop-down list to the right of the required role. Select the Resource type from the drop-down list. The (Miscellaneous) group gathers permissions that are not associated with any resource group. Click the permissions you want to select from the Permission list. Depending on the Resource type selected, you can select or deselect the Unlimited and Override checkbox. The Unlimited checkbox is selected by default, which means that the permission is applied on all resources of the selected type. When you disable the Unlimited checkbox, the Search field activates. In this field you can specify further filtering with use of the Satellite search syntax. For more information, see Section 9.7, "Granular permission filtering" . When you enable the Override checkbox, you can add additional locations and organizations to allow the role to access the resource type in the additional locations and organizations; you can also remove an already associated location and organization from the resource type to restrict access. Click . Click Submit to save changes. CLI procedure List all available permissions: Add permissions to a role: For more information about roles and permissions parameters, enter the hammer role --help and hammer filter --help commands. 9.5.4. Viewing permissions of a role Use the Satellite web UI to view the permissions of a role. Procedure In the Satellite web UI, navigate to Administer > Roles . Click Filters to the right of the required role to get to the Filters page. The Filters page contains a table of permissions assigned to a role grouped by the resource type. It is also possible to generate a complete table of permissions and actions that you can use on your Satellite system. 
For more information, see Section 9.5.5, "Creating a complete permission table" . 9.5.5. Creating a complete permission table Use the Satellite CLI to create a permission table. Procedure Start the Satellite console with the following command: Insert the following code into the console: The above syntax creates a table of permissions and saves it to the /tmp/table.html file. Press Ctrl + D to exit the Satellite console. Insert the following text at the first line of /tmp/table.html : Append the following text at the end of /tmp/table.html : Open /tmp/table.html in a web browser to view the table. 9.5.6. Removing a role Use the following procedure to remove a role from Satellite. Procedure In the Satellite web UI, navigate to Administer > Roles . Select Delete from the drop-down list to the right of the role to be deleted. Click Confirm to delete the role. 9.6. Predefined roles available in Satellite The following table provides an overview of permissions that predefined roles in Satellite grant to a user. For a complete set of predefined roles and the permissions they grant, log in to Satellite web UI as the privileged user and navigate to Administer > Roles . For more information, see Section 9.5.4, "Viewing permissions of a role" . Predefined role Permissions the role provides Additional information Auditor View the Audit log. Default role View tasks and jobs invocations. Satellite automatically assigns this role to every user in the system. Manager View and edit global settings. Organization admin All permissions except permissions for managing organizations. An administrator role defined per organization. The role has no visibility into resources in other organizations. By cloning this role and assigning an organization, you can delegate administration of that organization to a user. Site manager View permissions for various items. Permissions to manage hosts in the infrastructure. A restrained version of the Manager role. System admin Edit global settings in Administer > Settings . View, create, edit, and destroy users, user groups, and roles. View, create, edit, destroy, and assign organizations and locations but not view resources within them. Users with this role can create users and assign all roles to them. Give this role only to trusted users. Viewer View the configuration of every element of the Satellite structure, logs, reports, and statistics. 9.7. Granular permission filtering As mentioned in Section 9.5.3, "Adding permissions to a role" , Red Hat Satellite provides the ability to limit the configured user permissions to selected instances of a resource type. These granular filters are queries to the Satellite database and are supported by the majority of resource types. 9.7.1. Creating a granular permission filter Use this procedure to create a granular filter. To use the CLI instead of the Satellite web UI, see the CLI procedure . Satellite does not apply search conditions to create actions. For example, limiting the create_locations action with name = "Default Location" expression in the search field does not prevent the user from assigning a custom name to the newly created location. Procedure Specify a query in the Search field on the Edit Filter page. Deselect the Unlimited checkbox for the field to be active. Queries have the following form: field_name marks the field to be queried. The range of available field names depends on the resource type. For example, the Partition Table resource type offers family , layout , and name as query parameters. 
operator specifies the type of comparison between field_name and value . See Section 9.7.3, "Supported operators for granular search" for an overview of applicable operators. value is the value used for filtering. This can be for example a name of an organization. Two types of wildcard characters are supported: underscore (_) provides single character replacement, while percent sign (%) replaces zero or more characters. For most resource types, the Search field provides a drop-down list suggesting the available parameters. This list appears after placing the cursor in the search field. For many resource types, you can combine queries using logical operators such as and , not and has operators. CLI procedure To create a granular filter, enter the hammer filter create command with the --search option to limit permission filters, for example: This command adds to the qa-user role a permission to view, create, edit, and destroy content views that only applies to content views with name starting with ccv . 9.7.2. Examples of using granular permission filters As an administrator, you can allow selected users to make changes in a certain part of the environment path. The following filter allows you to work with content while it is in the development stage of the application lifecycle, but the content becomes inaccessible once is pushed to production. 9.7.2.1. Applying permissions for the host resource type The following query applies any permissions specified for the Host resource type only to hosts in the group named host-editors. The following query returns records where the name matches XXXX , Yyyy , or zzzz example strings: You can also limit permissions to a selected environment. To do so, specify the environment name in the Search field, for example: You can limit user permissions to a certain organization or location with the use of the granular permission filter in the Search field. However, some resource types provide a GUI alternative, an Override checkbox that provides the Locations and Organizations tabs. On these tabs, you can select from the list of available organizations and locations. For more information, see Section 9.7.2.2, "Creating an organization-specific manager role" . 9.7.2.2. Creating an organization-specific manager role Use the Satellite web UI to create an administrative role restricted to a single organization named org-1 . Procedure In the Satellite web UI, navigate to Administer > Roles . Clone the existing Organization admin role. Select Clone from the drop-down list to the Filters button. You are then prompted to insert a name for the cloned role, for example org-1 admin . Click the desired locations and organizations to associate them with the role. Click Submit to create the role. Click org-1 admin , and click Filters to view all associated filters. The default filters work for most use cases. However, you can optionally click Edit to change the properties for each filter. For some filters, you can enable the Override option if you want the role to be able to access resources in additional locations and organizations. For example, by selecting the Domain resource type, the Override option, and then additional locations and organizations using the Locations and Organizations tabs, you allow this role to access domains in the additional locations and organizations that is not associated with this role. You can also click New filter to associate new filters with this role. 9.7.3. Supported operators for granular search Table 9.1. 
Logical operators Operator Description and Combines search criteria. not Negates an expression. has Object must have a specified property. Table 9.2. Symbolic operators Operator Description = Is equal to . An equality comparison that is case-sensitive for text fields. != Is not equal to . An inversion of the = operator. ~ Like . A case-insensitive occurrence search for text fields. !~ Not like . An inversion of the ~ operator. ^ In . An equality comparison against a list of values that is case-sensitive for text fields. This generates a different SQL query from the Is equal to comparison, and is more efficient for comparing multiple values. !^ Not in . An inversion of the ^ operator. >, >= Greater than , greater than or equal to . Supported for numerical fields only. <, <= Less than , less than or equal to . Supported for numerical fields only. | [
"hammer user create --auth-source-id My_Authentication_Source --login My_User_Name --mail My_User_Mail --organization-ids My_Organization_ID_1 , My_Organization_ID_2 --password My_User_Password",
"hammer user add-role --id user_id --role role_name",
"openssl rand -hex 32",
"hammer user ssh-keys add --user-id user_id --name key_name --key-file ~/.ssh/id_rsa.pub",
"hammer user ssh-keys add --user-id user_id --name key_name --key ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNtYAAABBBHHS2KmNyIYa27Qaa7EHp+2l99ucGStx4P77e03ZvE3yVRJEFikpoP3MJtYYfIe8k 1/46MTIZo9CPTX4CYUHeN8= host@user",
"hammer user ssh-keys delete --id key_id --user-id user_id",
"hammer user ssh-keys info --id key_id --user-id user_id",
"hammer user ssh-keys list --user-id user_id",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{\"satellite_version\":\"6.15.0\",\"result\":\"ok\",\"status\":200,\"version\":\"3.5.1.10\",\"api_version\":2}",
"curl https:// satellite.example.com /api/status --user My_Username : My_Personal_Access_Token",
"{ \"error\": {\"message\":\"Unable to authenticate user My_Username \"} }",
"hammer user-group create --name My_User_Group_Name --role-ids My_Role_ID_1 , My_Role_ID_2 --user-ids My_User_ID_1 , My_User_ID_2",
"hammer role create --name My_Role_Name",
"hammer filter available-permissions",
"hammer filter create --permission-ids My_Permission_ID_1 , My_Permission_ID_2 --role My_Role_Name",
"foreman-rake console",
"f = File.open('/tmp/table.html', 'w') result = Foreman::AccessControl.permissions {|a,b| a.security_block <=> b.security_block}.collect do |p| actions = p.actions.collect { |a| \"<li>#{a}</li>\" } \"<tr><td>#{p.name}</td><td><ul>#{actions.join('')}</ul></td><td>#{p.resource_type}</td></tr>\" end.join(\"\\n\") f.write(result)",
"<table border=\"1\"><tr><td>Permission name</td><td>Actions</td><td>Resource type</td></tr>",
"</table>",
"field_name operator value",
"hammer filter create --permission-ids 91 --search \"name ~ ccv*\" --role qa-user",
"hostgroup = host-editors",
"name ^ (XXXX, Yyyy, zzzz)",
"Dev"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/administering_red_hat_satellite/managing_users_and_roles_admin |
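The role, filter, and assignment steps described above can be combined into one short script. The following sketch is illustrative only: the role name, user ID, permission IDs (91, taken from the example earlier in this section), and the ccv* search string are assumptions to replace with values looked up on your own Satellite, and it uses only the hammer subcommands shown in this section.

#!/usr/bin/env bash
# Sketch: create a role limited to content views whose names start with "ccv",
# then grant it to an existing user. All IDs and names below are placeholders.
set -euo pipefail

ROLE="qa-user"       # assumed role name
USER_ID="42"         # assumed ID of an existing user (see: hammer user list)
PERMISSION_IDS="91"  # assumed IDs returned by 'hammer filter available-permissions'

# Create the empty role.
hammer role create --name "${ROLE}"

# List permissions if you still need to look up the IDs for your resource type.
hammer filter available-permissions

# Add a granular filter: the permissions apply only to matching content views.
hammer filter create \
  --role "${ROLE}" \
  --permission-ids "${PERMISSION_IDS}" \
  --search "name ~ ccv*"

# Assign the scoped role to the user.
hammer user add-role --id "${USER_ID}" --role "${ROLE}"

Remember that, as noted above, search conditions are not applied to create actions, so the filter restricts what the user can view, edit, and destroy rather than what names they can assign when creating objects.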
Chapter 6. Managing alerts | Chapter 6. Managing alerts In OpenShift Container Platform 4.9, the Alerting UI enables you to manage alerts, silences, and alerting rules. Alerting rules . Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Alerts . An alert is fired when the conditions defined in an alerting rule are true. Alerts provide a notification that a set of circumstances are apparent within an OpenShift Container Platform cluster. Silences . A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. Note The alerts, silences, and alerting rules that are available in the Alerting UI relate to the projects that you have access to. For example, if you are logged in with cluster-admin privileges, you can access all alerts, silences, and alerting rules. If you are a non-administator user, you can create and silence alerts if you are assigned the following user roles: The cluster-monitoring-view role, which allows you to access Alertmanager The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console The monitoring-rules-edit role, which permits you to create and silence alerts in the Developer perspective in the web console 6.1. Accessing the Alerting UI in the Administrator and Developer perspectives The Alerting UI is accessible through the Administrator perspective and the Developer perspective in the OpenShift Container Platform web console. In the Administrator perspective, select Observe Alerting . The three main pages in the Alerting UI in this perspective are the Alerts , Silences , and Alerting Rules pages. In the Developer perspective, select Observe <project_name> Alerts . In this perspective, alerts, silences, and alerting rules are all managed from the Alerts page. The results shown in the Alerts page are specific to the selected project. Note In the Developer perspective, you can select from core OpenShift Container Platform and user-defined projects that you have access to in the Project: list. However, alerts, silences, and alerting rules relating to core OpenShift Container Platform projects are not displayed if you do not have cluster-admin privileges. 6.2. Searching and filtering alerts, silences, and alerting rules You can filter the alerts, silences, and alerting rules that are displayed in the Alerting UI. This section provides a description of each of the available filtering options. Understanding alert filters In the Administrator perspective, the Alerts page in the Alerting UI provides details about alerts relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of severity, state, and source for each alert. The time at which an alert went into its current state is also shown. You can filter by alert state, severity, and source. By default, only Platform alerts that are Firing are displayed. The following describes each alert filtering option: Alert State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert will continue to fire as long as the condition remains true. Pending . 
The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications will not be sent for alerts that match all the listed values or regular expressions. Severity filters: Critical . The condition that triggered the alert could have a critical impact. The alert requires immediate attention when fired and is typically paged to an individual or to a critical response team. Warning . The alert provides a warning notification about something that might require attention to prevent a problem from occurring. Warnings are typically routed to a ticketing system for non-immediate review. Info . The alert is provided for informational purposes only. None . The alert has no defined severity. You can also create custom severity definitions for alerts relating to user-defined projects. Source filters: Platform . Platform-level alerts relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User alerts relate to user-defined projects. These alerts are user-created and are customizable. User-defined workload monitoring can be enabled post-installation to provide observability into your own workloads. Understanding silence filters In the Administrator perspective, the Silences page in the Alerting UI provides details about silences applied to alerts in default OpenShift Container Platform and user-defined projects. The page includes a summary of the state of each silence and the time at which a silence ends. You can filter by silence state. By default, only Active and Pending silences are displayed. The following describes each silence state filter option: Silence State filters: Active . The silence is active and the alert will be muted until the silence is expired. Pending . The silence has been scheduled and it is not yet active. Expired . The silence has expired and notifications will be sent if the conditions for an alert are true. Understanding alerting rule filters In the Administrator perspective, the Alerting Rules page in the Alerting UI provides details about alerting rules relating to default OpenShift Container Platform and user-defined projects. The page includes a summary of the state, severity, and source for each alerting rule. You can filter alerting rules by alert state, severity, and source. By default, only Platform alerting rules are displayed. The following describes each alerting rule filtering option: Alert State filters: Firing . The alert is firing because the alert condition is true and the optional for duration has passed. The alert will continue to fire as long as the condition remains true. Pending . The alert is active but is waiting for the duration that is specified in the alerting rule before it fires. Silenced . The alert is now silenced for a defined time period. Silences temporarily mute alerts based on a set of label selectors that you define. Notifications will not be sent for alerts that match all the listed values or regular expressions. Not Firing . The alert is not firing. Severity filters: Critical . The conditions defined in the alerting rule could have a critical impact. When true, these conditions require immediate attention. Alerts relating to the rule are typically paged to an individual or to a critical response team. Warning . 
The conditions defined in the alerting rule might require attention to prevent a problem from occurring. Alerts relating to the rule are typically routed to a ticketing system for non-immediate review. Info . The alerting rule provides informational alerts only. None . The alerting rule has no defined severity. You can also create custom severity definitions for alerting rules relating to user-defined projects. Source filters: Platform . Platform-level alerting rules relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User-defined workload alerting rules relate to user-defined projects. These alerting rules are user-created and are customizable. User-defined workload monitoring can be enabled post-installation to provide observability into your own workloads. Searching and filtering alerts, silences, and alerting rules in the Developer perspective In the Developer perspective, the Alerts page in the Alerting UI provides a combined view of alerts and silences relating to the selected project. A link to the governing alerting rule is provided for each displayed alert. In this view, you can filter by alert state and severity. By default, all alerts in the selected project are displayed if you have permission to access the project. These filters are the same as those described for the Administrator perspective. 6.3. Getting information about alerts, silences, and alerting rules The Alerting UI provides detailed information about alerts and their governing alerting rules and silences. Prerequisites You have access to the cluster as a developer or as a user with view permissions for the project that you are viewing metrics for. Procedure To obtain information about alerts in the Administrator perspective : Open the OpenShift Container Platform web console and navigate to the Observe Alerting Alerts page. Optional: Search for alerts by name using the Name field in the search list. Optional: Filter alerts by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerts by clicking one or more of the Name , Severity , State , and Source column headers. Select the name of an alert to navigate to its Alert Details page. The page includes a graph that illustrates alert time series data. It also provides information about the alert, including: A description of the alert Messages associated with the alerts Labels attached to the alert A link to its governing alerting rule Silences for the alert, if any exist To obtain information about silences in the Administrator perspective : Navigate to the Observe Alerting Silences page. Optional: Filter the silences by name using the Search by name field. Optional: Filter silences by state by selecting filters in the Filter list. By default, Active and Pending filters are applied. Optional: Sort the silences by clicking one or more of the Name , Firing Alerts , and State column headers. Select the name of a silence to navigate to its Silence Details page. The page includes the following details: Alert specification Start time End time Silence state Number and list of firing alerts To obtain information about alerting rules in the Administrator perspective : Navigate to the Observe Alerting Alerting Rules page. Optional: Filter alerting rules by state, severity, and source by selecting filters in the Filter list. Optional: Sort the alerting rules by clicking one or more of the Name , Severity , Alert State , and Source column headers. 
Select the name of an alerting rule to navigate to its Alerting Rule Details page. The page provides the following details about the alerting rule: Alerting rule name, severity, and description The expression that defines the condition for firing the alert The time for which the condition should be true for an alert to fire A graph for each alert governed by the alerting rule, showing the value with which the alert is firing A table of all alerts governed by the alerting rule To obtain information about alerts, silences, and alerting rules in the Developer perspective : Navigate to the Observe <project_name> Alerts page. View details for an alert, silence, or an alerting rule: Alert Details can be viewed by selecting > to the left of an alert name and then selecting the alert in the list. Silence Details can be viewed by selecting a silence in the Silenced By section of the Alert Details page. The Silence Details page includes the following information: Alert specification Start time End time Silence state Number and list of firing alerts Alerting Rule Details can be viewed by selecting View Alerting Rule in the menu on the right of an alert in the Alerts page. Note Only alerts, silences, and alerting rules relating to the selected project are displayed in the Developer perspective. 6.4. Managing alerting rules OpenShift Container Platform monitoring ships with a set of default alerting rules. As a cluster administrator, you can view the default alerting rules. In OpenShift Container Platform 4.9, you can create, view, edit, and remove alerting rules in user-defined projects. Alerting rule considerations The default alerting rules are used specifically for the OpenShift Container Platform cluster. Some alerting rules intentionally have identical names. They send alerts about the same event with different thresholds, different severity, or both. Inhibition rules prevent notifications for lower severity alerts that are firing when a higher severity alert is also firing. 6.4.1. Optimizing alerting for user-defined projects You can optimize alerting for your own projects by considering the following recommendations when creating alerting rules: Minimize the number of alerting rules that you create for your project . Create alerting rules that notify you of conditions that impact you. It is more difficult to notice relevant alerts if you generate many alerts for conditions that do not impact you. Create alerting rules for symptoms instead of causes . Create alerting rules that notify you of conditions regardless of the underlying cause. The cause can then be investigated. You will need many more alerting rules if each relates only to a specific cause. Some causes are then likely to be missed. Plan before you write your alerting rules . Determine what symptoms are important to you and what actions you want to take if they occur. Then build an alerting rule for each symptom. Provide clear alert messaging . State the symptom and recommended actions in the alert message. Include severity levels in your alerting rules . The severity of an alert depends on how you need to react if the reported symptom occurs. For example, a critical alert should be triggered if a symptom requires immediate attention by an individual or a critical response team. Optimize alert routing . Deploy an alerting rule directly on the Prometheus instance in the openshift-user-workload-monitoring project if the rule does not query default OpenShift Container Platform metrics. 
This reduces latency for alerting rules and minimizes the load on monitoring components. Warning Default OpenShift Container Platform metrics for user-defined projects provide information about CPU and memory usage, bandwidth statistics, and packet rate information. Those metrics cannot be included in an alerting rule if you route the rule directly to the Prometheus instance in the openshift-user-workload-monitoring project. Alerting rule optimization should be used only if you have read the documentation and have a comprehensive understanding of the monitoring architecture. Additional resources See the Prometheus alerting documentation for further guidelines on optimizing alerts See Monitoring overview for details about OpenShift Container Platform 4.9 monitoring architecture 6.4.2. Creating alerting rules for user-defined projects You can create alerting rules for user-defined projects. Those alerting rules will fire alerts based on the values of chosen metrics. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file. For example: Note When you create an alerting rule, a project label is enforced on it if a rule with the same name exists in another project. apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert expr: version{job="prometheus-example-app"} == 0 This configuration creates an alerting rule named example-alert . The alerting rule fires an alert when the version metric exposed by the sample service becomes 0 . Important A user-defined alerting rule can include metrics for its own project and cluster metrics. You cannot include metrics for another user-defined project. For example, an alerting rule for the user-defined project ns1 can have metrics from ns1 and cluster metrics, such as the CPU and memory metrics. However, the rule cannot include metrics from ns2 . Additionally, you cannot create alerting rules for the openshift-* core OpenShift Container Platform projects. OpenShift Container Platform monitoring by default provides a set of alerting rules for these projects. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml It takes some time to create the alerting rule. 6.4.3. Reducing latency for alerting rules that do not query platform metrics If an alerting rule for a user-defined project does not query default cluster metrics, you can deploy the rule directly on the Prometheus instance in the openshift-user-workload-monitoring project. This reduces latency for alerting rules by bypassing Thanos Ruler when it is not required. This also helps to minimize the overall load on monitoring components. Warning Default OpenShift Container Platform metrics for user-defined projects provide information about CPU and memory usage, bandwidth statistics, and packet rate information. Those metrics cannot be included in an alerting rule if you deploy the rule directly to the Prometheus instance in the openshift-user-workload-monitoring project. 
The procedure outlined in this section should only be used if you have read the documentation and have a comprehensive understanding of the monitoring architecture. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for alerting rules. In this example, it is called example-app-alerting-rule.yaml . Add an alerting rule configuration to the YAML file that includes a label with the key openshift.io/prometheus-rule-evaluation-scope and value leaf-prometheus . For example: apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 labels: openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus spec: groups: - name: example rules: - alert: VersionAlert expr: version{job="prometheus-example-app"} == 0 If that label is present, the alerting rule is deployed on the Prometheus instance in the openshift-user-workload-monitoring project. If the label is not present, the alerting rule is deployed to Thanos Ruler. Apply the configuration file to the cluster: USD oc apply -f example-app-alerting-rule.yaml It takes some time to create the alerting rule. See Monitoring overview for details about OpenShift Container Platform 4.9 monitoring architecture. 6.4.4. Accessing alerting rules for user-defined projects To list alerting rules for a user-defined project, you must have been assigned the monitoring-rules-view role for the project. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-view role for your project. You have installed the OpenShift CLI ( oc ). Procedure You can list alerting rules in <project> : USD oc -n <project> get prometheusrule To list the configuration of an alerting rule, run the following: USD oc -n <project> get prometheusrule <rule> -o yaml 6.4.5. Listing alerting rules for all projects in a single view As a cluster administrator, you can list alerting rules for core OpenShift Container Platform and user-defined projects together in a single view. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure In the Administrator perspective, navigate to Observe Alerting Alerting Rules . Select the Platform and User sources in the Filter drop-down menu. Note The Platform source is selected by default. 6.4.6. Removing alerting rules for user-defined projects You can remove alerting rules for user-defined projects. Prerequisites You have enabled monitoring for user-defined projects. You are logged in as a user that has the monitoring-rules-edit role for the project where you want to create an alerting rule. You have installed the OpenShift CLI ( oc ). Procedure To remove rule <foo> in <namespace> , run the following: USD oc -n <namespace> delete prometheusrule <foo> Additional resources See the Alertmanager documentation 6.5. Managing silences You can create a silence to stop receiving notifications about an alert when it is firing. It might be useful to silence an alert after being first notified, while you resolve the underlying issue. When creating a silence, you must specify whether it becomes active immediately or at a later time. You must also set a duration period after which the silence expires. You can view, edit, and expire existing silences. 6.5.1. 
Silencing alerts You can either silence a specific alert or silence alerts that match a specification that you define. Prerequisites You are a cluster administrator and have access to the cluster as a user with the cluster-admin cluster role. You are a non-administator user and have access to the cluster as a user with the following user roles: The cluster-monitoring-view cluster role, which allows you to access Alertmanager. The monitoring-alertmanager-edit role, which permits you to create and silence alerts in the Administrator perspective in the web console. The monitoring-rules-edit role, which permits you to create and silence alerts in the Developer perspective in the web console. Procedure To silence a specific alert: In the Administrator perspective: Navigate to the Observe Alerting Alerts page of the OpenShift Container Platform web console. For the alert that you want to silence, select the in the right-hand column and select Silence Alert . The Silence Alert form will appear with a pre-populated specification for the chosen alert. Optional: Modify the silence. You must add a comment before creating the silence. To create the silence, select Silence . In the Developer perspective: Navigate to the Observe <project_name> Alerts page in the OpenShift Container Platform web console. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select Silence Alert . The Silence Alert form will appear with a prepopulated specification for the chosen alert. Optional: Modify the silence. You must add a comment before creating the silence. To create the silence, select Silence . To silence a set of alerts by creating an alert specification in the Administrator perspective: Navigate to the Observe Alerting Silences page in the OpenShift Container Platform web console. Select Create Silence . Set the schedule, duration, and label details for an alert in the Create Silence form. You must also add a comment for the silence. To create silences for alerts that match the label sectors that you entered in the step, select Silence . 6.5.2. Editing silences You can edit a silence, which will expire the existing silence and create a new one with the changed configuration. Procedure To edit a silence in the Administrator perspective: Navigate to the Observe Alerting Silences page. For the silence you want to modify, select the in the last column and choose Edit silence . Alternatively, you can select Actions Edit Silence in the Silence Details page for a silence. In the Edit Silence page, enter your changes and select Silence . This will expire the existing silence and create one with the chosen configuration. To edit a silence in the Developer perspective: Navigate to the Observe <project_name> Alerts page. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select the name of a silence in the Silenced By section in that page to navigate to the Silence Details page for the silence. Select the name of a silence to navigate to its Silence Details page. Select Actions Edit Silence in the Silence Details page for a silence. In the Edit Silence page, enter your changes and select Silence . This will expire the existing silence and create one with the chosen configuration. 6.5.3. Expiring silences You can expire a silence. Expiring a silence deactivates it forever. 
Procedure To expire a silence in the Administrator perspective: Navigate to the Observe Alerting Silences page. For the silence you want to modify, select the in the last column and choose Expire silence . Alternatively, you can select Actions Expire Silence in the Silence Details page for a silence. To expire a silence in the Developer perspective: Navigate to the Observe <project_name> Alerts page. Expand the details for an alert by selecting > to the left of the alert name. Select the name of the alert in the expanded view to open the Alert Details page for the alert. Select the name of a silence in the Silenced By section in that page to navigate to the Silence Details page for the silence. Select the name of a silence to navigate to its Silence Details page. Select Actions Expire Silence in the Silence Details page for a silence. 6.6. Sending notifications to external systems In OpenShift Container Platform 4.9, firing alerts can be viewed in the Alerting UI. Alerts are not configured by default to be sent to any notification systems. You can configure OpenShift Container Platform to send alerts to the following receiver types: PagerDuty Webhook Email Slack Routing alerts to receivers enables you to send timely notifications to the appropriate teams when failures occur. For example, critical alerts require immediate attention and are typically paged to an individual or a critical response team. Alerts that provide non-critical warning notifications might instead be routed to a ticketing system for non-immediate review. Checking that alerting is operational by using the watchdog alert OpenShift Container Platform monitoring includes a watchdog alert that fires continuously. Alertmanager repeatedly sends watchdog alert notifications to configured notification providers. The provider is usually configured to notify an administrator when it stops receiving the watchdog alert. This mechanism helps you quickly identify any communication issues between Alertmanager and the notification provider. 6.6.1. Configuring alert receivers You can configure alert receivers to ensure that you learn about important issues with your cluster. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Administration Cluster Settings Configuration Alertmanager . Note Alternatively, you can navigate to the same page through the notification drawer. Select the bell icon at the top right of the OpenShift Container Platform web console and choose Configure in the AlertmanagerReceiverNotConfigured alert. Select Create Receiver in the Receivers section of the page. In the Create Receiver form, add a Receiver Name and choose a Receiver Type from the list. Edit the receiver configuration: For PagerDuty receivers: Choose an integration type and add a PagerDuty integration key. Add the URL of your PagerDuty installation. Select Show advanced configuration if you want to edit the client and incident details or the severity specification. For webhook receivers: Add the endpoint to send HTTP POST requests to. Select Show advanced configuration if you want to edit the default option to send resolved alerts to the receiver. For email receivers: Add the email address to send notifications to. Add SMTP configuration details, including the address to send notifications from, the smarthost and port number used for sending emails, the hostname of the SMTP server, and authentication details. Choose whether TLS is required. 
Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the body of email notifications configuration. For Slack receivers: Add the URL of the Slack webhook. Add the Slack channel or user name to send notifications to. Select Show advanced configuration if you want to edit the default option not to send resolved alerts to the receiver or edit the icon and username configuration. You can also choose whether to find and link channel names and usernames. By default, firing alerts with labels that match all of the selectors will be sent to the receiver. If you want label values for firing alerts to be matched exactly before they are sent to the receiver: Add routing label names and values in the Routing Labels section of the form. Select Regular Expression if want to use a regular expression. Select Add Label to add further routing labels. Select Create to create the receiver. 6.7. Applying a custom Alertmanager configuration You can overwrite the default Alertmanager configuration by editing the alertmanager-main secret inside the openshift-monitoring project. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure To change the Alertmanager configuration from the CLI: Print the currently active Alertmanager configuration into file alertmanager.yaml : USD oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml Edit the configuration in alertmanager.yaml : global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 5m receiver: watchdog - matchers: - "service=<your_service>" 1 routes: - matchers: - <your_matching_rules> 2 receiver: <receiver> 3 receivers: - name: default - name: watchdog - name: <receiver> # <receiver_configuration> 1 service specifies the service that fires the alerts. 2 <your_matching_rules> specifies the target alerts. 3 receiver specifies the receiver to use for the alert. Note Use the matchers key name to indicate the matchers that an alert has to fulfill to match the node. Do not use the match or match_re key names, which are both deprecated and planned for removal in a future release. In addition, if you define inhibition rules, use the target_matchers key name to indicate the target matchers and the source_matchers key name to indicate the source matchers. Do not use the target_match , target_match_re , source_match , or source_match_re key names, which are deprecated and planned for removal in a future release. The following Alertmanager configuration example configures PagerDuty as an alert receiver: global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - "alertname=Watchdog" repeat_interval: 5m receiver: watchdog - matchers: - "service=example-app" routes: - matchers: - "severity=critical" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: " your-key " With this configuration, alerts of critical severity that are fired by the example-app service are sent using the team-frontend-page receiver. Typically these types of alerts would be paged to an individual or a critical response team. 
Apply the new configuration in the file: $ oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=- To change the Alertmanager configuration from the OpenShift Container Platform web console: Navigate to the Administration Cluster Settings Configuration Alertmanager YAML page of the web console. Modify the YAML configuration file. Select Save . Additional resources See the PagerDuty official site for more information on PagerDuty See the PagerDuty Prometheus Integration Guide to learn how to retrieve the service_key See Alertmanager configuration for configuring alerting through different alert receivers 6.8. Next steps Reviewing monitoring dashboards | [
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0",
"oc apply -f example-app-alerting-rule.yaml",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 labels: openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0",
"oc apply -f example-app-alerting-rule.yaml",
"oc -n <project> get prometheusrule",
"oc -n <project> get prometheusrule <rule> -o yaml",
"oc -n <namespace> delete prometheusrule <foo>",
"oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 5m receiver: watchdog - matchers: - \"service=<your_service>\" 1 routes: - matchers: - <your_matching_rules> 2 receiver: <receiver> 3 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration>",
"global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 5m receiver: watchdog - matchers: - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \" your-key \"",
"oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/monitoring/managing-alerts |
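To tie the alerting rule guidance in this chapter together, the sketch below creates a user-defined alerting rule that adds a severity label and a for duration, then verifies the resulting object. It is a minimal example rather than the product's own tooling: the ns1 namespace, the 10m duration, and the warning severity are assumptions, and it presumes that monitoring for user-defined projects is already enabled and that you hold the monitoring-rules-edit role in the project.

#!/usr/bin/env bash
# Sketch: create and verify a user-defined alerting rule with a severity label.
set -euo pipefail
NS="ns1"   # assumed user-defined project with monitoring enabled

oc apply -n "${NS}" -f - <<'EOF'
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-availability
spec:
  groups:
  - name: availability
    rules:
    - alert: ExampleAppDown
      # Symptom-based rule: fire only after the condition has held for 10 minutes.
      expr: version{job="prometheus-example-app"} == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: prometheus-example-app has reported version 0 for 10 minutes.
EOF

# Confirm the rule was created and inspect its definition.
oc -n "${NS}" get prometheusrule example-availability -o yaml

Because the rule carries a severity label, any Alertmanager route that matches on severity=warning (such as a ticketing-system receiver) will pick up the resulting alert without further changes.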
1.5. MAC Address Pools | 1.5. MAC Address Pools MAC address pools define the range(s) of MAC addresses allocated for each cluster. A MAC address pool is specified for each cluster. By using MAC address pools, Red Hat Virtualization can automatically generate and assign MAC addresses to new virtual network devices, which helps to prevent MAC address duplication. MAC address pools are more memory efficient when all MAC addresses related to a cluster are within the range for the assigned MAC address pool. The same MAC address pool can be shared by multiple clusters, but each cluster has a single MAC address pool assigned. A default MAC address pool is created by Red Hat Virtualization and is used if another MAC address pool is not assigned. For more information about assigning MAC address pools to clusters see Section 8.2.1, "Creating a New Cluster" . Note If more than one Red Hat Virtualization cluster shares a network, do not rely solely on the default MAC address pool because the virtual machines of each cluster will try to use the same range of MAC addresses, leading to conflicts. To avoid MAC address conflicts, check the MAC address pool ranges to ensure that each cluster is assigned a unique MAC address range. The MAC address pool assigns the available MAC address following the last address that was returned to the pool. If there are no further addresses left in the range, the search starts again from the beginning of the range. If there are multiple MAC address ranges with available MAC addresses defined in a single MAC address pool, the ranges take turns in serving incoming requests in the same way available MAC addresses are selected. 1.5.1. Creating MAC Address Pools You can create new MAC address pools. Creating a MAC Address Pool Click Administration Configure . Click the MAC Address Pools tab. Click Add . Enter the Name and Description of the new MAC address pool. Select the Allow Duplicates check box to allow a MAC address to be used multiple times in a pool. The MAC address pool will not automatically use a duplicate MAC address, but enabling the duplicates option means a user can manually use a duplicate MAC address. Note If one MAC address pool has duplicates disabled, and another has duplicates enabled, each MAC address can be used once in the pool with duplicates disabled but can be used multiple times in the pool with duplicates enabled. Enter the required MAC Address Ranges . To enter multiple ranges click the plus button to the From and To fields. Click OK . 1.5.2. Editing MAC Address Pools You can edit MAC address pools to change the details, including the range of MAC addresses available in the pool and whether duplicates are allowed. Editing MAC Address Pool Properties Click Administration Configure . Click the MAC Address Pools tab. Select the MAC address pool to be edited. Click Edit . Change the Name , Description , Allow Duplicates , and MAC Address Ranges fields as required. Note When a MAC address range is updated, the MAC addresses of existing NICs are not reassigned. MAC addresses that were already assigned, but are outside of the new MAC address range, are added as user-specified MAC addresses and are still tracked by that MAC address pool. Click OK . 1.5.3. Editing MAC Address Pool Permissions After a MAC address pool has been created, you can edit its user permissions. The user permissions control which data centers can use the MAC address pool. See Section 1.1, "Roles" for more information on adding new user permissions. 
Editing MAC Address Pool Permissions Click Administration Configure . Click the MAC Address Pools tab. Select the required MAC address pool. Edit the user permissions for the MAC address pool: To add user permissions to a MAC address pool: Click Add in the user permissions pane at the bottom of the Configure window. Search for and select the required users. Select the required role from the Role to Assign drop-down list. Click OK to add the user permissions. To remove user permissions from a MAC address pool: Select the user permission to be removed in the user permissions pane at the bottom of the Configure window. Click Remove to remove the user permissions. 1.5.4. Removing MAC Address Pools You can remove a created MAC address pool if the pool is not associated with a cluster, but the default MAC address pool cannot be removed. Removing a MAC Address Pool Click Administration Configure . Click the MAC Address Pools tab. Select the MAC address pool to be removed. Click Remove . Click OK . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-mac_address_pools
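If you prefer to script these Administration Portal steps, MAC address pools are also exposed through the RHV REST API. The sketch below is an assumption-heavy illustration: the /ovirt-engine/api/macpools endpoint, the XML element names, the engine FQDN, the credentials, and the MAC range are all placeholders to verify against your own engine's API documentation before use.

#!/usr/bin/env bash
# Sketch: create a MAC address pool for a cluster via the REST API instead of the UI.
# Endpoint and payload structure are assumed from the oVirt/RHV v4 API; verify locally.
set -euo pipefail
ENGINE="rhvm.example.com"   # assumed engine FQDN

curl --insecure --user 'admin@internal:password' \
  --header 'Content-Type: application/xml' \
  --request POST "https://${ENGINE}/ovirt-engine/api/macpools" \
  --data '<mac_pool>
            <name>cluster1-pool</name>
            <description>Dedicated MAC range for cluster1</description>
            <allow_duplicates>false</allow_duplicates>
            <ranges>
              <range>
                <from>00:1a:4a:16:01:00</from>
                <to>00:1a:4a:16:01:ff</to>
              </range>
            </ranges>
          </mac_pool>'

As with the Administration Portal procedure, the new pool still has to be assigned to a cluster before its range is used, and the range you choose should not overlap with pools used by other clusters on the same network.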
Chapter 4. Avro Jackson | Chapter 4. Avro Jackson Jackson Avro is a Data Format which uses the Jackson library with the Avro extension to unmarshal an Avro payload into Java objects or to marshal Java objects into an Avro payload. Note If you are familiar with Jackson, this Avro data format behaves in the same way as its JSON counterpart, and thus can be used with classes annotated for JSON serialization/deserialization. from("kafka:topic"). unmarshal().avro(AvroLibrary.Jackson, JsonNode.class). to("log:info"); 4.1. Dependencies When using avro-jackson with Red Hat build of Camel Spring Boot make sure to add the Maven dependency to have support for auto configuration. <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-avro-starter</artifactId> </dependency> 4.2. Configuring the SchemaResolver Since Avro serialization is schema-based, this data format requires that you provide a SchemaResolver object that is able to lookup the schema for each exchange that is going to be marshalled/unmarshalled. You can add a single SchemaResolver to the registry and it will be looked up automatically. Or you can explicitly specify the reference to a custom SchemaResolver. 4.3. Avro Jackson Options The Avro Jackson dataformat supports 18 options, which are listed below. Name Default Java Type Description objectMapper String Lookup and use the existing ObjectMapper with the given id when using Jackson. useDefaultObjectMapper Boolean Whether to lookup and use default Jackson ObjectMapper from the registry. unmarshalType String Class name of the java type to use when unmarshalling. jsonView String When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL. allowJmsType Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionType String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList Boolean To unmarshal to a List of Map or a List of Pojo. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. 
allowUnmarshallType Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. timezone String If set then Jackson will use the Timezone when marshalling/unmarshalling. autoDiscoverObjectMapper Boolean If set to true then Jackson will lookup for an objectMapper into the registry. contentTypeHeader Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. schemaResolver String Optional schema resolver used to lookup schemas for the data in transit. autoDiscoverSchemaResolver Boolean When not disabled, the SchemaResolver will be looked up into the registry. 4.4. Using custom AvroMapper You can configure JacksonAvroDataFormat to use a custom AvroMapper in case you need more control of the mapping configuration. If you setup a single AvroMapper in the registry, then Camel will automatic lookup and use this AvroMapper . 4.5. Spring Boot Auto-Configuration The component supports 19 options, which are listed below. Name Description Default Type camel.dataformat.avro-jackson.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.avro-jackson.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.avro-jackson.auto-discover-object-mapper If set to true then Jackson will lookup for an objectMapper into the registry. false Boolean camel.dataformat.avro-jackson.auto-discover-schema-resolver When not disabled, the SchemaResolver will be looked up into the registry. true Boolean camel.dataformat.avro-jackson.collection-type Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.avro-jackson.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.avro-jackson.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.avro-jackson.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma. String camel.dataformat.avro-jackson.enabled Whether to enable auto configuration of the avro-jackson data format. This is enabled by default. Boolean camel.dataformat.avro-jackson.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. 
And you want to skip these null values, you can set this option to NON_NULL. String camel.dataformat.avro-jackson.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations. String camel.dataformat.avro-jackson.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.avro-jackson.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.avro-jackson.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.avro-jackson.schema-resolver Optional schema resolver used to lookup schemas for the data in transit. String camel.dataformat.avro-jackson.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. String camel.dataformat.avro-jackson.unmarshal-type Class name of the java type to use when unmarshalling. String camel.dataformat.avro-jackson.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.avro-jackson.use-list To unmarshal to a List of Map or a List of Pojo. false Boolean | [
"from(\"kafka:topic\"). unmarshal().avro(AvroLibrary.Jackson, JsonNode.class). to(\"log:info\");",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jackson-avro-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-avro-jackson-dataformat-starter |
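Because all of the options listed above are Spring Boot auto-configuration properties, the usual way to set them in a Red Hat build of Camel Spring Boot project is through application.properties. The snippet below is a sketch: the POJO class and the #orderSchemaResolver bean reference are assumed names from a hypothetical project, and only a few of the listed options are shown.

#!/usr/bin/env bash
# Sketch: append Avro Jackson data format settings to a Camel Spring Boot project.
cat >> src/main/resources/application.properties <<'EOF'
# Unmarshal Avro payloads into this POJO (assumed class in your own project).
camel.dataformat.avro-jackson.unmarshal-type=org.example.OrderEvent
# Look up a custom SchemaResolver bean by reference (assumed bean name in the registry).
camel.dataformat.avro-jackson.schema-resolver=#orderSchemaResolver
# Keep the defaults of reusing the registry ObjectMapper and setting the Content-Type header.
camel.dataformat.avro-jackson.use-default-object-mapper=true
camel.dataformat.avro-jackson.content-type-header=true
EOF

With these properties in place, a route can simply call unmarshal().avro(AvroLibrary.Jackson, ...) as shown earlier, and the auto-configured data format picks up the unmarshal type and schema resolver without further Java configuration.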
Chapter 19. Troubleshooting issues in provider mode 19.1. Force deletion of storage in provider clusters When a client cluster is deleted without performing the offboarding process to remove all the resources from the corresponding provider cluster, you must perform a force deletion of the corresponding storage consumer from the provider cluster. This helps to release the storage space that was claimed by the client. Caution It is recommended to use this method only in unavoidable situations such as accidental deletion of storage client clusters. Prerequisites Access to the OpenShift Data Foundation storage cluster in provider mode. Procedure Click Storage Storage Clients from the OpenShift console. Click the delete icon at the far right of the listed storage client cluster. The delete icon is enabled only 5 minutes after the cluster's last heartbeat. Click Confirm . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/troubleshooting_openshift_data_foundation/troubleshooting_issues_in_provider_mode
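Before using the force-delete icon, it can help to check the same information from the command line. The commands below are a sketch only: the storageconsumers resource name, the openshift-storage namespace, and the .status.lastHeartbeat field path are assumptions about a default provider-mode deployment and may differ in your cluster.

# Sketch: list storage consumers and their last heartbeat before force-deleting one.
oc -n openshift-storage get storageconsumers

# Show when each consumer last reported in (the console enables the delete icon only
# five minutes after the last heartbeat). The field path is an assumption; adjust as needed.
oc -n openshift-storage get storageconsumers \
  -o custom-columns=NAME:.metadata.name,LAST_HEARTBEAT:.status.lastHeartbeat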
Chapter 6. File Integrity Operator | Chapter 6. File Integrity Operator 6.1. File Integrity Operator release notes The File Integrity Operator for OpenShift Container Platform continually runs file integrity checks on RHCOS nodes. These release notes track the development of the File Integrity Operator in the OpenShift Container Platform. For an overview of the File Integrity Operator, see Understanding the File Integrity Operator . To access the latest release, see Updating the File Integrity Operator . 6.1.1. OpenShift File Integrity Operator 1.3.1 The following advisory is available for the OpenShift File Integrity Operator 1.3.1: RHBA-2023:3600 OpenShift File Integrity Operator Bug Fix Update 6.1.1.1. New features and enhancements FIO now includes kubelet certificates as default files, excluding them from issuing warnings when they're managed by OpenShift Container Platform. ( OCPBUGS-14348 ) FIO now correctly directs email to the address for Red Hat Technical Support. ( OCPBUGS-5023 ) 6.1.1.2. Bug fixes Previously, FIO would not clean up FileIntegrityNodeStatus CRDs when nodes are removed from the cluster. FIO has been updated to correctly clean up node status CRDs on node removal. ( OCPBUGS-4321 ) Previously, FIO would also erroneously indicate that new nodes failed integrity checks. FIO has been updated to correctly show node status CRDs when adding new nodes to the cluster. This provides correct node status notifications. ( OCPBUGS-8502 ) Previously, when FIO was reconciling FileIntegrity CRDs, it would pause scanning until the reconciliation was done. This caused an overly aggressive re-initiatization process on nodes not impacted by the reconciliation. This problem also resulted in unnecessary daemonsets for machine config pools which are unrelated to the FileIntegrity being changed. FIO correctly handles these cases and only pauses AIDE scanning for nodes that are affected by file integrity changes. ( CMP-1097 ) 6.1.1.3. Known Issues In FIO 1.3.1, increasing nodes in IBM Z clusters might result in Failed File Integrity node status. For more information, see Adding nodes in IBM Power clusters can result in failed File Integrity node status . 6.1.2. OpenShift File Integrity Operator 1.2.1 The following advisory is available for the OpenShift File Integrity Operator 1.2.1: RHBA-2023:1684 OpenShift File Integrity Operator Bug Fix Update This release includes updated container dependencies. 6.1.3. OpenShift File Integrity Operator 1.2.0 The following advisory is available for the OpenShift File Integrity Operator 1.2.0: RHBA-2023:1273 OpenShift File Integrity Operator Enhancement Update 6.1.3.1. New features and enhancements The File Integrity Operator Custom Resource (CR) now contains an initialDelay feature that specifies the number of seconds to wait before starting the first AIDE integrity check. For more information, see Creating the FileIntegrity custom resource . The File Integrity Operator is now stable and the release channel is upgraded to stable . Future releases will follow Semantic Versioning . To access the latest release, see Updating the File Integrity Operator . 6.1.4. OpenShift File Integrity Operator 1.0.0 The following advisory is available for the OpenShift File Integrity Operator 1.0.0: RHBA-2023:0037 OpenShift File Integrity Operator Bug Fix Update 6.1.5. OpenShift File Integrity Operator 0.1.32 The following advisory is available for the OpenShift File Integrity Operator 0.1.32: RHBA-2022:7095 OpenShift File Integrity Operator Bug Fix Update 6.1.5.1. 
Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand from which namespace the alert originated. Now, the Operator sets the appropriate namespace, providing more information about the alert. ( BZ#2112394 ) Previously, The File Integrity Operator did not update the metrics service on Operator startup, causing the metrics targets to be unreachable. With this release, the File Integrity Operator now ensures the metrics service is updated on Operator startup. ( BZ#2115821 ) 6.1.6. OpenShift File Integrity Operator 0.1.30 The following advisory is available for the OpenShift File Integrity Operator 0.1.30: RHBA-2022:5538 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.1.6.1. New features and enhancements The File Integrity Operator is now supported on the following architectures: IBM Power IBM Z and LinuxONE 6.1.6.2. Bug fixes Previously, alerts issued by the File Integrity Operator did not set a namespace, making it difficult to understand where the alert originated. Now, the Operator sets the appropriate namespace, increasing understanding of the alert. ( BZ#2101393 ) 6.1.7. OpenShift File Integrity Operator 0.1.24 The following advisory is available for the OpenShift File Integrity Operator 0.1.24: RHBA-2022:1331 OpenShift File Integrity Operator Bug Fix 6.1.7.1. New features and enhancements You can now configure the maximum number of backups stored in the FileIntegrity Custom Resource (CR) with the config.maxBackups attribute. This attribute specifies the number of AIDE database and log backups left over from the re-init process to keep on the node. Older backups beyond the configured number are automatically pruned. The default is set to five backups. 6.1.7.2. Bug fixes Previously, upgrading the Operator from versions older than 0.1.21 to 0.1.22 could cause the re-init feature to fail. This was a result of the Operator failing to update configMap resource labels. Now, upgrading to the latest version fixes the resource labels. ( BZ#2049206 ) Previously, when enforcing the default configMap script contents, the wrong data keys were compared. This resulted in the aide-reinit script not being updated properly after an Operator upgrade, and caused the re-init process to fail. Now, daemonSets run to completion and the AIDE database re-init process executes successfully. ( BZ#2072058 ) 6.1.8. OpenShift File Integrity Operator 0.1.22 The following advisory is available for the OpenShift File Integrity Operator 0.1.22: RHBA-2022:0142 OpenShift File Integrity Operator Bug Fix 6.1.8.1. Bug fixes Previously, a system with a File Integrity Operator installed might interrupt the OpenShift Container Platform update, due to the /etc/kubernetes/aide.reinit file. This occurred if the /etc/kubernetes/aide.reinit file was present, but later removed prior to the ostree validation. With this update, /etc/kubernetes/aide.reinit is moved to the /run directory so that it does not conflict with the OpenShift Container Platform update. ( BZ#2033311 ) 6.1.9. OpenShift File Integrity Operator 0.1.21 The following advisory is available for the OpenShift File Integrity Operator 0.1.21: RHBA-2021:4631 OpenShift File Integrity Operator Bug Fix and Enhancement Update 6.1.9.1. New features and enhancements The metrics related to FileIntegrity scan results and processing metrics are displayed on the monitoring dashboard on the web console. The results are labeled with the prefix of file_integrity_operator_ . 
If a node has an integrity failure for more than 1 second, the default PrometheusRule provided in the operator namespace alerts with a warning. The following dynamic Machine Config Operator and Cluster Version Operator related filepaths are excluded from the default AIDE policy to help prevent false positives during node updates: /etc/machine-config-daemon/currentconfig /etc/pki/ca-trust/extracted/java/cacerts /etc/cvo/updatepayloads /root/.kube The AIDE daemon process has stability improvements over v0.1.16, and is more resilient to errors that might occur when the AIDE database is initialized. 6.1.9.2. Bug fixes Previously, when the Operator automatically upgraded, outdated daemon sets were not removed. With this release, outdated daemon sets are removed during the automatic upgrade. 6.1.10. Additional resources Understanding the File Integrity Operator 6.2. Installing the File Integrity Operator 6.2.1. Installing the File Integrity Operator using the web console Prerequisites You must have admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Search for the File Integrity Operator, then click Install . Keep the default selection of Installation mode and namespace to ensure that the Operator will be installed to the openshift-file-integrity namespace. Click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-file-integrity namespace and its status is Succeeded . If the Operator is not installed successfully: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-file-integrity project that are reporting issues. 6.2.2. Installing the File Integrity Operator using the CLI Prerequisites You must have admin privileges. Procedure Create a Namespace object YAML file by running: USD oc create -f <file-name>.yaml Example output apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" name: openshift-file-integrity Create the OperatorGroup object YAML file: USD oc create -f <file-name>.yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity Create the Subscription object YAML file: USD oc create -f <file-name>.yaml Example output apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: "stable" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace Verification Verify the installation succeeded by inspecting the CSV file: USD oc get csv -n openshift-file-integrity Verify that the File Integrity Operator is up and running: USD oc get deploy -n openshift-file-integrity 6.2.3. Additional resources The File Integrity Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . 6.3. Updating the File Integrity Operator As a cluster administrator, you can update the File Integrity Operator on your OpenShift Container Platform cluster. 6.3.1. 
Preparing for an Operator update The subscription of an installed Operator specifies an update channel that tracks and receives updates for the Operator. You can change the update channel to start tracking and receiving updates from a newer channel. The names of update channels in a subscription can differ between Operators, but the naming scheme typically follows a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator ( 1.2 , 1.3 ) or a release frequency ( stable , fast ). Note You cannot change installed Operators to a channel that is older than the current channel. Red Hat Customer Portal Labs include the following application that helps administrators prepare to update their Operators: Red Hat OpenShift Container Platform Operator Update Information Checker You can use the application to search for Operator Lifecycle Manager-based Operators and verify the available Operator version per update channel across different versions of OpenShift Container Platform. Cluster Version Operator-based Operators are not included. 6.3.2. Changing the update channel for an Operator You can change the update channel for an Operator by using the OpenShift Container Platform web console. Tip If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the web console, navigate to Operators Installed Operators . Click the name of the Operator you want to change the update channel for. Click the Subscription tab. Click the name of the update channel under Channel . Click the newer update channel that you want to change to, then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 6.3.3. Manually approving a pending Operator update If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. Prerequisites An Operator previously installed using Operator Lifecycle Manager (OLM). Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Operators Installed Operators . Operators that have a pending update display a status with Upgrade available . Click the name of the Operator you want to update. Click the Subscription tab. Any update requiring approval are displayed to Upgrade Status . For example, it might display 1 requires approval . Click 1 requires approval , then click Preview Install Plan . Review the resources that are listed as available for update. When satisfied, click Approve . Navigate back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . 6.4. 
Understanding the File Integrity Operator The File Integrity Operator is an OpenShift Container Platform Operator that continually runs file integrity checks on the cluster nodes. It deploys a daemon set that initializes and runs privileged advanced intrusion detection environment (AIDE) containers on each node, providing a status object with a log of files that are modified during the initial run of the daemon set pods. Important Currently, only Red Hat Enterprise Linux CoreOS (RHCOS) nodes are supported. 6.4.1. Creating the FileIntegrity custom resource An instance of a FileIntegrity custom resource (CR) represents a set of continuous file integrity scans for one or more nodes. Each FileIntegrity CR is backed by a daemon set running AIDE on the nodes matching the FileIntegrity CR specification. Procedure Create the following example FileIntegrity CR named worker-fileintegrity.yaml to enable scans on worker nodes: Example FileIntegrity CR apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: "" tolerations: 2 - key: "myNode" operator: "Exists" effect: "NoSchedule" config: 3 name: "myconfig" namespace: "openshift-file-integrity" key: "config" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7 1 Defines the selector for scheduling node scans. 2 Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration allowing running on main and infra nodes is applied. 3 Define a ConfigMap containing an AIDE configuration to use. 4 The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node might be resource intensive, so it can be useful to specify a longer interval. Default is 900 seconds (15 minutes). 5 The maximum number of AIDE database and log backups (leftover from the re-init process) to keep on a node. Older backups beyond this number are automatically pruned by the daemon. Default is set to 5. 6 The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. 7 The running status of the FileIntegrity instance. Statuses are Initializing , Pending , or Active . Initializing The FileIntegrity object is currently initializing or re-initializing the AIDE database. Pending The FileIntegrity deployment is still being created. Active The scans are active and ongoing. Apply the YAML file to the openshift-file-integrity namespace: USD oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity Verification Confirm the FileIntegrity object was created successfully by running the following command: USD oc get fileintegrities -n openshift-file-integrity Example output NAME AGE worker-fileintegrity 14s 6.4.2. Checking the FileIntegrity custom resource status The FileIntegrity custom resource (CR) reports its status through the . status.phase subresource. Procedure To query the FileIntegrity CR status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status.phase }" Example output Active 6.4.3. FileIntegrity custom resource phases Pending - The phase after the custom resource (CR) is created. Active - The phase when the backing daemon set is up and running. Initializing - The phase when the AIDE database is being reinitialized. 6.4.4. Understanding the FileIntegrityNodeStatuses object The scan results of the FileIntegrity CR are reported in another object called FileIntegrityNodeStatuses . 
USD oc get fileintegritynodestatuses Example output NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s Note It might take some time for the FileIntegrityNodeStatus object results to be available. There is one result object per node. The nodeName attribute of each FileIntegrityNodeStatus object corresponds to the node being scanned. The status of the file integrity scan is represented in the results array, which holds scan conditions. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq The fileintegritynodestatus object reports the latest status of an AIDE run and exposes the status as Failed , Succeeded , or Errored in a status field. USD oc get fileintegritynodestatuses -w Example output NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded 6.4.5. FileIntegrityNodeStatus CR status types These conditions are reported in the results array of the corresponding FileIntegrityNodeStatus CR status: Succeeded - The integrity check passed; the files and directories covered by the AIDE check have not been modified since the database was last initialized. Failed - The integrity check failed; some files or directories covered by the AIDE check have been modified since the database was last initialized. Errored - The AIDE scanner encountered an internal error. 6.4.5.1. FileIntegrityNodeStatus CR success example Example output of a condition with a success status [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:57Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:46:03Z" } ] [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:45:48Z" } ] In this case, all three scans succeeded and so far there are no other conditions. 6.4.5.2. FileIntegrityNodeStatus CR failure status example To simulate a failure condition, modify one of the files AIDE tracks. For example, modify /etc/resolv.conf on one of the worker nodes: USD oc debug node/ip-10-0-130-192.ec2.internal Example output Creating debug namespace/openshift-debug-node-ldfbj ... Starting pod/ip-10-0-130-192ec2internal-debug ... 
To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo "# integrity test" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod ... Removing debug namespace/openshift-debug-node-ldfbj ... After some time, the Failed condition is reported in the results array of the corresponding FileIntegrityNodeStatus object. The Succeeded condition is retained, which allows you to pinpoint the time the check failed. USD oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r Alternatively, if you are not mentioning the object name, run: USD oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq Example output [ { "condition": "Succeeded", "lastProbeTime": "2020-09-15T12:54:14Z" }, { "condition": "Failed", "filesChanged": 1, "lastProbeTime": "2020-09-15T12:57:20Z", "resultConfigMapName": "aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed", "resultConfigMapNamespace": "openshift-file-integrity" } ] The Failed condition points to a config map that gives more details about what exactly failed and why: USD oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Example output Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none> Due to the config map data size limit, AIDE logs over 1 MB are added to the failure config map as a base64-encoded gzip archive. In this case, you want to pipe the output of the above command to base64 --decode | gunzip . Compressed logs are indicated by the presence of a file-integrity.openshift.io/compressed annotation key in the config map. 6.4.6. Understanding events Transitions in the status of the FileIntegrity and FileIntegrityNodeStatus objects are logged by events . The creation time of the event reflects the latest transition, such as Initializing to Active , and not necessarily the latest scan result. However, the newest event always reflects the most recent status. USD oc get events --field-selector reason=FileIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active When a node scan fails, an event is created with the add/changed/removed and config map information. 
USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed Changes to the number of added, changed, or removed files results in a new event, even if the status of the node has not transitioned. USD oc get events --field-selector reason=NodeIntegrityStatus Example output LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 6.5. Configuring the Custom File Integrity Operator 6.5.1. Viewing FileIntegrity object attributes As with any Kubernetes custom resources (CRs), you can run oc explain fileintegrity , and then look at the individual attributes using: USD oc explain fileintegrity.spec USD oc explain fileintegrity.spec.config 6.5.2. Important attributes Table 6.1. Important spec and spec.config attributes Attribute Description spec.nodeSelector A map of key-values pairs that must match with node's labels in order for the AIDE pods to be schedulable on that node. The typical use is to set only a single key-value pair where node-role.kubernetes.io/worker: "" schedules AIDE on all worker nodes, node.openshift.io/os_id: "rhcos" schedules on all Red Hat Enterprise Linux CoreOS (RHCOS) nodes. spec.debug A boolean attribute. If set to true , the daemon running in the AIDE deamon set's pods would output extra information. spec.tolerations Specify tolerations to schedule on nodes with custom taints. When not specified, a default toleration is applied, which allows tolerations to run on control plane nodes. 
spec.config.gracePeriod The number of seconds to pause in between AIDE integrity checks. Frequent AIDE checks on a node can be resource intensive, so it can be useful to specify a longer interval. Defaults to 900 , or 15 minutes. maxBackups The maximum number of AIDE database and log backups leftover from the re-init process to keep on a node. Older backups beyond this number are automatically pruned by the daemon. spec.config.name Name of a configMap that contains custom AIDE configuration. If omitted, a default configuration is created. spec.config.namespace Namespace of a configMap that contains custom AIDE configuration. If unset, the FIO generates a default configuration suitable for RHCOS systems. spec.config.key Key that contains actual AIDE configuration in a config map specified by name and namespace . The default value is aide.conf . spec.config.initialDelay The number of seconds to wait before starting the first AIDE integrity check. Default is set to 0. This attribute is optional. 6.5.3. Examine the default configuration The default File Integrity Operator configuration is stored in a config map with the same name as the FileIntegrity CR. Procedure To examine the default config, run: USD oc describe cm/worker-fileintegrity 6.5.4. Understanding the default File Integrity Operator configuration Below is an excerpt from the aide.conf key of the config map: @@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\..* PERMS /hostroot/root/ CONTENT_EX The default configuration for a FileIntegrity instance provides coverage for files under the following directories: /root /boot /usr /etc The following directories are not covered: /var /opt Some OpenShift Container Platform-specific excludes under /etc/ 6.5.5. Supplying a custom AIDE configuration Any entries that configure AIDE internal behavior such as DBDIR , LOGDIR , database , and database_out are overwritten by the Operator. The Operator would add a prefix to /hostroot/ before all paths to be watched for integrity changes. This makes reusing existing AIDE configs that might often not be tailored for a containerized environment and start from the root directory easier. Note /hostroot is the directory where the pods running AIDE mount the host's file system. Changing the configuration triggers a reinitializing of the database. 6.5.6. Defining a custom File Integrity Operator configuration This example focuses on defining a custom configuration for a scanner that runs on the control plane nodes based on the default configuration provided for the worker-fileintegrity CR. This workflow might be useful if you are planning to deploy a custom software running as a daemon set and storing its data under /opt/mydaemon on the control plane nodes. Procedure Make a copy of the default configuration. Edit the default configuration with the files that must be watched or excluded. Store the edited contents in a new config map. Point the FileIntegrity object to the new config map through the attributes in spec.config . Extract the default configuration: USD oc extract cm/worker-fileintegrity --keys=aide.conf This creates a file named aide.conf that you can edit. 
To illustrate how the Operator post-processes the paths, this example adds an exclude directory without the prefix: USD vim aide.conf Example output /hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db Exclude a path specific to control plane nodes: !/opt/mydaemon/ Store the other content in /etc : /hostroot/etc/ CONTENT_EX Create a config map based on this file: USD oc create cm master-aide-conf --from-file=aide.conf Define a FileIntegrity CR manifest that references the config map: apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: "" config: name: master-aide-conf namespace: openshift-file-integrity The Operator processes the provided config map file and stores the result in a config map with the same name as the FileIntegrity object: USD oc describe cm/master-fileintegrity | grep /opt/mydaemon Example output !/hostroot/opt/mydaemon 6.5.7. Changing the custom File Integrity configuration To change the File Integrity configuration, never change the generated config map. Instead, change the config map that is linked to the FileIntegrity object through the spec.name , namespace , and key attributes. 6.6. Performing advanced Custom File Integrity Operator tasks 6.6.1. Reinitializing the database If the File Integrity Operator detects a change that was planned, it might be required to reinitialize the database. Procedure Annotate the FileIntegrity custom resource (CR) with file-integrity.openshift.io/re-init : USD oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init= The old database and log files are backed up and a new database is initialized. The old database and logs are retained on the nodes under /etc/kubernetes , as seen in the following output from a pod spawned using oc debug : Example output ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55 To provide some permanence of record, the resulting config maps are not owned by the FileIntegrity object, so manual cleanup is necessary. As a result, any integrity failures would still be visible in the FileIntegrityNodeStatus object. 6.6.2. Machine config integration In OpenShift Container Platform 4, the cluster node configuration is delivered through MachineConfig objects. You can assume that the changes to files that are caused by a MachineConfig object are expected and should not cause the file integrity scan to fail. To suppress changes to files caused by MachineConfig object updates, the File Integrity Operator watches the node objects; when a node is being updated, the AIDE scans are suspended for the duration of the update. When the update finishes, the database is reinitialized and the scans resume. 
This pause and resume logic only applies to updates through the MachineConfig API, as they are reflected in the node object annotations. 6.6.3. Exploring the daemon sets Each FileIntegrity object represents a scan on a number of nodes. The scan itself is performed by pods managed by a daemon set. To find the daemon set that represents a FileIntegrity object, run: USD oc -n openshift-file-integrity get ds/aide-worker-fileintegrity To list the pods in that daemon set, run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity To view logs of a single AIDE pod, call oc logs on one of the pods. USD oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6 Example output Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check ... The config maps created by the AIDE daemon are not retained and are deleted after the File Integrity Operator processes them. However, on failure and error, the contents of these config maps are copied to the config map that the FileIntegrityNodeStatus object points to. 6.7. Troubleshooting the File Integrity Operator 6.7.1. General troubleshooting Issue You want to generally troubleshoot issues with the File Integrity Operator. Resolution Enable the debug flag in the FileIntegrity object. The debug flag increases the verbosity of the daemons that run in the DaemonSet pods and run the AIDE checks. 6.7.2. Checking the AIDE configuration Issue You want to check the AIDE configuration. Resolution The AIDE configuration is stored in a config map with the same name as the FileIntegrity object. All AIDE configuration config maps are labeled with file-integrity.openshift.io/aide-conf . 6.7.3. Determining the FileIntegrity object's phase Issue You want to determine if the FileIntegrity object exists and see its current status. Resolution To see the FileIntegrity object's current status, run: USD oc get fileintegrities/worker-fileintegrity -o jsonpath="{ .status }" Once the FileIntegrity object and the backing daemon set are created, the status should switch to Active . If it does not, check the Operator pod logs. 6.7.4. Determining that the daemon set's pods are running on the expected nodes Issue You want to confirm that the daemon set exists and that its pods are running on the nodes you expect them to run on. Resolution Run: USD oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity Note Adding -owide includes the IP address of the node that the pod is running on. To check the logs of the daemon pods, run oc logs . Check the return value of the AIDE command to see if the check passed or failed. | [
"oc create -f <file-name>.yaml",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" name: openshift-file-integrity",
"oc create -f <file-name>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: targetNamespaces: - openshift-file-integrity",
"oc create -f <file-name>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: file-integrity-operator namespace: openshift-file-integrity spec: channel: \"stable\" installPlanApproval: Automatic name: file-integrity-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc get csv -n openshift-file-integrity",
"oc get deploy -n openshift-file-integrity",
"apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: worker-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: 1 node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"myNode\" operator: \"Exists\" effect: \"NoSchedule\" config: 3 name: \"myconfig\" namespace: \"openshift-file-integrity\" key: \"config\" gracePeriod: 20 4 maxBackups: 5 5 initialDelay: 60 6 debug: false status: phase: Active 7",
"oc apply -f worker-fileintegrity.yaml -n openshift-file-integrity",
"oc get fileintegrities -n openshift-file-integrity",
"NAME AGE worker-fileintegrity 14s",
"oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status.phase }\"",
"Active",
"oc get fileintegritynodestatuses",
"NAME AGE worker-fileintegrity-ip-10-0-130-192.ec2.internal 101s worker-fileintegrity-ip-10-0-147-133.ec2.internal 109s worker-fileintegrity-ip-10-0-165-160.ec2.internal 102s",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq",
"oc get fileintegritynodestatuses -w",
"NAME NODE STATUS example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-169-137.us-east-2.compute.internal ip-10-0-169-137.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-134-186.us-east-2.compute.internal ip-10-0-134-186.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-222-188.us-east-2.compute.internal ip-10-0-222-188.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-194-66.us-east-2.compute.internal ip-10-0-194-66.us-east-2.compute.internal Failed example-fileintegrity-ip-10-0-150-230.us-east-2.compute.internal ip-10-0-150-230.us-east-2.compute.internal Succeeded example-fileintegrity-ip-10-0-180-200.us-east-2.compute.internal ip-10-0-180-200.us-east-2.compute.internal Succeeded",
"[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:57Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:46:03Z\" } ] [ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:45:48Z\" } ]",
"oc debug node/ip-10-0-130-192.ec2.internal",
"Creating debug namespace/openshift-debug-node-ldfbj Starting pod/ip-10-0-130-192ec2internal-debug To use host binaries, run `chroot /host` Pod IP: 10.0.130.192 If you don't see a command prompt, try pressing enter. sh-4.2# echo \"# integrity test\" >> /host/etc/resolv.conf sh-4.2# exit Removing debug pod Removing debug namespace/openshift-debug-node-ldfbj",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io/worker-fileintegrity-ip-10-0-130-192.ec2.internal -ojsonpath='{.results}' | jq -r",
"oc get fileintegritynodestatuses.fileintegrity.openshift.io -ojsonpath='{.items[*].results}' | jq",
"[ { \"condition\": \"Succeeded\", \"lastProbeTime\": \"2020-09-15T12:54:14Z\" }, { \"condition\": \"Failed\", \"filesChanged\": 1, \"lastProbeTime\": \"2020-09-15T12:57:20Z\", \"resultConfigMapName\": \"aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed\", \"resultConfigMapNamespace\": \"openshift-file-integrity\" } ]",
"oc describe cm aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed",
"Name: aide-ds-worker-fileintegrity-ip-10-0-130-192.ec2.internal-failed Namespace: openshift-file-integrity Labels: file-integrity.openshift.io/node=ip-10-0-130-192.ec2.internal file-integrity.openshift.io/owner=worker-fileintegrity file-integrity.openshift.io/result-log= Annotations: file-integrity.openshift.io/files-added: 0 file-integrity.openshift.io/files-changed: 1 file-integrity.openshift.io/files-removed: 0 Data integritylog: ------ AIDE 0.15.1 found differences between database and filesystem!! Start timestamp: 2020-09-15 12:58:15 Summary: Total number of files: 31553 Added files: 0 Removed files: 0 Changed files: 1 --------------------------------------------------- Changed files: --------------------------------------------------- changed: /hostroot/etc/resolv.conf --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /hostroot/etc/resolv.conf SHA512 : sTQYpB/AL7FeoGtu/1g7opv6C+KT1CBJ , qAeM+a8yTgHPnIHMaRlS+so61EN8VOpg Events: <none>",
"oc get events --field-selector reason=FileIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 97s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Pending 67s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Initializing 37s Normal FileIntegrityStatus fileintegrity/example-fileintegrity Active",
"oc get events --field-selector reason=NodeIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed",
"oc get events --field-selector reason=NodeIntegrityStatus",
"LAST SEEN TYPE REASON OBJECT MESSAGE 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-134-173.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-168-238.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-169-175.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-152-92.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-158-144.ec2.internal 114m Normal NodeIntegrityStatus fileintegrity/example-fileintegrity no changes to node ip-10-0-131-30.ec2.internal 87m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:1,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed 40m Warning NodeIntegrityStatus fileintegrity/example-fileintegrity node ip-10-0-152-92.ec2.internal has changed! a:3,c:1,r:0 \\ log:openshift-file-integrity/aide-ds-example-fileintegrity-ip-10-0-152-92.ec2.internal-failed",
"oc explain fileintegrity.spec",
"oc explain fileintegrity.spec.config",
"oc describe cm/worker-fileintegrity",
"@@define DBDIR /hostroot/etc/kubernetes @@define LOGDIR /hostroot/etc/kubernetes database=file:@@{DBDIR}/aide.db.gz database_out=file:@@{DBDIR}/aide.db.gz gzip_dbout=yes verbose=5 report_url=file:@@{LOGDIR}/aide.log report_url=stdout PERMS = p+u+g+acl+selinux+xattrs CONTENT_EX = sha512+ftype+p+u+g+n+acl+selinux+xattrs /hostroot/boot/ CONTENT_EX /hostroot/root/\\..* PERMS /hostroot/root/ CONTENT_EX",
"oc extract cm/worker-fileintegrity --keys=aide.conf",
"vim aide.conf",
"/hostroot/etc/kubernetes/static-pod-resources !/hostroot/etc/kubernetes/aide.* !/hostroot/etc/kubernetes/manifests !/hostroot/etc/docker/certs.d !/hostroot/etc/selinux/targeted !/hostroot/etc/openvswitch/conf.db",
"!/opt/mydaemon/",
"/hostroot/etc/ CONTENT_EX",
"oc create cm master-aide-conf --from-file=aide.conf",
"apiVersion: fileintegrity.openshift.io/v1alpha1 kind: FileIntegrity metadata: name: master-fileintegrity namespace: openshift-file-integrity spec: nodeSelector: node-role.kubernetes.io/master: \"\" config: name: master-aide-conf namespace: openshift-file-integrity",
"oc describe cm/master-fileintegrity | grep /opt/mydaemon",
"!/hostroot/opt/mydaemon",
"oc annotate fileintegrities/worker-fileintegrity file-integrity.openshift.io/re-init=",
"ls -lR /host/etc/kubernetes/aide.* -rw-------. 1 root root 1839782 Sep 17 15:08 /host/etc/kubernetes/aide.db.gz -rw-------. 1 root root 1839783 Sep 17 14:30 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_38 -rw-------. 1 root root 73728 Sep 17 15:07 /host/etc/kubernetes/aide.db.gz.backup-20200917T15_07_55 -rw-r--r--. 1 root root 0 Sep 17 15:08 /host/etc/kubernetes/aide.log -rw-------. 1 root root 613 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_38 -rw-r--r--. 1 root root 0 Sep 17 15:07 /host/etc/kubernetes/aide.log.backup-20200917T15_07_55",
"oc -n openshift-file-integrity get ds/aide-worker-fileintegrity",
"oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity",
"oc -n openshift-file-integrity logs pod/aide-worker-fileintegrity-mr8x6",
"Starting the AIDE runner daemon initializing AIDE db initialization finished running aide check",
"oc get fileintegrities/worker-fileintegrity -o jsonpath=\"{ .status }\"",
"oc -n openshift-file-integrity get pods -lapp=aide-worker-fileintegrity"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/file-integrity-operator |
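The troubleshooting steps above can be combined into a short helper that collects the AIDE failure log for every node whose latest check failed. The following is a minimal sketch rather than part of the Operator tooling: it assumes jq is installed on the workstation, that the result config maps live in the openshift-file-integrity namespace as in the examples above, and that compressed logs are stored under the same integritylog key as plain ones.

#!/bin/bash
# Dump the AIDE failure log for each node whose latest integrity check failed.
ns=openshift-file-integrity

oc -n "$ns" get fileintegritynodestatuses -o json \
  | jq -r '.items[].results[]? | select(.condition == "Failed") | .resultConfigMapName' \
  | sort -u \
  | while read -r cm; do
      echo "==== ${cm} ===="
      # Logs over 1 MB are stored as base64-encoded gzip archives and flagged with
      # the file-integrity.openshift.io/compressed annotation.
      if oc -n "$ns" get cm "$cm" -o json \
           | jq -e '(.metadata.annotations // {}) | has("file-integrity.openshift.io/compressed")' >/dev/null; then
        oc -n "$ns" get cm "$cm" -o jsonpath='{.data.integritylog}' | base64 --decode | gunzip
      else
        oc -n "$ns" get cm "$cm" -o jsonpath='{.data.integritylog}'
      fi
    done

This mirrors the manual base64 --decode | gunzip step described earlier, so the same log text is produced whether or not the config map was compressed.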
Chapter 8. File and directory layouts | Chapter 8. File and directory layouts As a storage administrator, you can control how file or directory data is mapped to objects. This section describes how to: Understand file and directory layouts Set file and directory layouts View file and directory layout fields View individual layout fields Remove the directory layouts 8.1. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Deployment of a Ceph File System. The installation of the attr package. 8.2. Overview of file and directory layouts This section explains what file and directory layouts are in the context for the Ceph File System. A layout of a file or directory controls how its content is mapped to Ceph RADOS objects. The directory layouts serve primarily for setting an inherited layout for new files in that directory. To view and set a file or directory layout, use virtual extended attributes or extended file attributes ( xattrs ). The name of the layout attributes depends on whether a file is a regular file or a directory: Regular files layout attributes are called ceph.file.layout . Directories layout attributes are called ceph.dir.layout . Layouts Inheritance Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect children. If a directory does not have any layouts set, files inherit the layout from the closest directory to the layout in the directory structure. 8.3. Setting file and directory layout fields Use the setfattr command to set layout fields on a file or directory. Important When you modify the layout fields of a file, the file must be empty, otherwise an error occurs. Prerequisites Root-level access to the node. Procedure To modify layout fields on a file or directory: Syntax Replace: TYPE with file or dir . FIELD with the name of the field. VALUE with the new value of the field. PATH with the path to the file or directory. Example Additional Resources See the table in the Overview of file and directory layouts section of the Red Hat Ceph Storage File System Guide for more details. See the setfattr(1) manual page. 8.4. Viewing file and directory layout fields To use the getfattr command to view layout fields on a file or directory. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure To view layout fields on a file or directory as a single string: Syntax Replace PATH with the path to the file or directory. TYPE with file or dir . Example Note A directory does not have an explicit layout until you set it. Consequently, attempting to view the layout without first setting it fails because there are no changes to display. Additional Resources The getfattr(1) manual page. For more information, see Setting file and directory layout fields section in the Red Hat Ceph Storage File System Guide . 8.5. Viewing individual layout fields Use the getfattr command to view individual layout fields for a file or directory. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all nodes in the storage cluster. Procedure To view individual layout fields on a file or directory: Syntax Replace TYPE with file or dir . FIELD with the name of the field. PATH with the path to the file or directory. Example Note Pools in the pool field are indicated by name. However, newly created pools can be indicated by ID. Additional Resources The getfattr(1) manual page. 
For more information, see the File and directory layouts section in the Red Hat Ceph Storage File System Guide . 8.6. Removing directory layouts Use the setfattr command to remove layouts from a directory. Note When you set a file layout, you cannot change or remove it. Prerequisites A directory with a layout. Procedure To remove a layout from a directory: Syntax Example To remove the pool_namespace field: Syntax Example Note The pool_namespace field is the only field you can remove separately. Additional Resources The setfattr(1) manual page. 8.7. Additional Resources See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System Guide . See the getfattr(1) manual page for more information. See the setfattr(1) manual page for more information. | [
"setfattr -n ceph. TYPE .layout. FIELD -v VALUE PATH",
"setfattr -n ceph.file.layout.stripe_unit -v 1048576 test",
"getfattr -n ceph. TYPE .layout PATH",
"getfattr -n ceph.dir.layout /home/test ceph.dir.layout=\"stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data\"",
"getfattr -n ceph. TYPE .layout. FIELD _PATH",
"getfattr -n ceph.file.layout.pool test ceph.file.layout.pool=\"cephfs_data\"",
"setfattr -x ceph.dir.layout DIRECTORY_PATH",
"[user@client ~]USD setfattr -x ceph.dir.layout /home/cephfs",
"setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH",
"[user@client ~]USD setfattr -x ceph.dir.layout.pool_namespace /home/cephfs"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/file_system_guide/file-and-directory-layouts |
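As a worked example of the layout commands in this chapter, the following session sets a full layout on a new directory and confirms that a file created inside it inherits that layout. This is a sketch only: the /mnt/cephfs mount point, the archive directory, and the cephfs_data pool name are assumptions, and the client must have the attr package installed as listed in the prerequisites.

# Set a complete layout on a new directory
mkdir /mnt/cephfs/archive
setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/archive
setfattr -n ceph.dir.layout.stripe_count -v 2 /mnt/cephfs/archive
setfattr -n ceph.dir.layout.object_size -v 4194304 /mnt/cephfs/archive
setfattr -n ceph.dir.layout.pool -v cephfs_data /mnt/cephfs/archive

# A file created afterwards inherits the directory layout
touch /mnt/cephfs/archive/data.bin
getfattr -n ceph.file.layout /mnt/cephfs/archive/data.bin

The final getfattr call should print a single string along the lines of ceph.file.layout="stripe_unit=1048576 stripe_count=2 object_size=4194304 pool=cephfs_data", matching the directory fields set above.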
Chapter 2. Architecture | Chapter 2. Architecture The director advocates the use of native OpenStack APIs to configure, deploy, and manage OpenStack environments. This means that integration with the director requires integrating with these native OpenStack APIs and supporting components. The major benefit of using these APIs is that they are well documented, undergo extensive integration testing upstream, and are mature, which makes understanding how the director works easier for those who have a foundational knowledge of OpenStack. This also means the director automatically inherits core OpenStack feature enhancements, security patches, and bug fixes. The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of OpenStack components to install a fully operational OpenStack environment. This includes new OpenStack components that provision and control bare metal systems to use as OpenStack nodes. This provides a simple method for installing a complete Red Hat OpenStack Platform environment that is both lean and robust. The Red Hat OpenStack Platform director uses two main concepts: an Undercloud and an Overcloud. The director itself is composed of a subset of OpenStack components that form a single-system OpenStack environment, otherwise known as the Undercloud. The Undercloud acts as a management system that can create a production-level cloud for workloads to run. This production-level cloud is the Overcloud. For more information on the Overcloud and the Undercloud, see the Director Installation and Usage guide. Director ships with tools, utilities, and example templates for creating an Overcloud configuration. The director captures configuration data, parameters, and network topology information, then uses this information in conjunction with components such as Ironic, Heat, and Puppet to orchestrate an Overcloud installation. Partners have varied requirements. Understanding the director's architecture aids in understanding which components matter for a given integration effort. 2.1. Core Components This section examines some of the core components of the Red Hat OpenStack Platform director and describes how they contribute to Overcloud creation. 2.1.1. Ironic Ironic provides dedicated bare metal hosts to end users through self-service provisioning. The director uses Ironic to manage the lifecycle of the bare metal hardware in the Overcloud. Ironic has its own native API for defining bare metal nodes. Administrators aiming to provision OpenStack environments with the director must register their nodes with Ironic using a specific driver. The main supported driver is the Intelligent Platform Management Interface (IPMI), as most hardware contains some support for IPMI power management functions. However, Ironic also contains vendor-specific equivalents such as HP iLO or Dell DRAC. Ironic controls the power management of the nodes and gathers hardware information or facts using an introspection mechanism. The director uses the information obtained from the introspection process to match nodes to various OpenStack environment roles, such as Controller nodes, Compute nodes, and storage nodes. For example, a discovered node with 10 disks will more than likely be provisioned as a storage node. Partners who want director support for their hardware need to have driver coverage in Ironic. 2.1.2.
Heat Heat acts as an application stack orchestration engine. This allows organizations to define elements for a given application before deploying it to a cloud. This involves creating a stack template that includes a number of infrastructure resources (e.g. instances, networks, storage volumes, elastic IPs, etc) along with a set of parameters for configuration. Heat creates these resources based on a given dependency chain, monitors them for availability, and scales them where necessary. These templates enable application stacks to become portable and achieve repeatable results. The director uses the native OpenStack Heat APIs to provision and manage the resources associated with deploying an Overcloud. This includes precise details such as defining the number of nodes to provision per node role, the software components to configure for each node, and the order in which the director configures these components and node types. The director also uses Heat for troubleshooting a deployment and making changes post-deployment with ease. The following example is a snippet from a Heat template that defines parameters of a Controller node: Heat consumes templates included with the director to facilitate the creation of an Overcloud, which includes calling Ironic to power the nodes. We can view the resources (and their status) of an in-progress Overcloud using the standard Heat tools. For example, you can use the Heat tools to display the Overcloud as a nested application stack. Heat provides a comprehensive and powerful syntax for declaring and creating production OpenStack clouds. However, it requires some prior understanding and proficiency for partner integration. Every partner integration use case requires Heat templates. 2.1.3. Puppet Puppet is a configuration management and enforcement tool. It is used as a mechanism to describe the end state of a machine and keep it that way. You define this end state in a Puppet manifest. Puppet supports two models: A standalone mode in which instructions in the form of manifests are ran locally A server mode where it retrieves its manifests from a central server, called a Puppet Master. Administrators make changes in two ways: either uploading new manifests to a node and executing them locally, or in the client/server model by making modifications on the Puppet Master. We use Puppet in many areas of director: We use Puppet on the Undercloud host locally to install and configure packages as per the configuration laid out in undercloud.conf . We inject the openstack-puppet-modules package into the base Overcloud image. These Puppet modules are ready for post-deployment configuration. By default, we create an image that contains all OpenStack services and use it for each node. We provide additional Puppet manifests and parameters to the nodes via Heat, and apply the configuration after the Overcloud's deployment. This includes the services to enable and start and the OpenStack configuration to apply, which are dependent on the node type. We provide Puppet hieradata to the nodes. The Puppet modules and manifests are free from site or node-specific parameters to keep the manifests consistent. The hieradata acts as a form of parameterized values that you can push to a Puppet module and reference in other areas. For example, to reference the MySQL password inside of a manifest, save this information as hieradata and reference it within the manifest. 
Viewing the hieradata: Referencing it in the Puppet manifest: Partner-integrated services that need package installation and service enablement should consider creating Puppet modules to meet their requirements. See Section 4.2, "Obtaining OpenStack Puppet Modules" for information on how to obtain current OpenStack Puppet modules. 2.1.4. TripleO and TripleO Heat Templates As mentioned previously, the director is based on the upstream TripleO project. This project combines a set of OpenStack services that: Store Overcloud images (Glance) Orchestrate the Overcloud (Heat) Provision bare metal machines (Ironic and Nova) TripleO also includes a Heat template collection that defines a Red Hat-supported Overcloud environment. The director, using Heat, reads this template collection and orchestrates the Overcloud stack. 2.1.5. Composable Services Each aspect of Red Hat OpenStack Platform is broken into a composable service. This means you can define different roles using different combinations of services. For example, an administrator might aim to move the networking agents from the default Controller node to a standalone Networker node. For more information about the composable service architecture, see Chapter 6, Composable Services . 2.1.6. Containerized Services and Kolla Each of the main Red Hat OpenStack Platform services runs in containers. This provides a method of keeping each service within its own isolated namespace, separated from the host. This means: The deployment of services is performed by pulling container images from the Red Hat Customer Portal and running them. The management functions, like starting and stopping services, operate through the podman command. Upgrading containers requires pulling new container images and replacing the existing containers with newer versions. Red Hat OpenStack Platform uses a set of containers built and managed with the kolla toolset. 2.1.7. Ansible OpenStack Platform uses Ansible to drive certain functions in relation to composable service upgrades. This includes functions such as starting and stopping certain services and performing database upgrades. These upgrade tasks are defined within composable service templates. | [
"NeutronExternalNetworkBridge: description: Name of bridge used for external network traffic. type: string default: 'br-ex' NeutronBridgeMappings: description: > The OVS logical->physical bridge mappings to use. See the Neutron documentation for details. Defaults to mapping br-ex - the external bridge on hosts - to a physical name 'datacentre' which can be used to create provider networks (and we use this for the default floating network) - if changing this either use different post-install network scripts or be sure to keep 'datacentre' as a mapping network name. type: string default: \"datacentre:br-ex\"",
"grep mysql_root_password hieradata.yaml # View the data in the hieradata file openstack::controller::mysql_root_password: 'redhat123'",
"grep mysql_root_password example.pp # Now referenced in the Puppet manifest mysql_root_password => hiera('openstack::controller::mysql_root_password')"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/partner_integration/architecture |
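To make the Heat template snippet in this chapter concrete, a partner integration usually overrides such parameters through an environment file that is passed at deployment time. The example below is hypothetical: the parameter names are taken from the earlier Controller snippet, while the file path and the bridge mapping values are placeholders to adapt to your environment.

cat > ~/templates/neutron-bridge-mappings.yaml <<'EOF'
parameter_defaults:
  NeutronExternalNetworkBridge: 'br-ex'
  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-isolated'
EOF

# Heat merges the environment file with the core template collection at deploy time
openstack overcloud deploy --templates -e ~/templates/neutron-bridge-mappings.yaml

Keeping site-specific values in small environment files like this follows the same separation that the hieradata example shows for Puppet: the templates stay generic while the parameters carry the per-deployment detail.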
9.5. Pacemaker Support for Docker Containers (Technology Preview) | 9.5. Pacemaker Support for Docker Containers (Technology Preview) Important Pacemaker support for Docker containers is provided for technology preview only. For details on what "technology preview" means, see Technology Preview Features Support Scope . There is one exception to this feature being Technology Preview: As of Red Hat Enterprise Linux 7.4, Red Hat fully supports the usage of Pacemaker bundles for Red Hat Openstack Platform (RHOSP) deployments. Pacemaker supports a special syntax for launching a Docker container with any infrastructure it requires: the bundle . After you have created a Pacemaker bundle, you can create a Pacemaker resource that the bundle encapsulates. Section 9.5.1, "Configuring a Pacemaker Bundle Resource" describes the syntax for the command to create a Pacemaker bundle and provides tables summarizing the parameters you can define for each bundle parameter. Section 9.5.2, "Configuring a Pacemaker Resource in a Bundle" provides information on configuring a resource contained in a Pacemaker bundle. Section 9.5.3, "Limitations of Pacemaker Bundles" notes the limitations of Pacemaker bundles. Section 9.5.4, "Pacemaker Bundle Configuration Example" provides a Pacemaker bundle configuration example. 9.5.1. Configuring a Pacemaker Bundle Resource The syntax for the command to create a Pacemaker bundle for a Docker container is as follows. This command creates a bundle that encapsulates no other resources. For information on creating a cluster resource in a bundle see Section 9.5.2, "Configuring a Pacemaker Resource in a Bundle" . The required bundle_id parameter must be a unique name for the bundle. If the --disabled option is specified, the bundle is not started automatically. If the --wait option is specified, Pacemaker will wait up to n seconds for the bundle to start and then return 0 on success or 1 on error. If n is not specified it defaults to 60 minutes. The following sections describe the parameters you can configure for each element of a Pacemaker bundle. 9.5.1.1. Docker Parameters Table 9.6, "Docker Container Parameters" describes the docker container options you can set for a bundle. Note Before configuring a docker bundle in Pacemaker, you must install Docker and supply a fully configured Docker image on every node allowed to run the bundle. Table 9.6. Docker Container Parameters Field Default Description image Docker image tag (required) replicas Value of promoted-max if that is positive, otherwise 1. A positive integer specifying the number of container instances to launch replicas-per-host 1 A positive integer specifying the number of container instances allowed to run on a single node promoted-max 0 A non-negative integer that, if positive, indicates that the containerized service should be treated as a multistate service, with this many replicas allowed to run the service in the master role network If specified, this will be passed to the docker run command as the network setting for the Docker container. run-command /usr/sbin/pacemaker_remoted if the bundle contains a resource, otherwise none This command will be run inside the container when launching it ("PID 1"). If the bundle contains a resource, this command must start the pacemaker_remoted daemon (but it could, for example, be a script that performs others tasks as well). options Extra command-line options to pass to the docker run command 9.5.1.2. 
Bundle Network Parameters Table 9.7, "Bundle Resource Network Parameters" describes the network options you can set for a bundle. Table 9.7. Bundle Resource Network Parameters Field Default Description add-host TRUE If TRUE, and ip-range-start is used, Pacemaker will automatically ensure that the /etc/hosts file inside the containers has entries for each replica name and its assigned IP. ip-range-start If specified, Pacemaker will create an implicit ocf:heartbeat:IPaddr2 resource for each container instance, starting with this IP address, using as many sequential addresses as were specified as the replicas parameter for the Docker element. These addresses can be used from the host's network to reach the service inside the container, although it is not visible within the container itself. Only IPv4 addresses are currently supported. host-netmask 32 If ip-range-start is specified, the IP addresses are created with this CIDR netmask (as a number of bits). host-interface If ip-range-start is specified, the IP addresses are created on this host interface (by default, it will be determined from the IP address). control-port 3121 If the bundle contains a Pacemaker resource, the cluster will use this integer TCP port for communication with Pacemaker Remote inside the container. Changing this is useful when the container is unable to listen on the default port, which could happen when the container uses the host's network rather than ip-range-start (in which case replicas-per-host must be 1), or when the bundle may run on a Pacemaker Remote node that is already listening on the default port. Any PCMK_remote_port environment variable set on the host or in the container is ignored for bundle connections. When a Pacemaker bundle configuration uses the control-port parameter, then if the bundle has its own IP address the port needs to be open on that IP address on and from all full cluster nodes running corosync. If, instead, the bundle has set the network="host" container parameter, the port needs to be open on each cluster node's IP address from all cluster nodes. Note Replicas are named by the bundle ID plus a dash and an integer counter starting with zero. For example, if a bundle named httpd-bundle has configured replicas=2 , its containers will be named httpd-bundle-0 and httpd-bundle-1 . In addition to the network parameters, you can optionally specify port-map parameters for a bundle. Table 9.8, "Bundle Resource port-map Parameters" describes these port-map parameters. Table 9.8. Bundle Resource port-map Parameters Field Default Description id A unique name for the port mapping (required) port If this is specified, connections to this TCP port number on the host network (on the container's assigned IP address, if ip-range-start is specified) will be forwarded to the container network. Exactly one of port or range must be specified in a port-mapping. internal-port Value of port If port and internal-port are specified, connections to port on the host's network will be forwarded to this port on the container network. range If range is specified, connections to these TCP port numbers (expressed as first_port-last_port ) on the host network (on the container's assigned IP address, if ip-range-start is specified) will be forwarded to the same ports in the container network. Exactly one of port or range must be specified in a port mapping. Note If the bundle contains a resource, Pacemaker will automatically map the control-port , so it is not necessary to specify that port in a port mapping. 9.5.1.3. 
Bundle Storage Parameters You can optionally configure storage-map parameters for a bundle. Table 9.9, "Bundle Resource Storage Mapping Parameters" describes these parameters. Table 9.9. Bundle Resource Storage Mapping Parameters Field Default Description id A unique name for the storage mapping (required) source-dir The absolute path on the host's filesystem that will be mapped into the container. Exactly one of source-dir and source-dir-root parameter must be specified when configuring a storage-map parameter. source-dir-root The start of a path on the host's filesystem that will be mapped into the container, using a different subdirectory on the host for each container instance. The subdirectory will be named with the same name as the bundle name, plus a dash and an integer counter starting with 0. Exactly one source-dir and source-dir-root parameter must be specified when configuring a storage-map parameter. target-dir The path name within the container where the host storage will be mapped (required) options File system mount options to use when mapping the storage As an example of how subdirectories on a host are named using the source-dir-root parameter, if source-dir-root=/path/to/my/directory , target-dir=/srv/appdata , and the bundle is named mybundle with replicas=2 , then the cluster will create two container instances with host names mybundle-0 and mybundle-1 and create two directories on the host running the containers: /path/to/my/directory/mybundle-0 and /path/to/my/directory/mybundle-1 . Each container will be given one of those directories, and any application running inside the container will see the directory as /srv/appdata . Note Pacemaker does not define the behavior if the source directory does not already exist on the host. However, it is expected that the container technology or its resource agent will create the source directory in that case. Note If the bundle contains a Pacemaker resource, Pacemaker will automatically map the equivalent of source-dir=/etc/pacemaker/authkey target-dir=/etc/pacemaker/authkey and source-dir-root=/var/log/pacemaker/bundles target-dir=/var/log into the container, so it is not necessary to specify those paths in when configuring storage-map parameters. Important The PCMK_authkey_location environment variable must not be set to anything other than the default of /etc/pacemaker/authkey on any node in the cluster. 9.5.2. Configuring a Pacemaker Resource in a Bundle A bundle may optionally contain one Pacemaker cluster resource. As with a resource that is not contained in a bundle, the cluster resource may have operations, instance attributes, and metadata attributes defined. If a bundle contains a resource, the container image must include the Pacemaker Remote daemon, and ip-range-start or control-port must be configured in the bundle. Pacemaker will create an implicit ocf:pacemaker:remote resource for the connection, launch Pacemaker Remote within the container, and monitor and manage the resource by means of Pacemaker Remote. If the bundle has more than one container instance (replica), the Pacemaker resource will function as an implicit clone, which will be a multistate clone if the bundle has configured the promoted-max option as greater than zero. You create a resource in a Pacemaker bundle with the pcs resource create command by specifying the bundle parameter for the command and the bundle ID in which to include the resource. 
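As a quick sketch of that syntax (the bundle name, container image, and Dummy resource here are hypothetical placeholders, not part of the reference example):
# create the bundle first; container, network, and storage options are described in Tables 9.6-9.9
pcs resource bundle create my-bundle \
    container docker image=registry.example.com/my-app:latest replicas=2 \
    network ip-range-start=192.168.122.200
# then place a resource inside it by naming the bundle with the 'bundle' keyword
pcs resource create my-app-rsc ocf:heartbeat:Dummy bundle my-bundle
Because ip-range-start is set, Pacemaker can reach Pacemaker Remote inside each replica, which is what allows the contained resource to be monitored and managed.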
For an example of creating a Pacemaker bundle that contains a resource, see Section 9.5.4, "Pacemaker Bundle Configuration Example" . Important Containers in bundles that contain a resource must have an accessible networking environment, so that Pacemaker on the cluster nodes can contact Pacemaker Remote inside the container. For example, the docker option --net=none should not be used with a resource. The default (using a distinct network space inside the container) works in combination with the ip-range-start parameter. If the docker option --net=host is used (making the container share the host's network space), a unique control-port parameter should be specified for each bundle. Any firewall must allow access to the control-port . 9.5.2.1. Node Attributes and Bundle Resources If the bundle contains a cluster resource, the resource agent may want to set node attributes such as master scores. However, with containers, it is not apparent which node should get the attribute. If the container uses shared storage that is the same no matter which node the container is hosted on, then it is appropriate to use the master score on the bundle node itself. On the other hand, if the container uses storage exported from the underlying host, then it may be more appropriate to use the master score on the underlying host. Since this depends on the particular situation, the container-attribute-target resource metadata attribute allows the user to specify which approach to use. If it is set to host , then user-defined node attributes will be checked on the underlying host. If it is anything else, the local node (in this case the bundle node) is used. This behavior applies only to user-defined attributes; the cluster will always check the local node for cluster-defined attributes such as #uname . If container-attribute-target is set to host , the cluster will pass additional environment variables to the resource agent that allow it to set node attributes appropriately. 9.5.2.2. Metadata Attributes and Bundle Resources Any metadata attribute set on a bundle will be inherited by the resource contained in a bundle and any resources implicitly created by Pacemaker for the bundle. This includes options such as priority , target-role , and is-managed . 9.5.3. Limitations of Pacemaker Bundles Pacemaker bundles operate with the following limitations: Bundles may not be included in groups or explicitly cloned with a pcs command. This includes a resource that the bundle contains, and any resources implicitly created by Pacemaker for the bundle. Note, however, that if a bundle is configured with a value of replicas greater than one, the bundle behaves as if it were a clone. Restarting Pacemaker while a bundle is unmanaged or the cluster is in maintenance mode may cause the bundle to fail. Bundles do not have instance attributes, utilization attributes, or operations, although a resource contained in a bundle may have them. A bundle that contains a resource can run on a Pacemaker Remote node only if the bundle uses a distinct control-port . 9.5.4. Pacemaker Bundle Configuration Example The following example creates a Pacemaker bundle resource with a bundle ID of httpd-bundle that contains an ocf:heartbeat:apache resource with a resource ID of httpd . This procedure requires the following prerequisite configuration: Docker has been installed and enabled on every node in the cluster. There is an existing Docker image, named pcmktest:http The container image includes the Pacemaker Remote daemon. 
The container image includes a configured Apache web server. Every node in the cluster has directories /var/local/containers/httpd-bundle-0 , /var/local/containers/httpd-bundle-1 , and /var/local/containers/httpd-bundle-2 , containing an index.html file for the web server root. In production, a single, shared document root would be more likely, but for the example this configuration allows you to make the index.html file on each host different so that you can connect to the web server and verify which index.html file is being served. This procedure configures the following parameters for the Pacemaker bundle: The bundle ID is httpd-bundle . The previously-configured Docker container image is pcmktest:http . This example will launch three container instances. This example will pass the command-line option --log-driver=journald to the docker run command. This parameter is not required, but is included to show how to pass an extra option to the docker command. A value of --log-driver=journald means that the system logs inside the container will be logged in the underlying host's systemd journal. Pacemaker will create three sequential implicit ocf:heartbeat:IPaddr2 resources, one for each container image, starting with the IP address 192.168.122.131. The IP addresses are created on the host interface eth0. The IP addresses are created with a CIDR netmask of 24. This example creates a port map ID of httpd-port ; connections to port 80 on the container's assigned IP address will be forwarded to the container network. This example creates a storage map ID of httpd-root . For this storage mapping: The value of source-dir-root is /var/local/containers , which specifies the start of the path on the host's file system that will be mapped into the container, using a different subdirectory on the host for each container instance. The value of target-dir is /var/www/html , which specifies the path name within the container where the host storage will be mapped. The file system rw mount option will be used when mapping the storage. Since this example container includes a resource, Pacemaker will automatically map the equivalent of source-dir=/etc/pacemaker/authkey in the container, so you do not need to specify that path in the storage mapping. In this example, the existing cluster configuration is put into a temporary file named tmp-cib.xml , which is then copied to a file named tmp-cib.xml.deltasrc . All modifications to the cluster configuration are made to the tmp-cib.xml file. When the updates are complete, this procedure uses the diff-against option of the pcs cluster cib-push command so that only the updates to the configuration file are pushed to the active configuration file. | [
"pcs resource bundle create bundle_id container docker [ container_options ] [network network_options ] [port-map port_options ]... [storage-map storage_options ]... [meta meta_options ] [--disabled] [--wait[=n]]",
"pcs cluster cib tmp-cib.xml cp tmp-cib.xml tmp-cib.xml.deltasrc pcs -f tmp.cib.xml resource bundle create httpd-bundle container docker image=pcmktest:http replicas=3 options=--log-driver=journald network ip-range-start=192.168.122.131 host-interface=eth0 host-netmask=24 port-map id=httpd-port port=80 storage-map id=httpd-root source-dir-root=/var/local/containers target-dir=/var/www/html options=rw pcs -f tmp-cib.xml resource create httpd ocf:heartbeat:apache statusurl=http://localhost/server-status bundle httpd-bundle pcs cluster cib-push tmp-cib.xml diff-against=tmp-cib.xml.deltasrc"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-containers-HAAR |
Chapter 1. Preparing to deploy OpenShift Data Foundation | Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Verify the rotational flag on your VMDKs before deploying object storage devices (OSDs) on them. For more information, see the knowledgebase article Override device rotational flag in ODF environment . Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Token authentication method . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. 
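Picking up the VMDK point from the checklist above, the rotational flag can be confirmed on each node before creating OSDs. A quick sketch; device names will differ per system:
# ROTA=1 marks a device the kernel sees as rotational; see the linked knowledgebase article for how to override it
lsblk -d -o NAME,ROTA,SIZE,TYPE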
Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached-storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. Note Make sure that the devices have a unique by-id device name for each available raw block device. The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. 
Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_on_vmware_vsphere/preparing_to_deploy_openshift_data_foundation |
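The local-device requirements in section 1.1 can be sanity-checked on each selected node before installation. A sketch, with /dev/sdX standing in for a candidate disk:
ls -l /dev/disk/by-id/        # every candidate raw block device should have a stable, unique by-id name
lsblk -f /dev/sdX             # FSTYPE should be empty for a raw device
wipefs /dev/sdX               # lists any leftover signatures without removing them
pvs; vgs; lvs                 # confirm no PVs, VGs, or LVs still reference the disk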
15.3. Live Migration and Red Hat Enterprise Linux Version Compatibility | 15.3. Live Migration and Red Hat Enterprise Linux Version Compatibility Live Migration is supported as shown in Table 15.1, "Live Migration Compatibility" : Table 15.1. Live Migration Compatibility Migration Method Release Type Example Live Migration Support Notes Forward Major release 6.5+ → 7.x Fully supported Any issues should be reported Backward Major release 7.x → 6.y Not supported Forward Minor release 7.x → 7.y (7.0 → 7.1) Fully supported Any issues should be reported Backward Minor release 7.y → 7.x (7.1 → 7.0) Fully supported Any issues should be reported Troubleshooting problems with migration Issues with the migration protocol - If backward migration ends with "unknown section error", repeating the migration process can repair the issue as it may be a transient error. If not, report the problem. Issues with audio devices - When migrating from Red Hat Enterprise Linux 6.x to Red Hat Enterprise Linux 7.y, note that the es1370 audio card is no longer supported. Use the ac97 audio card instead. Issues with network cards - When migrating from Red Hat Enterprise Linux 6.x to Red Hat Enterprise Linux 7.y, note that the pcnet and ne2k_pci network cards are no longer supported. Use the virtio-net network device instead. Configuring Network Storage Configure shared storage and install a guest virtual machine on the shared storage. Alternatively, use the NFS example in Section 15.4, "Shared Storage Example: NFS for a Simple Migration" | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-kvm_live_migration-live_migration_and_red_hat_enterprise_linux_version_compatibility_ |
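For context, a forward live migration over shared storage such as the NFS example in Section 15.4 is typically driven with virsh. A sketch only; the guest name and destination host are hypothetical:
virsh migrate --live --verbose guest1 qemu+ssh://destination.example.com/system
virsh list --all              # run on the destination host to confirm guest1 is now running there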
Chapter 4. Helm CLI | Chapter 4. Helm CLI 4.1. Getting started with Helm 3 4.1.1. Understanding Helm Helm is a software package manager that simplifies deployment of applications and services to OpenShift Container Platform clusters. Helm uses a packaging format called charts . A Helm chart is a collection of files that describes the OpenShift Container Platform resources. A running instance of the chart in a cluster is called a release . A new release is created every time a chart is installed on the cluster. Each time a chart is installed, or a release is upgraded or rolled back, an incremental revision is created. 4.1.1.1. Key features Helm provides the ability to: Search through a large collection of charts stored in the chart repository. Modify existing charts. Create your own charts with OpenShift Container Platform or Kubernetes resources. Package and share your applications as charts. 4.1.2. Installing Helm The following section describes how to install Helm on different platforms using the CLI. You can also find the URL to the latest binaries from the OpenShift Container Platform web console by clicking the ? icon in the upper-right corner and selecting Command Line Tools . Prerequisites You have installed Go, version 1.13 or higher. 4.1.2.1. On Linux Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 4.1.2.2. On Windows 7/8 Download the latest .exe file and put in a directory of your preference. Right click Start and click Control Panel . Select System and Security and then click System . From the menu on the left, select Advanced systems settings and click Environment Variables at the bottom. Select Path from the Variable section and click Edit . Click New and type the path to the folder with the .exe file into the field or click Browse and select the directory, and click OK . 4.1.2.3. On Windows 10 Download the latest .exe file and put in a directory of your preference. Click Search and type env or environment . Select Edit environment variables for your account . Select Path from the Variable section and click Edit . Click New and type the path to the directory with the exe file into the field or click Browse and select the directory, and click OK . 4.1.2.4. On MacOS Download the Helm binary and add it to your path: # curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm Make the binary file executable: # chmod +x /usr/local/bin/helm Check the installed version: USD helm version Example output version.BuildInfo{Version:"v3.0", GitCommit:"b31719aab7963acf4887a1c1e6d5e53378e34d93", GitTreeState:"clean", GoVersion:"go1.13.4"} 4.1.3. Installing a Helm chart on an OpenShift Container Platform cluster Prerequisites You have a running OpenShift Container Platform cluster and you have logged into it. You have installed Helm. 
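Before starting the procedure, both prerequisites can be confirmed from the command line. A sketch; the exact output depends on your cluster and Helm build:
helm version                  # prints the installed Helm client version
oc whoami                     # confirms you are logged in to the cluster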
Procedure Create a new project: USD oc new-project mysql Add a repository of Helm charts to your local Helm client: USD helm repo add stable https://kubernetes-charts.storage.googleapis.com/ Example output "stable" has been added to your repositories Update the repository: USD helm repo update Install an example MySQL chart: USD helm install example-mysql stable/mysql Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-mysql mysql 1 2019-12-05 15:06:51.379134163 -0500 EST deployed mysql-1.5.0 5.7.27 4.1.4. Creating a custom Helm chart on OpenShift Container Platform Procedure Create a new project: USD oc new-project nodejs-ex-k Download an example Node.js chart that contains OpenShift Container Platform objects: USD git clone https://github.com/redhat-developer/redhat-helm-charts Go to the directory with the sample chart: USD cd redhat-helm-charts/alpha/nodejs-ex-k/ Edit the Chart.yaml file and add a description of your chart: apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4 1 The chart API version. It should be v2 for Helm charts that require at least Helm 3. 2 The name of your chart. 3 The description of your chart. 4 The URL to an image to be used as an icon. Verify that the chart is formatted properly: USD helm lint Example output [INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed Navigate to the directory level: USD cd .. Install the chart: USD helm install nodejs-chart nodejs-ex-k Verify that the chart has installed successfully: USD helm list Example output NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0 4.2. Configuring custom Helm chart repositories The Developer Catalog , in the Developer perspective of the web console, displays the Helm charts available in the cluster. By default, it lists the Helm charts from the Red Hat Helm chart repository. For a list of the charts see the Red Hat Helm index file . As a cluster administrator, you can add multiple Helm chart repositories, apart from the default one, and display the Helm charts from these repositories in the Developer Catalog . 4.2.1. Adding custom Helm chart repositories As a cluster administrator, you can add custom Helm chart repositories to your cluster and enable access to the Helm charts from these repositories in the Developer Catalog . Procedure To add a new Helm Chart Repository, you must add the Helm Chart Repository custom resource (CR) to your cluster. Sample Helm Chart Repository CR apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url> For example, to add an Azure sample chart repository, run: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF Navigate to the Developer Catalog in the web console to verify that the Helm charts from the chart repository are displayed. For example, use the Chart repositories filter to search for a Helm chart from the repository. Figure 4.1. 
Chart repositories filter Note If a cluster administrator removes all of the chart repositories, then you cannot view the Helm option in the +Add view, Developer Catalog , and left navigation panel. 4.2.2. Creating credentials and CA certificates to add Helm chart repositories Some Helm chart repositories need credentials and custom certificate authority (CA) certificates to connect to it. You can use the web console as well as the CLI to add credentials and certificates. Procedure To configure the credentials and certificates, and then add a Helm chart repository using the CLI: In the openshift-config namespace, create a ConfigMap object with a custom CA certificate in PEM encoded format, and store it under the ca-bundle.crt key within the config map: USD oc create configmap helm-ca-cert \ --from-file=ca-bundle.crt=/path/to/certs/ca.crt \ -n openshift-config In the openshift-config namespace, create a Secret object to add the client TLS configurations: USD oc create secret generic helm-tls-configs \ --from-file=tls.crt=/path/to/certs/client.crt \ --from-file=tls.key=/path/to/certs//client.key \ -n openshift-config Note that the client certificate and key must be in PEM encoded format and stored under the keys tls.crt and tls.key , respectively. Add the Helm repository as follows: USD cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF The ConfigMap and Secret are consumed in the HelmChartRepository CR using the tlsConfig and ca fields. These certificates are used to connect to the Helm repository URL. By default, all authenticated users have access to all configured charts. However, for chart repositories where certificates are needed, you must provide users with read access to the helm-ca-cert config map and helm-tls-configs secret in the openshift-config namespace, as follows: USD cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["helm-ca-cert"] verbs: ["get"] - apiGroups: [""] resources: ["secrets"] resourceNames: ["helm-tls-configs"] verbs: ["get"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF 4.3. Disabling Helm hart repositories As a cluster administrator, you can remove Helm chart repositories in your cluster so they are no longer visible in the Developer Catalog . 4.3.1. Disabling Helm Chart repository in the cluster You can disable Helm Charts in the catalog by adding the disabled property in the HelmChartRepository custom resource. Procedure To disable a Helm Chart repository by using CLI, add the disabled: true flag to the custom resource. For example, to remove an Azure sample chart repository, run: To disable a recently added Helm Chart repository by using Web Console: Go to Custom Resource Definitions and search for the HelmChartRepository custom resource. Go to Instances , find the repository you want to disable, and click its name. 
Go to the YAML tab, add the disabled: true flag in the spec section, and click Save . Example The repository is now disabled and will not appear in the catalog. | [
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-linux-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"curl -L https://mirror.openshift.com/pub/openshift-v4/clients/helm/latest/helm-darwin-amd64 -o /usr/local/bin/helm",
"chmod +x /usr/local/bin/helm",
"helm version",
"version.BuildInfo{Version:\"v3.0\", GitCommit:\"b31719aab7963acf4887a1c1e6d5e53378e34d93\", GitTreeState:\"clean\", GoVersion:\"go1.13.4\"}",
"oc new-project mysql",
"helm repo add stable https://kubernetes-charts.storage.googleapis.com/",
"\"stable\" has been added to your repositories",
"helm repo update",
"helm install example-mysql stable/mysql",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION example-mysql mysql 1 2019-12-05 15:06:51.379134163 -0500 EST deployed mysql-1.5.0 5.7.27",
"oc new-project nodejs-ex-k",
"git clone https://github.com/redhat-developer/redhat-helm-charts",
"cd redhat-helm-charts/alpha/nodejs-ex-k/",
"apiVersion: v2 1 name: nodejs-ex-k 2 description: A Helm chart for OpenShift 3 icon: https://static.redhat.com/libs/redhat/brand-assets/latest/corp/logo.svg 4",
"helm lint",
"[INFO] Chart.yaml: icon is recommended 1 chart(s) linted, 0 chart(s) failed",
"cd ..",
"helm install nodejs-chart nodejs-ex-k",
"helm list",
"NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION nodejs-chart nodejs-ex-k 1 2019-12-05 15:06:51.379134163 -0500 EST deployed nodejs-0.1.0 1.16.0",
"apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <name> spec: # optional name that might be used by console # name: <chart-display-name> connectionConfig: url: <helm-chart-repository-url>",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: name: azure-sample-repo connectionConfig: url: https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs EOF",
"oc create configmap helm-ca-cert --from-file=ca-bundle.crt=/path/to/certs/ca.crt -n openshift-config",
"oc create secret generic helm-tls-configs --from-file=tls.crt=/path/to/certs/client.crt --from-file=tls.key=/path/to/certs//client.key -n openshift-config",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: <helm-repository> spec: name: <helm-repository> connectionConfig: url: <URL for the Helm repository> tlsConfig: name: helm-tls-configs ca: name: helm-ca-cert EOF",
"cat <<EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"helm-ca-cert\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"secrets\"] resourceNames: [\"helm-tls-configs\"] verbs: [\"get\"] --- kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: openshift-config name: helm-chartrepos-tls-conf-viewer subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: 'system:authenticated' roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: helm-chartrepos-tls-conf-viewer EOF",
"cat <<EOF | oc apply -f - apiVersion: helm.openshift.io/v1beta1 kind: HelmChartRepository metadata: name: azure-sample-repo spec: connectionConfig: url:https://raw.githubusercontent.com/Azure-Samples/helm-charts/master/docs disabled: true EOF",
"spec: connectionConfig: url: <url-of-the-repositoru-to-be-disabled> disabled: true"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/cli_tools/helm-cli |
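The release and revision model described at the start of this chapter can be exercised against the example nodejs-chart release. A sketch, assuming the chart from section 4.1.4 is still installed:
helm upgrade nodejs-chart nodejs-ex-k    # records revision 2 of the release
helm history nodejs-chart                # lists every revision with its status and chart version
helm rollback nodejs-chart 1             # returns to revision 1, which is itself recorded as a new revision
helm list                                # shows the current revision number for each release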
Part I. Vulnerability reporting with Clair on Red Hat Quay overview | Part I. Vulnerability reporting with Clair on Red Hat Quay overview The content in this guide explains the key purposes and concepts of Clair on Red Hat Quay. It also contains information about Clair releases and the location of official Clair containers. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/vulnerability_reporting_with_clair_on_red_hat_quay/vulnerability-reporting-clair-quay-overview |
Working with accelerators | Working with accelerators Red Hat OpenShift AI Cloud Service 1 Working with accelerators from Red Hat OpenShift AI Cloud Service | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_accelerators/index |
Chapter 29. Triggering scripts for cluster events | Chapter 29. Triggering scripts for cluster events A Pacemaker cluster is an event-driven system, where an event might be a resource or node failure, a configuration change, or a resource starting or stopping. You can configure Pacemaker cluster alerts to take some external action when a cluster event occurs by means of alert agents, which are external programs that the cluster calls in the same manner as the cluster calls resource agents to handle resource configuration and operation. The cluster passes information about the event to the agent by means of environment variables. Agents can do anything with this information, such as send an email message or log to a file or update a monitoring system. Pacemaker provides several sample alert agents, which are installed in /usr/share/pacemaker/alerts by default. These sample scripts may be copied and used as is, or they may be used as templates to be edited to suit your purposes. Refer to the source code of the sample agents for the full set of attributes they support. If the sample alert agents do not meet your needs, you can write your own alert agents for a Pacemaker alert to call. 29.1. Installing and configuring sample alert agents When you use one of the sample alert agents, you should review the script to ensure that it suits your needs. These sample agents are provided as a starting point for custom scripts for specific cluster environments. Note that while Red Hat supports the interfaces that the alert agents scripts use to communicate with Pacemaker, Red Hat does not provide support for the custom agents themselves. To use one of the sample alert agents, you must install the agent on each node in the cluster. For example, the following command installs the alert_file.sh.sample script as alert_file.sh . After you have installed the script, you can create an alert that uses the script. The following example configures an alert that uses the installed alert_file.sh alert agent to log events to a file. Alert agents run as the user hacluster , which has a minimal set of permissions. This example creates the log file pcmk_alert_file.log that will be used to record the events. It then creates the alert agent and adds the path to the log file as its recipient. The following example installs the alert_snmp.sh.sample script as alert_snmp.sh and configures an alert that uses the installed alert_snmp.sh alert agent to send cluster events as SNMP traps. By default, the script will send all events except successful monitor calls to the SNMP server. This example configures the timestamp format as a meta option. After configuring the alert, this example configures a recipient for the alert and displays the alert configuration. The following example installs the alert_smtp.sh agent and then configures an alert that uses the installed alert agent to send cluster events as email messages. After configuring the alert, this example configures a recipient and displays the alert configuration. 29.2. Creating a cluster alert The following command creates a cluster alert. The options that you configure are agent-specific configuration values that are passed to the alert agent script at the path you specify as additional environment variables. If you do not specify a value for id , one will be generated. Multiple alert agents may be configured; the cluster will call all of them for each event. Alert agents will be called only on cluster nodes. 
They will be called for events involving Pacemaker Remote nodes, but they will never be called on those nodes. The following example creates a simple alert that will call myscript.sh for each event. 29.3. Displaying, modifying, and removing cluster alerts There are a variety of pcs commands you can use to display, modify, and remove cluster alerts. The following command shows all configured alerts along with the values of the configured options. The following command updates an existing alert with the specified alert-id value. The following command removes an alert with the specified alert-id value. Alternately, you can run the pcs alert delete command, which is identical to the pcs alert remove command. Both the pcs alert delete and the pcs alert remove commands allow you to specify more than one alert to be deleted. 29.4. Configuring cluster alert recipients Usually alerts are directed towards a recipient. Thus each alert may be additionally configured with one or more recipients. The cluster will call the agent separately for each recipient. The recipient may be anything the alert agent can recognize: an IP address, an email address, a file name, or whatever the particular agent supports. The following command adds a new recipient to the specified alert. The following command updates an existing alert recipient. The following command removes the specified alert recipient. Alternately, you can run the pcs alert recipient delete command, which is identical to the pcs alert recipient remove command. Both the pcs alert recipient remove and the pcs alert recipient delete commands allow you to remove more than one alert recipient. The following example command adds the alert recipient my-alert-recipient with a recipient ID of my-recipient-id to the alert my-alert . This will configure the cluster to call the alert script that has been configured for my-alert for each event, passing the recipient some-address as an environment variable. 29.5. Alert meta options As with resource agents, meta options can be configured for alert agents to affect how Pacemaker calls them. The following table describes the alert meta options. Meta options can be configured per alert agent as well as per recipient. Table 29.1. Alert Meta Options Meta-Attribute Default Description enabled true (RHEL 8.9 and later) If set to false for an alert, the alert will not be used. If set to true for an alert and false for a particular recipient of that alert, that recipient will not be used. timestamp-format %H:%M:%S.%06N Format the cluster will use when sending the event's timestamp to the agent. This is a string as used with the date (1) command. timeout 30s If the alert agent does not complete within this amount of time, it will be terminated. The following example configures an alert that calls the script myscript.sh and then adds two recipients to the alert. The first recipient has an ID of my-alert-recipient1 and the second recipient has an ID of my-alert-recipient2 . The script will get called twice for each event, with each call using a 15-second timeout. One call will be passed to the recipient [email protected] with a timestamp in the format %D %H:%M, while the other call will be passed to the recipient [email protected] with a timestamp in the format %c. 29.6. Cluster alert configuration command examples The following sequential examples show some basic alert configuration commands to show the format to use to create alerts, add recipients, and display the configured alerts. 
Note that while you must install the alert agents themselves on each node in a cluster, you need to run the pcs commands only once. The following commands create a simple alert, add two recipients to the alert, and display the configured values. Since no alert ID value is specified, the system creates an alert ID value of alert . The first recipient creation command specifies a recipient of rec_value . Since this command does not specify a recipient ID, the value of alert-recipient is used as the recipient ID. The second recipient creation command specifies a recipient of rec_value2 . This command specifies a recipient ID of my-recipient for the recipient. The following commands add a second alert and a recipient for that alert. The alert ID for the second alert is my-alert and the recipient value is my-other-recipient . Since no recipient ID is specified, the system provides a recipient id of my-alert-recipient . The following commands modify the alert values for the alert my-alert and for the recipient my-alert-recipient . The following command removes the recipient my-recipient from alert . The following command removes myalert from the configuration. 29.7. Writing a cluster alert agent There are three types of Pacemaker cluster alerts: node alerts, fencing alerts, and resource alerts. The environment variables that are passed to the alert agents can differ, depending on the type of alert. The following table describes the environment variables that are passed to alert agents and specifies when the environment variable is associated with a specific alert type. Table 29.2. Environment Variables Passed to Alert Agents Environment Variable Description CRM_alert_kind The type of alert (node, fencing, or resource) CRM_alert_version The version of Pacemaker sending the alert CRM_alert_recipient The configured recipient CRM_alert_node_sequence A sequence number increased whenever an alert is being issued on the local node, which can be used to reference the order in which alerts have been issued by Pacemaker. An alert for an event that happened later in time reliably has a higher sequence number than alerts for earlier events. Be aware that this number has no cluster-wide meaning. CRM_alert_timestamp A timestamp created prior to executing the agent, in the format specified by the timestamp-format meta option. This allows the agent to have a reliable, high-precision time of when the event occurred, regardless of when the agent itself was invoked (which could potentially be delayed due to system load or other circumstances). CRM_alert_node Name of affected node CRM_alert_desc Detail about event. For node alerts, this is the node's current state (member or lost). For fencing alerts, this is a summary of the requested fencing operation, including origin, target, and fencing operation error code, if any. For resource alerts, this is a readable string equivalent of CRM_alert_status .
CRM_alert_nodeid ID of node whose status changed (provided with node alerts only) CRM_alert_task The requested fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rc The numerical return code of the fencing or resource operation (provided with fencing and resource alerts only) CRM_alert_rsc The name of the affected resource (resource alerts only) CRM_alert_interval The interval of the resource operation (resource alerts only) CRM_alert_target_rc The expected numerical return code of the operation (resource alerts only) CRM_alert_status A numerical code used by Pacemaker to represent the operation result (resource alerts only) When writing an alert agent, you must take the following concerns into account. Alert agents may be called with no recipient (if none is configured), so the agent must be able to handle this situation, even if it only exits in that case. Users may modify the configuration in stages, and add a recipient later. If more than one recipient is configured for an alert, the alert agent will be called once per recipient. If an agent is not able to run concurrently, it should be configured with only a single recipient. The agent is free, however, to interpret the recipient as a list. When a cluster event occurs, all alerts are fired off at the same time as separate processes. Depending on how many alerts and recipients are configured and on what is done within the alert agents, a significant load burst may occur. The agent could be written to take this into consideration, for example by queueing resource-intensive actions into some other instance, instead of directly executing them. Alert agents are run as the hacluster user, which has a minimal set of permissions. If an agent requires additional privileges, it is recommended to configure sudo to allow the agent to run the necessary commands as another user with the appropriate privileges. Take care to validate and sanitize user-configured parameters, such as CRM_alert_timestamp (whose content is specified by the user-configured timestamp-format ), CRM_alert_recipient , and all alert options. This is necessary to protect against configuration errors. In addition, if some user can modify the CIB without having hacluster -level access to the cluster nodes, this is a potential security concern as well, and you should avoid the possibility of code injection. If a cluster contains resources with operations for which the on-fail parameter is set to fence , there will be multiple fence notifications on failure, one for each resource for which this parameter is set plus one additional notification. Both the pacemaker-fenced and pacemaker-controld will send notifications. Pacemaker performs only one actual fence operation in this case, however, no matter how many notifications are sent. Note The alerts interface is designed to be backward compatible with the external scripts interface used by the ocf:pacemaker:ClusterMon resource. To preserve this compatibility, the environment variables passed to alert agents are available prepended with CRM_notify_ as well as CRM_alert_ . One break in compatibility is that the ClusterMon resource ran external scripts as the root user, while alert agents are run as the hacluster user. | [
"install --mode=0755 /usr/share/pacemaker/alerts/alert_file.sh.sample /var/lib/pacemaker/alert_file.sh",
"touch /var/log/pcmk_alert_file.log chown hacluster:haclient /var/log/pcmk_alert_file.log chmod 600 /var/log/pcmk_alert_file.log pcs alert create id=alert_file description=\"Log events to a file.\" path=/var/lib/pacemaker/alert_file.sh pcs alert recipient add alert_file id=my-alert_logfile value=/var/log/pcmk_alert_file.log",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_snmp.sh.sample /var/lib/pacemaker/alert_snmp.sh pcs alert create id=snmp_alert path=/var/lib/pacemaker/alert_snmp.sh meta timestamp-format=\"%Y-%m-%d,%H:%M:%S.%01N\" pcs alert recipient add snmp_alert value=192.168.1.2 pcs alert Alerts: Alert: snmp_alert (path=/var/lib/pacemaker/alert_snmp.sh) Meta options: timestamp-format=%Y-%m-%d,%H:%M:%S.%01N. Recipients: Recipient: snmp_alert-recipient (value=192.168.1.2)",
"install --mode=0755 /usr/share/pacemaker/alerts/alert_smtp.sh.sample /var/lib/pacemaker/alert_smtp.sh pcs alert create id=smtp_alert path=/var/lib/pacemaker/alert_smtp.sh options [email protected] pcs alert recipient add smtp_alert [email protected] pcs alert Alerts: Alert: smtp_alert (path=/var/lib/pacemaker/alert_smtp.sh) Options: [email protected] Recipients: Recipient: smtp_alert-recipient ([email protected])",
"pcs alert create path= path [id= alert-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert create id=my_alert path=/path/to/myscript.sh",
"pcs alert [config|show]",
"pcs alert update alert-id [path= path ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert remove alert-id",
"pcs alert recipient add alert-id value= recipient-value [id= recipient-id ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient update recipient-id [value= recipient-value ] [description= description ] [options [ option = value ]...] [meta [ meta-option = value ]...]",
"pcs alert recipient remove recipient-id",
"pcs alert recipient add my-alert value=my-alert-recipient id=my-recipient-id options value=some-address",
"pcs alert create id=my-alert path=/path/to/myscript.sh meta timeout=15s pcs alert recipient add my-alert [email protected] id=my-alert-recipient1 meta timestamp-format=\"%D %H:%M\" pcs alert recipient add my-alert [email protected] id=my-alert-recipient2 meta timestamp-format=\"%c\"",
"pcs alert create path=/my/path pcs alert recipient add alert value=rec_value pcs alert recipient add alert value=rec_value2 id=my-recipient pcs alert config Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2)",
"pcs alert create id=my-alert path=/path/to/script description=alert_description options option1=value1 opt=val meta timeout=50s timestamp-format=\"%H%B%S\" pcs alert recipient add my-alert value=my-other-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=value1 Meta options: timestamp-format=%H%B%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient)",
"pcs alert update my-alert options option1=newvalue1 meta timestamp-format=\"%H%M%S\" pcs alert recipient update my-alert-recipient options option1=new meta timeout=60s pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Recipient: my-recipient (value=rec_value2) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=%H%M%S timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s",
"pcs alert recipient remove my-recipient pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value) Alert: my-alert (path=/path/to/script) Description: alert_description Options: opt=val option1=newvalue1 Meta options: timestamp-format=\"%M%B%S\" timeout=50s Recipients: Recipient: my-alert-recipient (value=my-other-recipient) Options: option1=new Meta options: timeout=60s",
"pcs alert remove myalert pcs alert Alerts: Alert: alert (path=/my/path) Recipients: Recipient: alert-recipient (value=rec_value)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-pacemaker-alert-agents_configuring-and-managing-high-availability-clusters |
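To ground section 29.7, below is a minimal custom alert agent sketch. It is an illustration rather than a supported agent: the default log path is an assumption (it must be writable by the hacluster user), and it only uses the CRM_alert_* variables listed in Table 29.2.
#!/bin/sh
# minimal alert agent: tolerate a missing recipient and append one line per event
logfile="${CRM_alert_recipient:-/var/log/pcmk_simple_alert.log}"
case "$CRM_alert_kind" in
    node)     detail="node ${CRM_alert_node} is ${CRM_alert_desc}" ;;
    fencing)  detail="fencing event: ${CRM_alert_desc} (rc=${CRM_alert_rc})" ;;
    resource) detail="resource ${CRM_alert_rsc} ${CRM_alert_task} rc=${CRM_alert_rc}" ;;
    *)        detail="unhandled alert kind ${CRM_alert_kind}" ;;
esac
printf '%s %s\n' "${CRM_alert_timestamp:-unset}" "$detail" >> "$logfile" || exit 0
exit 0
As with the sample agents, the script would be installed with mode 0755 under /var/lib/pacemaker on every node, registered with pcs alert create path=/var/lib/pacemaker/<script>, and then given a recipient with pcs alert recipient add.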
5.5. Configuring IPv6 Settings | 5.5. Configuring IPv6 Settings To configure IPv6 settings, follow the procedure described in Section 5.4, "Configuring IPv4 Settings" and click the IPv6 menu entry. Method Ignore - Choose this option if you want to ignore IPv6 settings for this connection. Automatic - Choose this option to use SLAAC to create an automatic, stateless configuration based on the hardware address and router advertisements (RA). Automatic, addresses only - Choose this option if the network you are connecting to uses router advertisements (RA) to create an automatic, stateless configuration, but you want to assign DNS servers manually. Automatic, DHCP only - Choose this option to not use RA, but request information from DHCPv6 directly to create a stateful configuration. Manual - Choose this option if you want to assign IP addresses manually. Link-Local Only - Choose this option if the network you are connecting to does not have a DHCP server and you do not want to assign IP addresses manually. Random addresses will be assigned as per RFC 4862 with prefix FE80::0 . Addresses DNS servers - Enter a comma separated list of DNS servers. Search domains - Enter a comma separated list of DNS search domains. If you need to configure static routes, click the Routes button; for more details on configuration options, see Section 4.3, "Configuring Static Routes with GUI" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_IPv6_Settings |
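The same IPv6 settings can also be applied without the GUI through nmcli. This is a sketch using a hypothetical connection name and documentation-prefix addresses, not values taken from the procedure above:
nmcli connection modify enp1s0 ipv6.method manual \
    ipv6.addresses 2001:db8::10/64 ipv6.gateway 2001:db8::1 \
    ipv6.dns 2001:db8::53 ipv6.dns-search example.com
nmcli connection up enp1s0    # re-activate the connection so the new settings take effect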
Chapter 10. Disconnected environment | Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/disconnected-environment_rhodf |
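One possible way to carry the package list above into a mirroring configuration is an oc-mirror ImageSetConfiguration similar to the ones used elsewhere in this collection. This is a sketch only; the catalog version, the local path, and the omission of explicit channels are assumptions to adjust for your cluster.

    kind: ImageSetConfiguration
    apiVersion: mirror.openshift.io/v1alpha2
    storageConfig:
      local:
        path: /home/user/mirror          # local mirror workspace, placeholder
    mirror:
      operators:
      - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.15   # match your OpenShift version
        packages:
        - name: ocs-operator
        - name: odf-operator
        - name: mcg-operator
        - name: odf-csi-addons-operator
        - name: odr-cluster-operator
        - name: odr-hub-operator
        - name: local-storage-operator        # optional, local storage deployments only
        - name: odf-multicluster-orchestrator # optional, Regional-DR only

As noted above, name the resulting CatalogSource redhat-operators.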
3.8. net_cls | 3.8. net_cls The net_cls subsystem tags network packets with a class identifier (classid) that allows the Linux traffic controller ( tc ) to identify packets originating from a particular cgroup. The traffic controller can be configured to assign different priorities to packets from different cgroups. net_cls.classid net_cls.classid contains a single value that indicates a traffic control handle . The value of classid read from the net_cls.classid file is presented in the decimal format while the value to be written to the file is expected in the hexadecimal format. For example, 0x100001 represents the handle conventionally written as 10:1 in the format used by the ip utility. In the net_cls.classid file, it would be represented by the number 1048577 . The format for these handles is: 0x AAAA BBBB , where AAAA is the major number in hexadecimal and BBBB is the minor number in hexadecimal. You can omit any leading zeroes; 0x10001 is the same as 0x00010001 , and represents 1:1 . The following is an example of setting a 10:1 handle in the net_cls.classid file: Refer to the man page for tc to learn how to configure the traffic controller to use the handles that the net_cls adds to network packets. | [
"~]# echo 0x100001 > /cgroup/net_cls/red/net_cls.classid ~]# cat /cgroup/net_cls/red/net_cls.classid 1048577"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-net_cls |
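To show how the classid written above is consumed, the following sketch pairs it with the kernel's cgroup packet classifier; the interface name eth0, the HTB rate, and the 10: handle numbers are illustrative assumptions.

    # Build a 10: qdisc with class 10:1, then let the cgroup classifier steer
    # packets tagged with net_cls.classid 0x100001 (that is, 10:1) into it.
    tc qdisc add dev eth0 root handle 10: htb
    tc class add dev eth0 parent 10: classid 10:1 htb rate 10mbit
    tc filter add dev eth0 parent 10: protocol ip prio 10 handle 1: cgroup

Any process placed in the /cgroup/net_cls/red cgroup then has its traffic shaped by class 10:1.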
Chapter 5. Ceph Object Gateway and the IAM API The Ceph Object Gateway supports RESTful management of account users, roles, and associated policies. This REST API is served by the same HTTP endpoint as the Ceph Object Gateway S3 API. By default, only Account Root Users are authorized to use the IAM API, and they can only see the resources under their own account. The account root user can use policies to delegate these permissions to other users or roles in the account. 5.1. Feature support The following tables describe the currently supported IAM actions. Table 5.1. Users: CreateUser, GetUser, UpdateUser, DeleteUser, ListUsers, CreateAccessKey, UpdateAccessKey, DeleteAccessKey, ListAccessKeys, PutUserPolicy, GetUserPolicy, DeleteUserPolicy, ListUserPolicies, AttachUserPolicy, DetachUserPolicy, ListAttachedUserPolicies. Table 5.2. Groups: CreateGroup, GetGroup, UpdateGroup, DeleteGroup, ListGroups, AddUserToGroup, RemoveUserFromGroup, ListGroupsForUser, PutGroupPolicy, GetGroupPolicy, DeleteGroupPolicy, ListGroupPolicies, AttachGroupPolicy, DetachGroupPolicy, ListAttachedGroupPolicies. Table 5.3. Roles: CreateRole, GetRole, UpdateRole, UpdateAssumeRolePolicy, DeleteRole, ListRoles, TagRole, UntagRole, ListRoleTags, PutRolePolicy, GetRolePolicy, DeleteRolePolicy, ListRolePolicies, AttachRolePolicy, DetachRolePolicy, ListAttachedRolePolicies. Table 5.4. OpenIDConnectProvider: CreateOpenIDConnectProvider, GetOpenIDConnectProvider, DeleteOpenIDConnectProvider, ListOpenIDConnectProviders. 5.2. Managed policies The following managed policies are available for use with AttachGroupPolicy, AttachRolePolicy, and AttachUserPolicy: IAMFullAccess (Arn: arn:aws:iam::aws:policy/IAMFullAccess, Version: v2, default); IAMReadOnlyAccess (Arn: arn:aws:iam::aws:policy/IAMReadOnlyAccess, Version: v4, default); AmazonSNSFullAccess (Arn: arn:aws:iam::aws:policy/AmazonSNSFullAccess, Version: v1, default); AmazonSNSReadOnlyAccess (Arn: arn:aws:iam::aws:policy/AmazonSNSReadOnlyAccess, Version: v1, default); AmazonS3FullAccess (Arn: arn:aws:iam::aws:policy/AmazonS3FullAccess, Version: v2, default); AmazonS3ReadOnlyAccess (Arn: arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess, Version: v3, default). | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/developer_guide/ceph-object-gateway-and-the-iam-api
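Because the IAM API shares the S3 endpoint, a generic IAM client such as the AWS CLI can drive the calls listed above. The sketch below is illustrative only: the endpoint URL, the rgw-root profile holding account root user credentials, and the svc-backup user name are assumptions.

    # Create a user in the account and grant it read-only S3 access
    # using one of the managed policies listed in section 5.2.
    aws --profile rgw-root --endpoint-url http://rgw.example.com:8000 \
        iam create-user --user-name svc-backup
    aws --profile rgw-root --endpoint-url http://rgw.example.com:8000 \
        iam attach-user-policy --user-name svc-backup \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess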
Chapter 4. PodSecurityPolicyReview [security.openshift.io/v1] | Chapter 4. PodSecurityPolicyReview [security.openshift.io/v1] Description PodSecurityPolicyReview checks which service accounts (not users, since that would be cluster-wide) can create the PodTemplateSpec in question. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object PodSecurityPolicyReviewSpec defines specification for PodSecurityPolicyReview status object PodSecurityPolicyReviewStatus represents the status of PodSecurityPolicyReview. 4.1.1. .spec Description PodSecurityPolicyReviewSpec defines specification for PodSecurityPolicyReview Type object Required template Property Type Description serviceAccountNames array (string) serviceAccountNames is an optional set of ServiceAccounts to run the check with. If serviceAccountNames is empty, the template.spec.serviceAccountName is used, unless it's empty, in which case "default" is used instead. If serviceAccountNames is specified, template.spec.serviceAccountName is ignored. template PodTemplateSpec template is the PodTemplateSpec to check. The template.spec.serviceAccountName field is used if serviceAccountNames is empty, unless the template.spec.serviceAccountName is empty, in which case "default" is used. If serviceAccountNames is specified, template.spec.serviceAccountName is ignored. 4.1.2. .status Description PodSecurityPolicyReviewStatus represents the status of PodSecurityPolicyReview. Type object Required allowedServiceAccounts Property Type Description allowedServiceAccounts array allowedServiceAccounts returns the list of service accounts in this namespace that have the power to create the PodTemplateSpec. allowedServiceAccounts[] object ServiceAccountPodSecurityPolicyReviewStatus represents ServiceAccount name and related review status 4.1.3. .status.allowedServiceAccounts Description allowedServiceAccounts returns the list of service accounts in this namespace that have the power to create the PodTemplateSpec. Type array 4.1.4. .status.allowedServiceAccounts[] Description ServiceAccountPodSecurityPolicyReviewStatus represents ServiceAccount name and related review status Type object Required name Property Type Description allowedBy ObjectReference allowedBy is a reference to the rule that allows the PodTemplateSpec. A rule can be a SecurityContextConstraint or a PodSecurityPolicy A nil , indicates that it was denied. name string name contains the allowed and the denied ServiceAccount name reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. template PodTemplateSpec template is the PodTemplateSpec after the defaulting is applied. 4.2. 
API endpoints The following API endpoints are available: /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyreviews POST : create a PodSecurityPolicyReview 4.2.1. /apis/security.openshift.io/v1/namespaces/{namespace}/podsecuritypolicyreviews Table 4.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a PodSecurityPolicyReview Table 4.2. Body parameters Parameter Type Description body PodSecurityPolicyReview schema Table 4.3. HTTP responses HTTP code Reponse body 200 - OK PodSecurityPolicyReview schema 201 - Created PodSecurityPolicyReview schema 202 - Accepted PodSecurityPolicyReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/security_apis/podsecuritypolicyreview-security-openshift-io-v1 |
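As a hedged illustration of posting to the endpoint above, the review can be created with the CLI and the result read from the returned status; the namespace, image, file name, and service account names below are placeholders.

    # pspreview.yaml (hypothetical file)
    apiVersion: security.openshift.io/v1
    kind: PodSecurityPolicyReview
    spec:
      serviceAccountNames:
      - default
      - builder
      template:
        spec:
          containers:
          - name: app
            image: registry.example.com/app:latest

    # Ask which service accounts in my-namespace could create this pod template:
    oc create -n my-namespace -o yaml -f pspreview.yaml

The allowedServiceAccounts list in the printed status then shows, for each service account, the rule (for example, a SecurityContextConstraint) that would admit the template, or a reason if it is denied.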
Chapter 11. IdM Directory Server RFC support | Chapter 11. IdM Directory Server RFC support The Directory Server component in Identity Management (IdM) supports many LDAP-related Requests for Comments (RFCs). Additional resources Directory Server RFC Support Directory Server 11 Deployment Guide | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/planning_identity_management/ref_idm-directory-server-rfc-support_planning-identity-management |
Distributed tracing | Distributed tracing OpenShift Container Platform 4.12 Configuring and using distributed tracing in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"oc edit configmap tempo-operator-manager-config -n openshift-tempo-operator 1",
"data: controller_manager_config.yaml: | featureGates: httpEncryption: false grpcEncryption: false builtInCertManagement: enabled: false",
"oc rollout restart deployment.apps/tempo-operator-controller -n openshift-tempo-operator",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 20 storageConfig: local: path: /home/user/images mirror: operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.13 packages: - name: tempo-product channels: - name: stable additionalImages: - name: registry.redhat.io/rhosdt/tempo-rhel8@sha256:e4295f837066efb05bcc5897f31eb2bdbd81684a8c59d6f9498dd3590c62c12a - name: registry.redhat.io/rhosdt/tempo-gateway-rhel8@sha256:b62f5cedfeb5907b638f14ca6aaeea50f41642980a8a6f87b7061e88d90fac23 - name: registry.redhat.io/rhosdt/tempo-gateway-opa-rhel8@sha256:8cd134deca47d6817b26566e272e6c3f75367653d589f5c90855c59b2fab01e9 - name: registry.redhat.io/rhosdt/tempo-query-rhel8@sha256:0da43034f440b8258a48a0697ba643b5643d48b615cdb882ac7f4f1f80aad08e",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"spec: mode: deployment config: | exporters: jaeger: endpoint: jaeger-production-collector-headless.tracing-system.svc:14250 tls: ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\"",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF",
"oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get csv -n openshift-tempo-operator",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route",
"oc apply -f - << EOF <tempostack_cr> EOF",
"oc get tempostacks.tempo.grafana.com simplest -o yaml",
"oc get pods",
"oc get route",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc login --username=<your_username>",
"oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF",
"oc apply -f - << EOF <object_storage_secret> EOF",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m",
"oc apply -f - << EOF <tempomonolithic_cr> EOF",
"oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml",
"oc get pods",
"oc get route",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }",
"aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text",
"aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"",
"apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque",
"ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'",
"oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"",
"apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3",
"apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route",
"kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true",
"apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true",
"oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1",
"oc login --username=<your_username>",
"oc get deployments -n <project_of_tempostack_instance>",
"oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>",
"oc get deployments -n <project_of_tempostack_instance>",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: MyConfigFile spec: strategy: production 1",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"oc create -n tracing-system -f jaeger.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE jaeger-all-in-one-inmemory-cdff7897b-qhfdx 2/2 Running 0 24s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-production namespace: spec: strategy: production ingress: security: oauth-proxy storage: type: elasticsearch elasticsearch: nodeCount: 3 redundancyPolicy: SingleRedundancy esIndexCleaner: enabled: true numberOfDays: 7 schedule: 55 23 * * * esRollover: schedule: '*/30 * * * *'",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-production.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerproduction-1-6676cf568gwhlw 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-2-bcd4c8bf5l6g6w 2/2 Running 0 10m elasticsearch-cdm-jaegersystemjaegerproduction-3-844d6d9694hhst 2/2 Running 0 10m jaeger-production-collector-94cd847d-jwjlj 1/1 Running 3 8m32s jaeger-production-query-5cbfbd499d-tv8zf 3/3 Running 3 8m32s",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 1 storage: type: elasticsearch ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:8443",
"oc new-project tracing-system",
"oc create -n tracing-system -f jaeger-streaming.yaml",
"oc get pods -n tracing-system -w",
"NAME READY STATUS RESTARTS AGE elasticsearch-cdm-jaegersystemjaegerstreaming-1-697b66d6fcztcnn 2/2 Running 0 5m40s elasticsearch-cdm-jaegersystemjaegerstreaming-2-5f4b95c78b9gckz 2/2 Running 0 5m37s elasticsearch-cdm-jaegersystemjaegerstreaming-3-7b6d964576nnz97 2/2 Running 0 5m5s jaeger-streaming-collector-6f6db7f99f-rtcfm 1/1 Running 0 80s jaeger-streaming-entity-operator-6b6d67cc99-4lm9q 3/3 Running 2 2m18s jaeger-streaming-ingester-7d479847f8-5h8kc 1/1 Running 0 80s jaeger-streaming-kafka-0 2/2 Running 0 3m1s jaeger-streaming-query-65bf5bb854-ncnc7 3/3 Running 0 80s jaeger-streaming-zookeeper-0 2/2 Running 0 3m39s",
"oc login --username=<NAMEOFUSER> https://<HOSTNAME>:6443",
"export JAEGER_URL=USD(oc get route -n tracing-system jaeger -o jsonpath='{.spec.host}')",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: name spec: strategy: <deployment_strategy> allInOne: options: {} resources: {} agent: options: {} resources: {} collector: options: {} resources: {} sampling: options: {} storage: type: options: {} query: options: {} resources: {} ingester: options: {} resources: {} options: {}",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-all-in-one-inmemory",
"collector: replicas:",
"spec: collector: options: {}",
"options: collector: num-workers:",
"options: collector: queue-size:",
"options: kafka: producer: topic: jaeger-spans",
"options: kafka: producer: brokers: my-cluster-kafka-brokers.kafka:9092",
"options: log-level:",
"options: otlp: enabled: true grpc: host-port: 4317 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"options: otlp: enabled: true http: cors: allowed-headers: [<header-name>[, <header-name>]*] allowed-origins: * host-port: 4318 max-connection-age: 0s max-connection-age-grace: 0s max-message-size: 4194304 read-timeout: 0s read-header-timeout: 2s idle-timeout: 0s tls: enabled: false cert: /path/to/cert.crt cipher-suites: \"TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256\" client-ca: /path/to/cert.ca reload-interval: 0s min-version: 1.2 max-version: 1.3",
"spec: sampling: options: {} default_strategy: service_strategy:",
"default_strategy: type: service_strategy: type:",
"default_strategy: param: service_strategy: param:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: with-sampling spec: sampling: options: default_strategy: type: probabilistic param: 0.5 service_strategies: - service: alpha type: probabilistic param: 0.8 operation_strategies: - operation: op1 type: probabilistic param: 0.2 - operation: op2 type: probabilistic param: 0.4 - service: beta type: ratelimiting param: 5",
"spec: sampling: options: default_strategy: type: probabilistic param: 1",
"spec: storage: type:",
"storage: secretname:",
"storage: options: {}",
"storage: esIndexCleaner: enabled:",
"storage: esIndexCleaner: numberOfDays:",
"storage: esIndexCleaner: schedule:",
"elasticsearch: properties: doNotProvision:",
"elasticsearch: properties: name:",
"elasticsearch: nodeCount:",
"elasticsearch: resources: requests: cpu:",
"elasticsearch: resources: requests: memory:",
"elasticsearch: resources: limits: cpu:",
"elasticsearch: resources: limits: memory:",
"elasticsearch: redundancyPolicy:",
"elasticsearch: useCertManagement:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 3 resources: requests: cpu: 1 memory: 16Gi limits: memory: 16Gi",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch elasticsearch: nodeCount: 1 storage: 1 storageClassName: gp2 size: 5Gi resources: requests: cpu: 200m memory: 4Gi limits: memory: 4Gi redundancyPolicy: ZeroRedundancy",
"es: server-urls:",
"es: max-doc-count:",
"es: max-num-spans:",
"es: max-span-age:",
"es: sniffer:",
"es: sniffer-tls-enabled:",
"es: timeout:",
"es: username:",
"es: password:",
"es: version:",
"es: num-replicas:",
"es: num-shards:",
"es: create-index-templates:",
"es: index-prefix:",
"es: bulk: actions:",
"es: bulk: flush-interval:",
"es: bulk: size:",
"es: bulk: workers:",
"es: tls: ca:",
"es: tls: cert:",
"es: tls: enabled:",
"es: tls: key:",
"es: tls: server-name:",
"es: token-file:",
"es-archive: bulk: actions:",
"es-archive: bulk: flush-interval:",
"es-archive: bulk: size:",
"es-archive: bulk: workers:",
"es-archive: create-index-templates:",
"es-archive: enabled:",
"es-archive: index-prefix:",
"es-archive: max-doc-count:",
"es-archive: max-num-spans:",
"es-archive: max-span-age:",
"es-archive: num-replicas:",
"es-archive: num-shards:",
"es-archive: password:",
"es-archive: server-urls:",
"es-archive: sniffer:",
"es-archive: sniffer-tls-enabled:",
"es-archive: timeout:",
"es-archive: tls: ca:",
"es-archive: tls: cert:",
"es-archive: tls: enabled:",
"es-archive: tls: key:",
"es-archive: tls: server-name:",
"es-archive: token-file:",
"es-archive: username:",
"es-archive: version:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 index-prefix: my-prefix tls: ca: /es/certificates/ca.crt secretName: tracing-secret volumeMounts: - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-prod spec: strategy: production storage: type: elasticsearch options: es: server-urls: https://quickstart-es-http.default.svc:9200 1 index-prefix: my-prefix tls: 2 ca: /es/certificates/ca.crt secretName: tracing-secret 3 volumeMounts: 4 - name: certificates mountPath: /es/certificates/ readOnly: true volumes: - name: certificates secret: secretName: quickstart-es-http-certs-public",
"apiVersion: logging.openshift.io/v1 kind: Elasticsearch metadata: annotations: logging.openshift.io/elasticsearch-cert-management: \"true\" logging.openshift.io/elasticsearch-cert.jaeger-custom-es: \"user.jaeger\" logging.openshift.io/elasticsearch-cert.curator-custom-es: \"system.logging.curator\" name: custom-es spec: managementState: Managed nodeSpec: resources: limits: memory: 16Gi requests: cpu: 1 memory: 16Gi nodes: - nodeCount: 3 proxyResources: {} resources: {} roles: - master - client - data storage: {} redundancyPolicy: ZeroRedundancy",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: jaeger-prod spec: strategy: production storage: type: elasticsearch elasticsearch: name: custom-es doNotProvision: true useCertManagement: true",
"spec: query: replicas:",
"spec: query: options: {}",
"options: log-level:",
"options: query: base-path:",
"apiVersion: jaegertracing.io/v1 kind: \"Jaeger\" metadata: name: \"my-jaeger\" spec: strategy: allInOne allInOne: options: log-level: debug query: base-path: /jaeger",
"spec: ingester: options: {}",
"options: deadlockInterval:",
"options: kafka: consumer: topic:",
"options: kafka: consumer: brokers:",
"options: log-level:",
"apiVersion: jaegertracing.io/v1 kind: Jaeger metadata: name: simple-streaming spec: strategy: streaming collector: options: kafka: producer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: options: kafka: consumer: topic: jaeger-spans brokers: my-cluster-kafka-brokers.kafka:9092 ingester: deadlockInterval: 5 storage: type: elasticsearch options: es: server-urls: http://elasticsearch:9200",
"apiVersion: apps/v1 kind: Deployment metadata: name: myapp annotations: \"sidecar.jaegertracing.io/inject\": \"true\" 1 spec: selector: matchLabels: app: myapp template: metadata: labels: app: myapp spec: containers: - name: myapp image: acme/myapp:myversion",
"apiVersion: apps/v1 kind: StatefulSet metadata: name: example-statefulset namespace: example-ns labels: app: example-app spec: spec: containers: - name: example-app image: acme/myapp:myversion ports: - containerPort: 8080 protocol: TCP - name: jaeger-agent image: registry.redhat.io/distributed-tracing/jaeger-agent-rhel7:<version> # The agent version must match the Operator version imagePullPolicy: IfNotPresent ports: - containerPort: 5775 name: zk-compact-trft protocol: UDP - containerPort: 5778 name: config-rest protocol: TCP - containerPort: 6831 name: jg-compact-trft protocol: UDP - containerPort: 6832 name: jg-binary-trft protocol: UDP - containerPort: 14271 name: admin-http protocol: TCP args: - --reporter.grpc.host-port=dns:///jaeger-collector-headless.example-ns:14250 - --reporter.type=grpc",
"oc login --username=<your_username>",
"oc login --username=<NAMEOFUSER>",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 93m jaeger-operator 1/1 1 1 49m jaeger-test 1/1 1 1 7m23s jaeger-test2 1/1 1 1 6m48s tracing1 1/1 1 1 7m8s tracing2 1/1 1 1 35m",
"oc delete jaeger <deployment-name> -n <jaeger-project>",
"oc delete jaeger tracing2 -n openshift-operators",
"oc get deployments -n <jaeger-project>",
"oc get deployments -n openshift-operators",
"NAME READY UP-TO-DATE AVAILABLE AGE elasticsearch-operator 1/1 1 1 94m jaeger-operator 1/1 1 1 50m jaeger-test 1/1 1 1 8m14s jaeger-test2 1/1 1 1 7m39s tracing1 1/1 1 1 7m59s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/distributed_tracing/index |
Chapter 4. New features | Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.4. 4.1. Installer and image creation Anaconda replaces the original boot device NVRAM variable list with new values Previously, booting from NVRAM could lead to boot system failure due to the entries with the incorrect values in the boot device list. With this update the problem is fixed, but the list of devices is cleared when updating the boot device NVRAM variable. (BZ#1854307) Graphical installation of KVM virtual machines on IBM Z is now available When using the KVM hypervisor on IBM Z hardware, you can now use the graphical installation when creating virtual machines (VMs). Now, when a user executes the installation in KVM, and QEMU provides a virtio-gpu driver, the installer automatically starts the graphical console. The user can switch to text or VNC mode by appending the inst.text or inst.vnc boot parameters in the VM's kernel command line. (BZ#1609325) Warnings for deprecated kernel boot arguments Anaconda boot arguments without the inst. prefix (for example, ks , stage2 , repo and so on) are deprecated starting RHEL7. These arguments will be removed in the major RHEL release. With this release, appropriate warning messages are displayed when the boot arguments are used without the inst prefix. The warning messages are displayed in dracut when booting the installation and also when the installation program is started on a terminal. Following is a sample warning message that is displayed on a terminal: Deprecated boot argument %s must be used with the inst. prefix. Please use inst.%s instead. Anaconda boot arguments without inst. prefix have been deprecated and will be removed in a future major release. Following is a sample warning message that is displayed in dracut : USD1 has been deprecated. All usage of Anaconda boot arguments without the inst. prefix have been deprecated and will be removed in a future major release. Please use USD2 instead. ( BZ#1897657 ) 4.2. RHEL for Edge Support to specify the kernel name as customization for RHEL for Edge image types When creating OSTree commits for RHEL for Edge images, only one kernel package can be installed at a time, otherwise the commit creation fails in rpm-ostree . This prevents RHEL for Edge from adding alternative kernels, in particular, the real-time kernel ( kernel-rt ). With this enhancement, when creating a blueprint for RHEL for Edge image using the CLI, you can define the name of the kernel to be used in an image, by setting the customizations.kernel.name key. If you do not specify any kernel name, the image include the default kernel package. ( BZ#1960043 ) 4.3. Software management New fill_sack_from_repos_in_cache function is now supported in DNF API With this update, the new DNF API fill_sack_from_repos_in_cache function has been introduced which allows to load repositories only from the cached solv , solvx files, and the repomd.xml file. As a result, if the user manages dnf cache, it is possible to save resources without having duplicate information ( xml and solv ), and without processing xml into solv . ( BZ#1865803 ) createrepo_c now automatically adds modular metadata to repositories Previously, running the createrepo_c command on RHEL8 packages to create a new repository did not include modular repodata in this repository. Consequently, it caused various problems with repositories. 
With this update, createrepo_c : scans for modular metadata merges the found module YAML files into a single modular document modules.yaml automatically adds this document to the repository. As a result, adding modular metadata to repositories is now automatic and no longer has to be done as a separate step using the modifyrepo_c command. ( BZ#1795936 ) The ability to mirror a transaction between systems within DNF is now supported With this update, the user can store and replay a transaction within DNF. To store a transaction from DNF history into a JSON file, run the dnf history store command. To replay the transaction later on the same machine, or on a different one, run the dnf history replay command. Comps groups operations storing and replaying is supported. Module operations are not yet supported, and consequently, are not stored or replayed. ( BZ#1807446 ) createrepo_c rebased to version 0.16.2 The createrepo_c packages have been rebased to version 0.16.2 which provides the following notable changes over the version: Added module metadata support for createrepo_c . Fixed various memory leaks (BZ#1894361) The protect_running_kernel configuration option is now available. With this update, the protect_running_kernel configuration option for the dnf and microdnf commands has been introduced. This option controls whether the package corresponding to the running version of the kernel is protected from removal. As a result, the user can now disable protection of the running kernel. ( BZ#1698145 ) 4.4. Shells and command-line tools OpenIPMI rebased to version 2.0.29 The OpenIPMI packages have been upgraded to version 2.0.29. Notable changes over the version include: Fixed memory leak, variable binding, and missing error messages. Added support for IPMB . Added support for registration of individual group extension in the lanserv . (BZ#1796588) freeipmi rebased to version 1.6.6 The freeipmi packages have been upgraded to version 1.6.6. Notable changes over the version include: Fixed memory leaks and typos in the source code. Implemented workarounds for the following known issues: unexpected completion code. Dell Poweredge FC830. out of order packets with lan/rmcpplus ipmb . Added support for new Dell, Intel, and Gigabyte devices. Added support for the interpretation of system information and events. (BZ#1861627) opal-prd rebased to version 6.6.3 The opal-prd package has been rebased to version 6.6.3. Notable changes include: Added an offline worker process handle page for opal-prd daemon. Fixed the bug for opal-gard on POWER9P so that the system can identify the chip targets for gard records. Fixed false negatives in wait_for_all_occ_init() of occ command. Fixed OCAPI_MEM BAR values in hw/phys-map . Fixed warnings for Inconsistent MSAREA in hdata/memory.c . For sensors in occ: Fixed sensor values zero bug. Fixed the GPU detection code. Skipped sysdump retrieval in MPIPL boot. Fixed IPMI double-free in the Mihawk platform. Updated non-MPIPL scenario in fsp/dump . For hw/phb4: Verified AER support before initialising AER regs. Enabled error reporting. Added new smp-cable-connector VPD keyword in hdata . (BZ#1844427) opencryptoki rebased to version 3.15.1 The opencryptoki packages have been rebased to version 3.15.1. Notable changes include: Fixed segfault in C_SetPin . Fixed usage of EVP_CipherUpdate and EVP_CipherFinal . Added utility to migrate the token repository to FIPS compliant encryption. For pkcstok_migrate tool: Fixed NVTOK.DAT conversion on Little Endian platforms. 
Fixed private and public token object conversion on Little Endian platforms. Fixed storing of public token objects in the new data format. Fixed the parameter checking mechanism in dh_pkcs_derive . Corrected soft token model name. Replaced deprecated OpenSSL interfaces in mech_ec.c file and in ICA , TPM , and Soft tokens. Replaced deprecated OpenSSL AES/3DES interfaces in sw_crypt.c file. Added support for ECC mechanism in Soft token. Added IBM specific SHA3 HMAC and SHA512/224/256 HMAC mechanisms in the Soft token. Added support for key wrapping with CKM_RSA_PKCS in CCA. For EP11 crypto stack: Fixed ep11_get_keytype to recognize CKM_DES2_KEY_GEN . Fixed error trace in token_specific_rng . Enabled specific FW version and API in HSM simulation. Fixed Endian bug in X9.63 KDF . Added an error message for handling p11sak remove-key command . Fixed compiling issues with C++. Fixed the problem with C_Get/SetOperationState and digest contexts. Fixed pkcscca migration fails with usr/sb2 . (BZ#1847433) powerpc-utils rebased to version 1.3.8 The powerpc-utils packages have been rebased to version 1.3.8. Notable changes include: Commands that do not depend on Perl are now moved to the core subpackage. Added support for Linux Hybrid Network Virtualization. Updated safe bootlist. Added vcpustat utility. Added support for cpu-hotplug in lparstat command. Added switch to print Scaled metrics in lparstat command. Added helper function to calculate the delta, scaled timebase, and to derive PURR/SPURR values. For ofpathname utility: Improved the speed for l2of_scsi() . Fixed the udevadm location. Added partition to support l2od_ide() and l2of_scsi() . Added support for the plug ID of a SCSI/SATA host. Fixed the segfault condition on the unsupported connector type. Added tools to support migration of SR_IOV to a hybrid virtual network. Fixed the format-overflow warnings. Fixed the bash command substitution warning using the lsdevinfo utility. Fixed boot-time bonding interface cleanup. (BZ#1853297) New kernel cmdline option now generates network device name The net_id built-in from systemd-udevd service gains a new kernel cmdline option net.naming-scheme=SCHEME_VERSION . Based on the value of the SCHEME_VERSION , a user can select a version of the algorithm that will generate the network device name. For example, to use the features of net_id built-in in RHEL 8.4, set the value of the SCHEME_VERSION to rhel-8.4 . Similarly, you can set the value of the SCHEME_VERSION to any other minor release that includes the required change or fix. (BZ#1827462) 4.5. Infrastructure services Difference in default postfix-3.5.8 behavior For better RHEL-8 backward compatibility, the behavior of the postfix-3.5.8 update differs from the default upstream postfix-3.5.8 behavior. For the default upstream postfix-3.5.8 behavior, run the following commands: # postconf info_log_address_format=external # postconf smtpd_discard_ehlo_keywords= # postconf rhel_ipv6_normalize=yes For details, see the /usr/share/doc/postfix/README-RedHat.txt file. If the incompatible functionalities are not used or RHEL-8 backward compatibility is the priority, no steps are necessary. ( BZ#1688389 ) BIND rebased to version 9.11.26 The bind packages have been updated to version 9.11.26. Notable changes include: Changed the default EDNS buffer size from 4096 to 1232 bytes. This change will prevent the loss of fragmented packets in some networks. Increased the default value of max-recursion-queries from 75 to 100. Related to CVE-2020-8616. 
Fixed the problem of reused dead nodes in lib/dns/rbtdb.c file in named . Fixed the crashing problem in the named service when cleaning the reused dead nodes in the lib/dns/rbtdb.c file. Fixed the problem of configured multiple forwarders sometimes occurring in the named service. Fixed the problem of the named service of assigning incorrect signed zones with no DS record at the parent as bogus. Fixed the missing DNS cookie response over UDP . ( BZ#1882040 ) unbound configuration now provides enhanced logging output With this enhancement, the following three options have been added to the unbound configuration: log-servfail enables log lines that explain the reason for the SERVFAIL error code to clients. log-local-actions enables logging of all local zone actions. log-tag-queryreply enables tagging of log queries and log replies in the log file. ( BZ#1850460 ) Multiple vulnerabilities fixed with ghostscript-9.27 The ghostscript-9.27 release contains security fixes for the following vulnerabilities: CVE-2020-14373 CVE-2020-16287 CVE-2020-16288 CVE-2020-16289 CVE-2020-16290 CVE-2020-16291 CVE-2020-16292 CVE-2020-16293 CVE-2020-16294 CVE-2020-16295 CVE-2020-16296 CVE-2020-16297 CVE-2020-16298 CVE-2020-16299 CVE-2020-16300 CVE-2020-16301 CVE-2020-16302 CVE-2020-16303 CVE-2020-16304 CVE-2020-16305 CVE-2020-16306 CVE-2020-16307 CVE-2020-16308 CVE-2020-16309 CVE-2020-16310 CVE-2020-17538 ( BZ#1874523 ) Tuned rebased to version 2.15-1. Notable changes include: Added service plugin for Linux services control. Improved scheduler plugin. ( BZ#1874052 ) DNSTAP now records incoming detailed queries. DNSTAP provides an advanced way to monitor and log details of incoming name queries. It also records sent answers from the named service. Classic query logging of the named service has a negative impact on the performance of the named service. As a result, DNSTAP offers a way to perform continuous logging of detailed incoming queries without impacting the performance penalty. The new dnstap-read utility allows you to analyze the queries running on a different system. ( BZ#1854148 ) SpamAssassin rebased to version 3.4.4 The SpamAssassin package has been upgraded to version 3.4.4. Notable changes include: OLEVBMacro plugin has been added. New functions check_rbl_ns , check_rbl_rcvd , check_hashbl_bodyre , and check_hashbl_uris have been added. ( BZ#1822388 ) Key algorithm can be changed using the OMAPI shell With this enhancement, users can now change the key algorithm. The key algorithm that was hardcoded as HMAC-MD5 is not considered secure anymore. As a result, users can use the omshell command to change the key algorithm. ( BZ#1883999 ) Sendmail now supports TLSFallbacktoClear configuration With this enhancement, if the outgoing TLS connection fails, the sendmail client will fall back to the plaintext. This overcomes the TLS compatibility problems with the other parties. Red Hat ships sendmail with the TLSFallbacktoClear option disabled by default. ( BZ#1868041 ) tcpdump now allows viewing RDMA capable devices This enhancement enables support for capturing RDMA traffic with tcpdump . It allows users to capture and analyze offloaded RDMA traffic with the tcpdump tool. As a result, users can use tcpdump to view RDMA capable devices, capture RoCE and VMA traffic, and analyze its content. (BZ#1743650) 4.6. Security libreswan rebased to 4.3 The libreswan packages have been upgraded to version 4.3. 
Notable changes over the version include: IKE and ESP over TCP support (RFC 8229) IKEv2 Labeled IPsec support IKEv2 leftikeport/rightikeport support Experimental support for Intermediate Exchange Extended Redirect support for loadbalancing Default IKE lifetime changed from 1 h to 8 h for increased interoperability :RSA sections in the ipsec.secrets file are no longer required Fixed Windows 10 rekeying Fixed sending certificate for ECDSA authentication Fixes for MOBIKE and NAT-T ( BZ#1891128 ) IPsec VPN now supports TCP transport This update of the libreswan packages adds support for IPsec-based VPNs over TCP encapsulation as described in RFC 8229. The addition helps establish IPsec VPNs on networks that prevent traffic using Encapsulating Security Payload (ESP) and UDP. As a result, administrators can configure VPN servers and clients to use TCP either as a fallback or as the main VPN transport protocol. (BZ#1372050) Libreswan now supports IKEv2 for Labeled IPsec The Libreswan Internet Key Exchange (IKE) implementation now includes Internet Key Exchange version 2 (IKEv2) support of Security Labels for IPsec. With this update, systems that use security labels with IKEv1 can be upgraded to IKEv2. (BZ#1025061) libpwquality rebased to 1.4.4 The libpwquality package has been rebased to version 1.4.4. This release includes multiple bug fixes and translation updates. Most notably, the following setting options have been added to the pwquality.conf file: retry enforce_for_root local_users_only ( BZ#1537240 ) p11-kit rebased to 0.23.19 The p11-kit packages have been upgraded from version 0.23.14 to version 0.23.19. The new version fixes several bugs and provides various enhancements, notably: Fixed CVE-2020-29361, CVE-2020-29362, CVE-2020-29363 security issues. p11-kit now supports building through the meson build system. (BZ#1887853) pyOpenSSL rebased to 19.0.0 The pyOpenSSL packages have been rebased to upstream version 19.0.0. This version provides bug fixes and enhancements, most notably: Improved TLS 1.3 support with openssl version 1.1.1. No longer raising an error when trying to add a duplicate certificate with X509Store.add_cert Improved handling of X509 certificates containing NUL bytes in components (BZ#1629914) SCAP Security Guide rebased to 0.1.54 The scap-security-guide packages have been rebased to upstream version 0.1.54, which provides several bug fixes and improvements. Most notably: The Operating System Protection Profile (OSPP) has been updated in accordance with the Protection Profile for General Purpose Operating Systems for Red Hat Enterprise Linux 8.4. The ANSSI family of profiles based on the ANSSI BP-028 recommendations from the French National Security Agency (ANSSI), has been introduced. The content contains profiles implementing rules of the Minimum, Intermediary and Enhanced hardening levels. The Security Technical Implementation Guide ( STIG ) security profile has been updated, and it implements rules from the recently-released version V1R1. ( BZ#1889344 ) OpenSCAP rebased to 1.3.4 The OpenSCAP packages have been rebased to upstream version 1.3.4. Notable fixes and enhancements include: Fixed certain memory issues that were causing systems with large amounts of files to run out of memory. OpenSCAP now treats GPFS as a remote file system. Proper handling of OVALs with circular dependencies between definitions. Improved yamlfilecontent : updated yaml-filter , extended the schema and probe to be able to work with a set of values in maps. 
Fixed numerous warnings (GCC and Clang). Numerous memory management fixes. Numerous memory leak fixes. Platform elements in XCCDF files are now properly resolved in accordance with the XCCDF specification. Improved compatibility with uClibc. Local and remote file system detection methods improved. Fixed dpkginfo probe to use pkgCacheFile instead of manually opening the cache. OpenSCAP scan report is now a valid HTML5 document. Fixed unwanted recursion in the file probe. ( BZ#1887794 ) The RHEL 8 STIG security profile updated to version V1R1 With the release of the RHBA-2021:1886 advisory, the DISA STIG for Red Hat Enterprise Linux 8 profile in the SCAP Security Guide has been updated to align with the latest version V1R1 . The profile is now also more stable and better aligns with the RHEL 8 STIG (Security Technical Implementation Guide) manual benchmark provided by the Defense Information Systems Agency (DISA). This first iteration brings approximately 60% of coverage with regards to the STIG. You should use only the current version of this profile because the draft profile is no longer valid. Warning Automatic remediation might render the system non-functional. Run the remediation in a test environment first. ( BZ#1918742 ) New DISA STIG profile compatible with Server with GUI installations A new profile, DISA STIG with GUI , has been added to the SCAP Security Guide with the release of the RHBA-2021:4098 advisory. This profile is derived from the DISA STIG profile and is compatible with RHEL installations that selected the Server with GUI package group. The previously existing stig profile was not compatible with Server with GUI because DISA STIG demands uninstalling any Graphical User Interface. However, this can be overridden if properly documented by a Security Officer during evaluation. As a result, the new profile helps when installing a RHEL system as a Server with GUI aligned with the DISA STIG profile. ( BZ#2005431 ) Profiles for ANSSI-BP-028 Minimal, Intermediary and Enhanced levels are now available in SCAP Security Guide With the new profiles, you can harden the system to the recommendations from the French National Security Agency (ANSSI) for GNU/Linux Systems at the Minimal, Intermediary and Enhanced hardening levels. As a result, you can configure and automate compliance of your RHEL 8 systems according to your required ANSSI hardening level by using the ANSSI Ansible Playbooks and the ANSSI SCAP profiles. ( BZ#1778188 ) scap-workbench can now scan remote systems using sudo privileges The scap-workbench GUI tool now supports scanning remote systems using passwordless sudo access. This feature reduces the security risk imposed by supplying root's credentials. Be cautious when using scap-workbench with passwordless sudo access and the remediate option. Red Hat recommends dedicating a well-secured user account just for the OpenSCAP scanner. ( BZ#1877522 ) rhel8-tang container image is now available With this release, the rhel8/rhel8-tang container image is available in the registry.redhat.io catalog. The container image provides Tang-server decryption capabilities for Clevis clients that run either in OpenShift Container Platform (OCP) clusters or in separate virtual machines. (BZ#1913310) Clevis rebased to version 15 The clevis packages have been rebased to upstream version 15. 
This version provides many bug fixes and enhancements over the previous version, most notably: Clevis now produces a generic initramfs and no longer automatically adds the rd.neednet=1 parameter to the kernel command line. Clevis now properly handles incorrect configurations that use the sss pin, and the clevis encrypt sss sub-command returns output that indicates the error cause. ( BZ#1887836 )
Clevis no longer automatically adds rd.neednet=1
Clevis now correctly produces a generic initrd (initial ramdisk) without host-specific configuration options by default. As a result, Clevis no longer automatically adds the rd.neednet=1 parameter to the kernel command line. If your configuration relies on the previous behavior, you can either enter the dracut command with the --hostonly-cmdline argument or create the clevis.conf file in the /etc/dracut.conf.d directory and add the hostonly_cmdline=yes option to the file. A Tang binding must be present during the initrd build process. ( BZ#1853651 )
New package: rsyslog-udpspoof
The rsyslog-udpspoof subpackage has been added back to RHEL 8. This module is similar to the regular UDP forwarder, but permits relaying syslog between different network segments while maintaining the source IP in the syslog packets. ( BZ#1869874 )
fapolicyd rebased to 1.0.2
The fapolicyd packages have been rebased to upstream version 1.0.2. This version provides many bug fixes and enhancements over the previous version, most notably: Added the integrity configuration option for enabling integrity checks through comparing file sizes, comparing SHA-256 hashes, or the Integrity Measurement Architecture (IMA) subsystem. The fapolicyd RPM plugin now registers any system update that is handled by either the YUM package manager or the RPM Package Manager. Rules can now contain a GID in the subject. You can now include rule numbers in debug and syslog messages. ( BZ#1887451 )
New RPM plugin notifies fapolicyd about changes during RPM transactions
This update of the rpm packages introduces a new RPM plugin that integrates the fapolicyd framework with the RPM database. The plugin notifies fapolicyd about installed and changed files during an RPM transaction. As a result, fapolicyd now supports integrity checking. Note that the RPM plugin replaces the YUM plugin because its functionality is not limited to YUM transactions but also covers changes made by RPM. ( BZ#1923167 )
4.7. Networking
The PTP capabilities output format of the ethtool utility has changed
Starting with RHEL 8.4, the ethtool utility uses the netlink interface instead of the ioctl() system call to communicate with the kernel. Consequently, when you use the ethtool -T <network_controller> command, the format of Precision Time Protocol (PTP) values changes. Previously, with the ioctl() interface, ethtool translated the capability bit names by using an ethtool-internal string table, and the ethtool -T <network_controller> command displayed the capability names together with the kernel's internal SOF_TIMESTAMPING_* constants. With the netlink interface, ethtool receives the strings from the kernel. These strings do not include the internal SOF_TIMESTAMPING_* names, so ethtool -T <network_controller> now displays only the capability names. If you use the PTP capabilities output of ethtool in scripts or applications, update them accordingly.
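The following before-and-after comparison illustrates the ethtool -T output change described above. It is a sketch rather than verbatim output: the device name eno1 is a placeholder, only the Capabilities section is shown, and the exact capability list depends on the NIC and driver.
Output with the ioctl() interface (earlier releases), including the internal constants:
$ ethtool -T eno1
Time stamping parameters for eno1:
Capabilities:
        hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
        software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
        hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
Output with the netlink interface (RHEL 8.4), capability names only:
$ ethtool -T eno1
Time stamping parameters for eno1:
Capabilities:
        hardware-transmit
        software-transmit
        hardware-receive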
(JIRA:RHELDOCS-18188) XDP is conditionally supported Red Hat supports the eXpress Data Path (XDP) feature only if all of the following conditions apply: You load the XDP program on an AMD or Intel 64-bit architecture You use the libxdp library to load the program into the kernel The XDP program does not use the XDP hardware offloading In RHEL 8.4, XDP_TX and XDP_REDIRECT return codes are now supported in XDP programs. For details about unsupported XDP features, see XDP features that are available as Technology Preview ( BZ#1952421 ) NetworkManager rebased to version 1.30.0 The NetworkManager packages have been upgraded to upstream version 1.30.0, which provides a number of enhancements and bug fixes over the version: The ipv4.dhcp-reject-servers connection property has been added to define from which DHCP server IDs NetworkManager should reject lease offers. The ipv4.dhcp-vendor-class-identifier connection property has been added to send a custom Vendor Class Identifier DHCP option value. The active_slave bond option has been deprecated. Instead, set the primary option in the controller connection. The nm-initrd-generator utility now supports MAC addresses to indicate interfaces. The nm-initrd-generator utility generator now supports creating InfiniBand connections. The timeout of the NetworkManager-wait-online service has been increased to 60 seconds. The ipv4.dhcp-client-id=ipv6-duid connection property has been added to be compliant to RFC4361 . Additional ethtool offload features have been added. Support for the WPA3 Enterprise Suite-B 192-bit mode has been added. Support for virtual Ethernet ( veth ) devices has been added. For further information about notable changes, read the upstream release notes: NetworkManager 1.30.0 NetworkManager 1.28.0 ( BZ#1878783 ) The iproute2 utility introduces traffic control actions to add MPLS headers before Ethernet header With this enhancement, the iproute2 utility offers three new traffic control ( tc ) actions: mac_push - The act_mpls module provides this action to add MPLS labels before the original Ethernet header. push_eth - The act_vlan module provides this action to build an Ethernet header at the beginning of the packet. pop_eth - The act_vlan module provides this action to drop the outer Ethernet header. These tc actions help in implementing layer 2 virtual private network (L2VPN) by adding multiprotocol label switching (MPLS) labels before Ethernet headers. You can use these actions while adding tc filters to the network interfaces. Red Hat provides these actions as unsupported Technology Preview, because MPLS itself is a Technology Preview feature. For more information about these actions and their parameters, refer to the tc-mpls(8) and tc-vlan(8) man pages. (BZ#1861261) The nmstate API is now fully supported Nmstate, which was previously a Technology Preview, is a network API for hosts and fully supported in RHEL 8.4. The nmstate packages provide a library and the nmstatectl command-line utility to manage host network settings in a declarative manner. The networking state is described by a predefined schema. Reporting of the current state and changes to the desired state both conform to the schema. For further details, see the /usr/share/doc/nmstate/README.md file and the sections about nmstatectl in the Configuring and managing networking documentation. 
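As a brief illustration of the fully supported nmstate API described above, the following sketch reports the current network state and then applies a declarative configuration. The interface name eth1 and the file names are examples only, and the commands assume root privileges.
# nmstatectl show > current-state.yml
# cat <<'EOF' > eth1-up.yml
---
interfaces:
- name: eth1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    dhcp: true
EOF
# nmstatectl apply eth1-up.yml
Because the state file is declarative, reapplying the same file is idempotent, which makes it convenient to keep such files under version control.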
(BZ#1674456) New package: rshim The rhsim package provides the Mellanox BlueField rshim user-space driver, which enables accessing the rshim resources on the BlueField SmartNIC target from the external host machine. The current version of the rshim user-space driver implements device files for boot image push and virtual console access. In addition, it creates a virtual network interface to connect to the BlueField target and provides a way to access internal rshim registers. Note that in order for the virtual console or virtual network interface to be operational, the target must be running a tmfifo driver. (BZ#1744737) iptraf-ng rebased to 1.2.1 The iptraf-ng packages have been rebased to upstream version 1.2.1, which provides several bug fixes and improvements. Most notably: The iptraf-ng application no longer causes 100% CPU usage when showing the detailed statistics of a deleted interface. The unsafe handling arguments of printf() functions have been fixed. Partial support for IP over InfiniBand (IPoIB) interface has been added. Because the kernel does not provide the source address on the interface, you cannot use this feature in the LAN station monitor mode. Packet capturing abstraction has been added to allow iptraf-ng to capture packets at multi-gigabit speed. You can now scroll using the Home , End , Page up , and Page down keyboard keys. The application now shows the dropped packet count. ( BZ#1906097 ) 4.8. Kernel Kernel version in RHEL 8.4 Red Hat Enterprise Linux 8.4 is distributed with the kernel version 4.18.0-305. See also Important Changes to External Kernel Parameters and Device Drivers . ( BZ#1839151 ) Extended Berkeley Packet Filter for RHEL 8.4 The Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine executes a special assembly-like code. The eBPF bytecode first loads to the kernel, followed by its verification, code translation to the native machine code with just-in-time compilation, and then the virtual machine executes the code. Red Hat ships numerous components that utilize the eBPF virtual machine. Each component is in a different development phase, and thus not all components are currently fully supported. In RHEL 8.4, the following eBPF components are supported: The BPF Compiler Collection (BCC) tools package, which provides tools for I/O analysis, networking, and monitoring of Linux operating systems using eBPF . The BCC library which allows the development of tools similar to those provided in the BCC tools package. The eBPF for Traffic Control (tc) feature, which enables programmable packet processing inside the kernel network data path. The eXpress Data Path (XDP) feature, which provides access to received packets before the kernel networking stack processes them, is supported under specific conditions. The libbpf package, which is crucial for bpf related applications like bpftrace and bpf/xdp development. The xdp-tools package, which contains userspace support utilities for the XDP feature, is now supported on the AMD and Intel 64-bit architectures. This includes the libxdp library, the xdp-loader utility for loading XDP programs, the xdp-filter example program for packet filtering, and the xdpdump utility for capturing packets from a network interface with XDP enabled. 
Note that all other eBPF components are available as Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as Technology Preview: The bpftrace tracing language The AF_XDP socket for connecting the eXpress Data Path (XDP) path to user space For more information regarding the Technology Preview components, see Technology Previews . ( BZ#1780124 ) New package: kmod-redhat-oracleasm This update adds the new kmod-redhat-oracleasm package, which provides the kernel module part of the ASMLib utility. Oracle Automated Storage Management (ASM) is a data volume manager for Oracle databases. ASMLib is an optional utility that can be used on Linux systems to manage Oracle ASM devices. (BZ#1827015) The xmon program changes to support Secure Boot and kernel_lock resilience against attacks If the Secure Boot mechanism is disabled, you can set the xmon program into read-write mode ( xmon=rw ) on the kernel command-line. However, if you specify xmon=rw and boot into Secure Boot mode, the kernel_lockdown feature overrides xmon=rw and changes it to read-only mode. The additional behavior of xmon depending on Secure Boot enablement is listed below: Secure Boot is on: xmon=ro (default) A stack trace is printed Memory read works Memory write is blocked Secure Boot is off: Possibility to set xmon=rw A stack trace is always printed Memory read always works Memory write is permitted only if xmon=rw These changes to xmon behavior aim to support the Secure Boot and kernel_lock resilience against attackers with root permissions. For information how to configure kernel command-line parameters, see Configuring kernel command-line parameters on the Customer Portal. (BZ#1952161) Cornelis Omni-Path Architecture (OPA) Host Software Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 8.4. OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on installing Omni-Path Architecture, see: Cornelis Omni-Path Fabric Software Release Notes file. ( BZ#1960412 ) SLAB cache merging disabled by default The CONFIG_SLAB_MERGE_DEFAULT kernel configuration option has been disabled, and now SLAB caches are not merged by default. This change aims to enhance the allocator's reliability and traceability of cache usage. If the slab-cache merging behavior was desirable, the user can re-enable it by adding the slub_merge parameter to the kernel command-line. For more information on how to set the kernel command-line parameters, see the Configuring kernel command-line parameters on Customer Portal. (BZ#1871214) The ima-evm-utils package rebased to version 1.3.2 The ima-evm-utils package has been upgraded to version 1.3.2, which provides multiple bug fixes and enhancements. Notable changes include: Added support for handling the Trusted Platform Module (TPM2) multi-banks feature Extended the boot aggregate value to Platform Configuration Registers (PCRs) 8 and 9 Preloaded OpenSSL engine through a CLI parameter Added support for Intel Task State Segment (TSS2) PCR reading Added support for the original Integrity Measurement Architecture (IMA) template Both the libimaevm.so.0 and libimaevm.so.2 libraries are part of ima-evm-utils . Users of libimaevm.so.0 will not be affected, when their more recent applications use libimaevm.so.2 . 
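For the SLAB cache merging change described earlier in this section, the following sketch shows one way to re-enable merging by adding the slub_merge parameter to the kernel command line persistently with grubby. This is an illustration rather than a recommendation; evaluate the trade-off for your workload first.
# grubby --update-kernel=ALL --args="slub_merge"
# reboot
# cat /proc/cmdline
The last command, run after the reboot, confirms that the running kernel was booted with the parameter.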
(BZ#1868683) Levelling IMA and EVM features across supported CPU architectures All CPU architectures, except ARM, have a similar level of feature support for Integrity Measurement Architecture (IMA) and Extended Verification Module (EVM) technologies. The enabled functionalities are different for each CPU architecture. The following are the most significant changes for each supported CPU architecture: IBM Z: IMA appraise and trusted keyring enablement. AMD64 and Intel 64: specific architecture policy in secure boot state. IBM Power System (little-endian): specific architecture policy in secure and trusted boot state. SHA-256 as default hash algorithm for all supported architectures. For all architectures, the measurement template has changed to IMA-SIG The template includes the signature bits when present. Its format is d-ng|n-ng|sig . The goal of this update is to decrease the level of feature difference in IMA and EVM, so that userspace applications can behave equally across all supported CPU architectures. (BZ#1869758) Proactive compaction is now included in RHEL 8 as disabled-by-default With ongoing workload activity, system memory becomes fragmented. The fragmentation can result in capacity and performance problems. In some cases, program errors are also possible. Thereby, the kernel relies on a reactive mechanism called memory compaction. The original design of the mechanism is conservative, and the compaction activity is initiated on demand of allocation request. However, reactive behavior tends to increase the allocation latency if the system memory is already heavily fragmented. Proactive compaction improves the design by regularly initiating memory compaction work before a request for allocation is made. This enhancement increases the chances that memory allocation requests find the physically contiguous blocks of memory without the need of memory compaction producing those on-demand. As a result, latency for specific memory allocation requests is lowered. Warning Proactive compaction can result in increased compaction activity. This might have serious, system-wide impact, because memory pages that belong to different processes are moved and remapped. Therefore, enabling proactive compaction requires utmost care to avoid latency spikes in applications. (BZ#1848427) EDAC support has been added in RHEL 8 With this update, RHEL 8 supports the Error Detection and Correction (EDAC) kernel module set in 8th and 9th generation Intel Core Processors (CoffeeLake). The EDAC kernel module mainly handles Error Code Correction (ECC) memory and detect and report PCI bus parity errors. (BZ#1847567) A new package: kpatch-dnf The kpatch-dnf package provides a DNF plugin, which makes it possible to subscribe a RHEL system to kernel live patch updates. The subscription will affect all kernels currently installed on the system, including kernels that will be installed in the future. For more details about kpatch-dnf , see the dnf-kpatch(8) manual page or the Managing, monitoring, and updating the kernel documentation. (BZ#1798711) A new cgroups controller implementation for slab memory A new implementation of slab memory controller for the control groups technology is now available in RHEL 8. Currently, a single memory slab can contain objects owned by different memory control group . The slab memory controller brings improvement in slab utilization (up to 45%) and enables to shift the memory accounting from the page level to the object level. 
Also, this change eliminates each set of duplicated per-CPU and per-node slab caches for each memory control group and establishes one common set of per-CPU and per-node slab caches for all memory control groups . As a result, you can achieve a significant drop in the total kernel memory footprint and observe positive effects on memory fragmentation. Note that the new and more precise memory accounting requires more CPU time. However, the difference seems to be negligible in practice. (BZ#1877019) Time namespace has been added in RHEL 8 The time namespace enables the system monotonic and boot-time clocks to work with per-namespace offsets on AMD64, Intel 64, and the 64-bit ARM architectures. This feature is suited for changing the date and time inside Linux containers and for in-container adjustments of clocks after restoration from a checkpoint. As a result, users can now independently set time for each individual container. (BZ#1548297) New feature: Free memory page returning With this update, the RHEL 8 host kernel is able to return memory pages that are not used by its virtual machines (VMs) back to the hypervisor. This improves the stability and resource efficiency of the host. Note that for memory page returning to work, it must be configured in the VM, and the VM must also use the virtio_balloon device. (BZ#1839055) Supports changing the sorting order in perf top With this update, perf top can now sort samples by arbitrary event column in case multiple events in a group are sampled, instead of sorting by the first column. As a result, pressing a number key sorts the table by the matching data column. Note The column numbering starts from 0 . Using the --group-sort-idx command line option, it is possible to sort by the column number. (BZ#1851933) The kabi_whitelist package has been renamed to kabi_stablelist In accordance with Red Hat commitment to replacing problematic language, we renamed the kabi_whitelist package to kabi_stablelist in the RHEL 8.4 release. (BZ#1867910, BZ#1886901 ) bpf rebased to version 5.9 The bpf kernel technology in RHEL 8 has been brought up-to-date with its upstream counterpart from the kernel v5.9. The update provides multiple bug fixes and enhancements. Notable changes include: Added Berkeley Packet Filter (BPF) iterator for map elements and to iterate all BPF programs for efficient in-kernel inspection. Programs in the same control group (cgroup) can share the cgroup local storage map. BPF programs can run on socket lookup. The SO_KEEPALIVE and related options are available to the bpf_setsockopt() helper. Note that some BPF programs may need changes to their source code. (BZ#1874005) The bcc package rebased to version 0.16.0 The bcc package has been upgraded to version 0.16.0, which provides multiple bug fixes and enhancements. Notable changes include: Added utilities klockstat and funcinterval Fixes in various parts of the tcpconnect manual page Fix to make the tcptracer tool output show SPORT and DPORT columns for IPv6 addresses Fix broken dependencies (BZ#1879411) bpftrace rebased to version 0.11.0 The bpftrace package has been upgraded to version 0.11.0, which provides multiple bug fixes and enhancements. 
Notable changes include: Added utilities threadsnoop , tcpsynbl , tcplife , swapin , setuids , and naptime Fixed failures to run of the tcpdrop.bt and syncsnoop.bt tools Fixed a failure to load the Berkeley Packet Filter (BPF) program on IBM Z architectures Fixed a symbol lookup error (BZ#1879413) libbpf rebased to version 0.2.0.1 The libbpf package has been upgraded to version 0.2.0.1, which provides multiple bug fixes and enhancements. Notable changes include: Added support for accessing Berkeley Packet Filter (BPF) map fields in the bpf_map struct from programs that have BPF Type Format (BTF) struct access Added BPF ring buffer Added bpf iterator infrastructure Improved bpf_link observability ( BZ#1919345 ) perf now supports adding or removing tracepoints from a running collector without having to stop or restart perf Previously, to add or remove tracepoints from an instance of perf record , the perf process had to be stopped. As a consequence, performance data that occurred during the time the process was stopped was not collected and, therefore, lost. With this update, you can dynamically enable and disable tracepoints being collected by perf record via the control pipe interface without having to stop the perf record process. (BZ#1844111) The perf tool now supports recording and displaying absolute timestamps for trace data With this update, perf script can now record and display trace data with absolute timestamps. Note: To display trace data with absolute timestamps, the data must be recorded with the clock ID specified. To record data with absolute timestamps, specify the clock ID: To display trace data recorded with the specified clock ID, execute the following command: (BZ#1811839) dwarves rebased to version 1.19.1 The dwarves package has been upgraded to version 1.19.1, which provides multiple bug fixes and enhancements. Notably, this update introduces a new way of checking functions from the DWARF debug data with related ftrace entries to ensure a subset of ftrace functions is generated. ( BZ#1903566 ) perf now supports circular buffers that use specified events to trigger snapshots With this update, you can create custom circular buffers that write data to a perf.data file when an event you specify is detected. As a result, perf record can run continuously in the system background without generating excess overhead by continuously writing data to a perf.data file, and only recording data you are interested in. To create a custom circular buffer using the perf tool that records event specific snapshots, use the following command: (BZ#1844086) Kernel DRBG and Jitter entropy source are compliant to NIST SP 800-90A and NIST SP 800-90B Kernel Deterministic Random Bit Generator (DRBG) and Jitter entropy source are now compliant to recommendation for random number generation using DRBG (NIST SP 800-90A) and recommendation for the entropy sources used for random bit generation (NIST SP 800-90B) specifications. As a result, applications in FIPS mode can use these sources as FIPS-compliant randomness and noise sources. (BZ#1905088) kdump now supports Virtual Local Area Network tagged team network interface This update adds support to configure Virtual Local Area Network tagged team interface for kdump . As a result, this feature now enables kdump to use a Virtual Local Area Network tagged team interface to dump a vmcore file. 
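The following sketch shows how the perf absolute-timestamp enhancement described above might be used. The -k (--clockid) option of perf record and the tod output field of perf script are assumed to be available in the RHEL 8.4 perf build, and sleep 5 is only a placeholder workload.
$ perf record -k monotonic -- sleep 5
$ perf script -F+tod
If the tod field is not recognized by your perf version, check the perf-script(1) manual page for the exact field name.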
(BZ#1844941) kernel-rt source tree has been updated to RHEL 8.4 tree The kernel-rt source has been updated to use the latest Red Hat Enterprise Linux kernel source tree. The real-time patch set has also been updated to the latest upstream version, v5.10-rt7. Both of these updates provide a number of bug fixes and enhancements. (BZ#1858099, BZ#1858105) The stalld package is now added to RHEL 8.4 distribution This update adds the stalld package to RHEL 8.4.0. stalld is a daemon that monitors threads on a system running low latency applications. It checks for job threads that have been on a run-queue without being scheduled onto a CPU for a specified threshold. When it detects a stalled thread, stalld temporarily changes the scheduling policy to SCHED_DEADLINE and assigns the thread a slice of CPU time to make forward progress. When the time slice completes or the thread blocks, the thread goes back to its original scheduling policy. (BZ#1875037) Support for CPU hotplug in the hv_24x7 and hv_gpci PMUs With this update, PMU counters correctly react to the hot-plugging of a CPU. As a result, if a hv_gpci event counter is running on a CPU that gets disabled, the counting redirects to another CPU. (BZ#1844416) Metrics for POWERPC hv_24x7 nest events are now available Metrics for POWERPC hv_24x7 nest events are now available for perf . By aggregating multiple events, these metrics provide a better understanding of the values obtained from perf counters and how effectively the CPU is able to process the workload. (BZ#1780258) hwloc rebased to version 2.2.0 The hwloc package has been upgraded to version 2.2.0, which provides the following change: The hwloc functionality can report details on Nonvolatile Memory Express (NVMe) drives including total disk size and sector size. ( BZ#1841354 ) The igc driver is now fully supported The igc Intel 2.5G Ethernet Linux wired LAN driver was introduced in RHEL 8.1 as a Technology Preview. Starting with RHEL 8.4, it is fully supported on all architectures. The ethtool utility also supports igc wired LANs. (BZ#1495358) 4.9. File systems and storage RHEL installation now supports creating a swap partition of size 16 TiB Previously, when installing RHEL, the installer created a swap partition of maximum 128 GB for automatic and manual partitioning. With this update, for automatic partitioning, the installer continues to create a swap partition of maximum 128 GB, but in case of manual partitioning, you can now create a swap partition of 16 TiB. ( BZ#1656485 ) Surprise removal of NVMe devices With this enhancement, you can surprise remove NVMe devices from the Linux operating system without notifying the operating system beforehand. This will enhance the serviceability of NVMe devices because no additional steps are required to prepare the devices for orderly removal, which ensures the availability of servers by eliminating server downtime. Note the following: Surprise removal of NVMe devices requires kernel-4.18.0-193.13.2.el8_2.x86_64 version or later. Additional requirements from the hardware platform or the software running on the platform might be necessary for successful surprise removal of NVMe devices. Surprise removing an NVMe device that is critical to the system operation is not supported. For example, you cannot remove an NVMe device that contains the operating system or a swap partition. 
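To experiment with the stalld daemon described above, a minimal sketch is to install the package and enable its service. The service name stalld is an assumption based on the package name; verify it with systemctl list-unit-files if needed.
# yum install stalld
# systemctl enable --now stalld
# systemctl status stalld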
(BZ#1634655) Stratis filesystem symlink paths have changed With this enhancement, Stratis filesystem symlink paths have changed from /stratis/ <stratis-pool> / <filesystem-name> to /dev/stratis/ <stratis-pool> / <filesystem-name> . Consequently, all existing Stratis symlinks must be migrated to utilize the new symlink paths. Use the included stratis_migrate_symlinks.sh migration script or reboot your system to update the symlink paths. If you manually changed the systemd unit files or the /etc/fstab file to automatically mount Stratis filesystems, you must update them with the new symlink paths. Note If you do not update your configuration with the new Stratis symlink paths, or if you temporarily disable the automatic mounts, the boot process might not complete the time you reboot or start your system. ( BZ#1798244 ) Stratis now supports binding encrypted pools to a supplementary Clevis encryption policy With this enhancement, you can now bind encrypted Stratis pools to Network Bound Disk Encryption (NBDE) using a Tang server, or to the Trusted Platform Module (TPM) 2.0. Binding an encrypted Stratis pool to NBDE or TPM 2.0 facilitates automatic unlocking of pools. As a result, you can access your Stratis pools without having to provide the kernel keyring description after each system reboot. Note that binding a Stratis pool to a supplementary Clevis encryption policy does not remove the primary kernel keyring encryption. ( BZ#1868100 ) New mount options to control when DAX is enabled on XFS and ext4 file systems This update introduces new mount options which, when combined with the FS_XFLAG_DAX inode flag, provide finer-grained control of the Direct Access (DAX) mode for files on XFS and ext4 file systems. In prior releases, DAX was enabled for the entire file system using the dax mount option. Now, the direct access mode can be enabled on a per-file basis. The on-disk flag, FS_XFLAG_DAX , is used to selectively enable or disable DAX for a particular file or directory. The dax mount option dictates whether or not the flag is honored: -o dax=inode - follow FS_XFLAG_DAX . This is the default when no dax option is specified. -o dax=never - never enable DAX, ignore FS_XFLAG_DAX . -o dax=always - always enable DAX, ignore FS_XFLAG_DAX . -o dax - is a legacy option which is an alias for "dax=always". This may be removed in the future, so "-o dax=always" is preferred. You can set FS_XFLAG_DAX flag by using the xfs_io utility's chatter command: (BZ#1838876, BZ#1838344) SMB Direct is now supported With this update, the SMB client now supports SMB Direct. (BZ#1887940) New API for mounting filesystems has been added With this update, a new API for mounting filesystems based on an internal kernel structure called a filesystem context ( struct fs_context ) has been added into RHEL 8.4, allowing greater flexibility in communication of mount parameters between userspace, the VFS, and the file system. Along with this, there are following system calls for operating on the file system context: fsopen() - creates a blank filesystem configuration context within the kernel for the filesystem named in the fsname parameter, adds it into creation mode, and attaches it to a file descriptor, which it then returns. fsmount() - takes the file descriptor returned by fsopen() and creates a mount object for the file system root specified there. fsconfig() - supplies parameters to and issues commands against a file system configuration context as set up by the fsopen(2) or fspick(2) system calls. 
fspick() - creates a new file system configuration context within the kernel and attaches a pre-existing superblock to it so that it can be reconfigured. move_mount() - moves a mount from one location to another; it can also be used to attach an unattached mount created by fsmount() or open_tree() with the OPEN_TREE_CLONE system call. open_tree() - picks the mount object specified by the pathname and attaches it to a new file descriptor or clones it and attaches the clone to the file descriptor. Note that the old API based on the mount() system call is still supported. For additional information, see the Documentation/filesystems/mount_api.txt file in the kernel source tree. (BZ#1622041) Discrepancy in vfat file system mtime no longer occurs With this update, the discrepancy in the vfat file system mtime between in-memory and on-disk write times is no longer present. This discrepancy was caused by a difference between in-memory and on-disk mtime metadata, which no longer occurs. (BZ#1533270) RHEL 8.4 now supports close_range() system call With this update, the close_range() system call was backported to RHEL 8.4. This system call closes all file descriptors in a given range effectively, preventing timing problems which are present when closing a wide range of file descriptors sequentially if applications configure very large limits. (BZ#1900674) Support for user extended attributes through the NFSv4.2 protocol has been added This update adds NFSV4.2 client-side and server-side support for user extended attributes (RFC 8276) and newly includes the following protocol extensions: New operations: - GETXATTR - get an extended attribute of a file - SETXATTR - set an extended attribute of a file - LISTXATTR - list extended attributes of a file - REMOVEXATTR - remove an extended attribute of a file New error codes: - NFS4ERR-NOXATTR - xattr does not exist - NFS4ERR_XATTR2BIG - xattr value is too big New attribute: - xattr_support - per-fs read-only attribute determines whether xattrs are supported. When set to True , the object's file system supports extended attributes. (BZ#1888214) 4.10. High availability and clusters Noncritical resources in colocation constraints are now supported With this enhancement, you can configure a colocation constraint such that if the dependent resource of the constraint reaches its migration threshold for failure, Pacemaker will leave that resource offline and keep the primary resource on its current node rather than attempting to move both resources to another node. To support this behavior, colocation constraints now have an influence option, which can be set to true or false , and resources have a critical meta-attribute, which can also be set to true or false . The value of the critical resource meta option determines the default value of the influence option for all colocation constraints involving the resource as a dependent resource. When the influence colocation constraint option has a value of true Pacemaker will attempt to keep both the primary and dependent resource active. If the dependent resource reaches its migration threshold for failures, both resources will move to another node, if possible. When the influence colocation option has a value of false , Pacemaker will avoid moving the primary resource as a result of the status of the dependent resource. In this case, if the dependent resource reaches its migration threshold for failures, it will stop if the primary resource is active and can remain on its current node. 
By default, the value of the critical resource meta option is set to true , which in turn determines that the default value of the influence option is true . This preserves the behavior where Pacemaker attempted to keep both resources active. ( BZ#1371576 ) New number data type supported by Pacemaker rules PCS now supports a data type of number , which you can use when defining Pacemaker rules in any PCS command that accepts rules. Pacemaker rules implement number as a double-precision floating-point number and integer as a 64-bit integer. (BZ#1869399) Ability to specify a custom clone ID when creating a clone resource or promotable clone resource When you create a clone resource or a promotable clone resource, the clone resource is named resource-id -clone by default. If that ID is already in use, PCS adds the suffix - integer , starting with an integer value of 1 and incrementing by one for each additional clone. You can now override this default by specifying a name for a clone resource ID or promotable clone resource ID with the clone-id option when creating a clone resource with the pcs resource create or the pcs resource clone command. For information on creating clone resources, see Creating cluster resources that are active on multiple nodes . ( BZ#1741056 ) New command to display Corosync configuration You can now print the contents of the corosync.conf file in several output formats with the new pcs cluster config [show] command. By default, the pcs cluster config command uses the text output format, which displays the Corosync configuration in a human-readable form, with the same structure and option names as the pcs cluster setup and pcs cluster config update commands. ( BZ#1667066 ) New command to modify the Corosync configuration of an existing cluster You can now modify the parameters of the corosync.conf file with the new pcs cluster config update command. You can use this command, for example, to increase the totem token to avoid fencing during temporary system unresponsiveness. For information on modifying the corosync.conf file, see Modifying the corosync.conf file with the pcs command . ( BZ#1667061 ) Enabling and disabling Corosync traffic encryption in an existing cluster Previously, you could configure Corosync traffic encryption only when creating a new cluster. With this update: You can change the configuration of the Corosync crypto cipher and hash with the pcs cluster config update command. You can change the Corosync authkey with the pcs cluster authkey corosync command. ( BZ#1457314 ) New crypt resource agent for shared and encrypted GFS2 file systems RHEL HA now supports a new crypt resource agent, which allows you to configure a LUKS encrypted block device that can be used to provide shared and encrypted GFS2 file systems. Using the crypt resource is currently supported only with GFS2 file systems. For information on configuring an encrypted GFS2 file system, see Configuring an encrypted GFS2 file system in a cluster . (BZ#1471182) 4.11. Dynamic programming languages, web and database servers A new module: python39 RHEL 8.4 introduces Python 3.9, provided by the new module python39 and the ubi8/python-39 container image. Notable enhancements compared to Python 3.8 include: The merge ( | ) and update ( |= ) operators have been added to the dict class. Methods to remove prefixes and suffixes have been added to strings. Type hinting generics have been added to certain standard types, such as list and dict . 
The IANA Time Zone Database is now available through the new zoneinfo module. Python 3.9 and packages built for it can be installed in parallel with Python 3.8 and Python 3.6 on the same system. To install packages from the python39 module, use, for example: The python39:3.9 module stream will be enabled automatically. To run the interpreter, use, for example: See Installing and using Python for more information. Note that Red Hat will continue to provide support for Python 3.6 until the end of life of RHEL 8. Similarly to Python 3.8, Python 3.9 will have a shorter life cycle; see Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1877430) Changes in the default separator for the Python urllib parsing functions To mitigate the Web Cache Poisoning CVE-2021-23336 in the Python urllib library, the default separator for the urllib.parse.parse_qsl and urllib.parse.parse_qs functions is being changed from both ampersand ( & ) and semicolon ( ; ) to only an ampersand. This change has been implemented in Python 3.6 with the release of RHEL 8.4, and will be backported to Python 3.8 and Python 2.7 in the following minor release of RHEL 8. The change of the default separator is potentially backwards incompatible, therefore Red Hat provides a way to configure the behavior in Python packages where the default separator has been changed. In addition, the affected urllib parsing functions issue a warning if they detect that a customer's application has been affected by the change. For more information, see the Mitigation of Web Cache Poisoning in the Python urllib library (CVE-2021-23336) . Python 3.9 is unaffected and already includes the new default separator ( & ), which can be changed only by passing the separator parameter when calling the urllib.parse.parse_qsl and urllib.parse.parse_qs functions in Python code. (BZ#1935686, BZ#1928904 ) A new module stream: swig:4.0 RHEL 8.4 introduces the Simplified Wrapper and Interface Generator (SWIG) version 4.0, available as a new module stream, swig:4.0 . Notable changes over the previously released SWIG 3.0 include: The only supported Python versions are: 2.7 and 3.2 to 3.8. The Python module has been improved: the generated code has been simplified and most optimizations are now enabled by default. Support for Ruby 2.7 has been added. PHP 7 is now the only supported PHP version; support for PHP 5 has been removed. Performance has been significantly improved when running SWIG on large interface files. Support for a command-line options file (also referred to as a response file) has been added. Support for JavaScript Node.js versions 2 to 10 has been added. Support for Octave versions 4.4 to 5.1 has been added. To install the swig:4.0 module stream, use: If you want to upgrade from the swig:3.0 stream, see Switching to a later stream . For information about the length of support for the swig module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . ( BZ#1853639 ) A new module stream: subversion:1.14 RHEL 8.4 introduces a new module stream, subversion:1.14 . Subversion 1.14 is the most recent Long Term Support (LTS) release. Notable changes since Subversion 1.10 distributed in RHEL 8.0 include: Subversion 1.14 includes Python 3 bindings for automation and integration of Subversion into the customer's build and release infrastructure. A new svnadmin rev-size command enables users to determine the total size of a revision. 
A new svnadmin build-repcache command enables administrators to populate the rep-cache database with missing entries. A new experimental command has been added to provide an overview of the current working copy status. Various improvements to the svn log , svn info , and svn list commands have been implemented. For example, svn list --human-readable now uses human-readable units for file sizes. Significant improvements to svn status for large working copies have been made. Compatibility information: Subversion 1.10 clients and servers interoperate with Subversion 1.14 servers and clients. However, certain features might not be available unless both client and server are upgraded to the latest version. Repositories created under Subversion 1.10 can be successfully loaded in Subversion 1.14 . Subversion 1.14 distributed in RHEL 8 enables users to cache passwords in plain text on the client side. This behaviour is the same as Subversion 1.10 but different from the upstream release of Subversion 1.14 . The experimental Shelving feature has been significantly changed, and it is incompatible with shelves created in Subversion 1.10 . See the upstream documentation for details and upgrade instructions. The interpretation of path-based authentication configurations with both global and repository-specific rules has changed in Subversion 1.14 . See the upstream documentation for details on affected configurations. To install the subversion:1:14 module stream, use: If you want to upgrade from the subversion:1.10 stream, see Switching to a later stream . For information about the length of support for the subversion module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . ( BZ#1844947 ) A new module stream: redis:6 Redis 6 , an advanced key-value store, is now available as a new module stream, redis:6 . Notable changes over Redis 5 include: Redis now supports SSL on all channels. Redis now supports Access Control List (ACL), which defines user permissions for command calls and key pattern access. Redis now supports a new RESP3 protocol, which returns more semantical replies. Redis can now optionally use threads to handle I/O. Redis now offers server-side support for client-side caching of key values. The Redis active expire cycle has been improved to enable faster eviction of expired keys. Redis 6 is compatible with Redis 5 , with the exception of this backward incompatible change: When a set key does not exist, the SPOP <count> command no longer returns null. In Redis 6 , the command returns an empty set in this scenario, similar to a situation when it is called with a 0 argument. To install the redis:6 module stream, use: If you want to upgrade from the redis:5 stream, see Switching to a later stream . For information about the length of support for the redis module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1862063) A new module stream: postgresql:13 RHEL 8.4 introduces PostgreSQL 13 , which provides a number of new features and enhancements over version 12. Notable changes include: Performance improvements resulting from de-duplication of B-tree index entries Improved performance for queries that use aggregates or partitioned tables Improved query planning when using extended statistics Parallelized vacuuming of indexes Incremental sorting Note that support for Just-In-Time (JIT) compilation, available in upstream since PostgreSQL 11 , is not provided by the postgresql:13 module stream. See also Using PostgreSQL . 
To install the postgresql:13 stream, use: If you want to upgrade from an earlier postgresql stream within RHEL 8, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data as described in Migrating to a RHEL 8 version of PostgreSQL . For information about the length of support for the postgresql module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1855776) A new module stream: mariadb:10.5 MariaDB 10.5 is now available as a new module stream, mariadb:10.5 . Notable enhancements over the previously available version 10.3 include: MariaDB now uses the unix_socket authentication plug-in by default. The plug-in enables users to use operating system credentials when connecting to MariaDB through the local Unix socket file. MariaDB supports a new FLUSH SSL command to reload SSL certificates without a server restart. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaires. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. MariaDB supports a new INET6 data type for storing IPv6 addresses. MariaDB now uses the Perl Compatible Regular Expressions (PCRE) library version 2. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. MariaDB adds a new global variable, binlog_row_metadata , as well as system variables and status variables to control the amount of metadata logged. The default value of the eq_range_index_dive_limit variable has been changed from 0 to 200 . A new SHUTDOWN WAIT FOR ALL SLAVES server command and a new mysqladmin shutdown --wait-for-all-slaves option have been added to instruct the server to shut down only after the last binlog event has been sent to all connected replicas. In parallel replication, the slave_parallel_mode variable now defaults to optimistic . The InnoDB storage engine introduces the following changes: InnoDB now supports an instant DROP COLUMN operation and enables users to change the column order. Defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . Several InnoDB variables have been removed or deprecated. MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options. See also Using MariaDB . To install the mariadb:10.5 stream, use: If you want to upgrade from the mariadb:10.3 module stream, see Upgrading from MariaDB 10.3 to MariaDB 10.5 . For information about the length of support for the mariadb module streams, see the Red Hat Enterprise Linux 8 Application Streams Life Cycle . (BZ#1855781) MariaDB 10.5 provides the PAM plug-in version 2.0 MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plug-in. 
The PAM plug-in version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to utilize additional PAM modules. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plug-in and its related files have been moved to a new package, mariadb-pam . This package contains both PAM plug-in versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. Note that the mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plug-in available in MariaDB 10.5 , install the mariadb-pam package manually. See also known issue PAM plug-in version 1.0 does not work in MariaDB . ( BZ#1936842 ) A new package: mysql-selinux RHEL 8.4 adds a new mysql-selinux package that provides an SELinux module with rules for the MariaDB and MySQL databases. The package is installed by default with the database server. The module's priority is set to 200 . (BZ#1895021) python-PyMySQL rebased to version 0.10.1 The python-PyMySQL package, which provides the pure-Python MySQL client library, has been updated to version 0.10.1. The package is included in the python36 , python38 , and python39 modules. Notable changes include: This update adds support for the ed25519 and caching_sha2_password authentication mechanisms. The default character set in the python38 and python39 modules is utf8mb4 , which aligns with upstream. The python36 module preserves the default latin1 character set to maintain compatibility with earlier versions of this module. In the python36 module, the /usr/lib/python3.6/site-packages/pymysql/tests/ directory is no longer available. ( BZ#1820628 , BZ#1885641 ) A new package: python3-pyodbc This update adds the python3-pyodbc package to RHEL 8. The pyodbc Python module provides access to Open Database Connectivity (ODBC) databases. This module implements the Python DB API 2.0 specification and can be used with third-party ODBC drivers. For example, you can now use the Performance Co-Pilot ( pcp ) to monitor performance of the SQL Server. (BZ#1881490) A new package: micropipenv A new micropipenv package is now available. It provides a lightweight wrapper for the pip package installer to support Pipenv and Poetry lock files. Note that the micropipenv package is distributed in the AppStream repository and is provided under the Compatibility level 4. For more information, see the Red Hat Enterprise Linux 8 Application Compatibility Guide . (BZ#1849096) New packages: py3c-devel and py3c-docs RHEL 8.4 introduces new py3c-devel and py3c-docs packages, which simplify porting C extensions to Python 3. These packages include a detailed guide and a set of macros for easier porting. Note that the py3c-devel and py3c-docs packages are distributed through the unsupported CodeReady Linux Builder (CRB) repository . (BZ#1841060) Enhanced ProxyRemote directive for configuring httpd The ProxyRemote configuration directive in the Apache HTTP Server has been enhanced to optionally take user name and password credentials. These credentials are used for authenticating to the remote proxy using HTTP Basic authentication. This feature has been backported from httpd 2.5 . 
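For the MariaDB PAM plug-in packaging change described above, the following sketch installs the mariadb-pam package and loads the plug-in. The INSTALL SONAME statement is the standard MariaDB mechanism for loading a plug-in; auth_pam loads the version 2.0 plug-in, whereas auth_pam_v1 would load the legacy version 1.0 library.
# yum install mariadb-pam
# mysql -u root -p -e "INSTALL SONAME 'auth_pam';"
After the plug-in is loaded, you can create users that authenticate through PAM as described in the MariaDB documentation.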
(BZ#1869576) Non-end-entity certificates can be used with the SSLProxyMachineCertificateFile and SSLProxyMachineCertificatePath httpd directives With this update, you can use non-end-entity (non-leaf) certificates, such as a Certificate Authority (CA) or intermediate certificate, with the SSLProxyMachineCertificateFile and SSLProxyMachineCertificatePath configuration directives in the Apache HTTP Server. The Apache HTTP server now treats such certificates as trusted CAs, as if they were used with the SSLProxyMachineCertificateChainFile directive. Previously, if non-end-entity certificates were used with the SSLProxyMachineCertificateFile and SSLProxyMachineCertificatePath directives, httpd failed to start with a configuration error. (BZ#1883648) A new SecRemoteTimeout directive in the mod_security module Previously, you could not modify the default timeout for retrieving remote rules in the mod_security module for the Apache HTTP Server. With this update, you can set a custom timeout in seconds using the new SecRemoteTimeout configuration directive. When the timeout has been reached, httpd now fails with an error message Timeout was reached . Note that in this scenario, the error message also contains Syntax error even if the configuration file is syntactically valid. The httpd behavior upon timeout depends on the value of the SecRemoteRulesFailAction configuration directive (the default value is Abort ). ( BZ#1824859 ) The mod_fcgid module can now pass up to 1024 environment variables to an FCGI server process With this update, the mod_fcgid module for the Apache HTTP Server can pass up to 1024 environment variables to a FastCGI (FCGI) server process. The limit of 64 environment variables could cause applications running on the FCGI server to malfunction. ( BZ#1876525 ) perl-IO-String is now available in the AppStream repository The perl-IO-String package, which provides the Perl IO::String module, is now distributed through the supported AppStream repository. In releases of RHEL 8, the perl-IO-String package was available in the unsupported CodeReady Linux Builder repository. (BZ#1890998) A new package: quota-devel RHEL 8.4 introduces the quota-devel package, which provides header files for implementing the quota Remote Procedure Call (RPC) service. Note that the quota-devel package is distributed through the unsupported CodeReady Linux Builder (CRB) repository . ( BZ#1868671 ) 4.12. Compilers and development tools The glibc library now supports glibc-hwcaps subdirectories for loading optimized shared library implementations On certain architectures, hardware upgrades sometimes caused glibc to load libraries with baseline optimizations, rather than optimized libraries for the hardware generation. Additionally, when running on AMD CPUs, optimized libraries were not loaded at all. With this enhancement, glibc supports locating optimized library implementations in the glibc-hwcaps subdirectories. The dynamic loader checks for library files in the sub-directories based on the CPU in use and its hardware capabilities. This feature is available on following architectures: IBM Power Systems (little endian), IBM Z, 64-bit AMD and Intel. (BZ#1817513) The glibc dynamic loader now activates selected audit modules at run time Previously, the binutils link editor ld supported the --audit option to select audit modules for activation at run time, but the glibc dynamic loader ignored the request. 
With this update, the glibc dynamic loader no longer ignores the request, and loads the indicated audit modules. As a result, it is possible to activate audit modules for specific programs without writing wrapper scripts or using similar mechanisms. ( BZ#1871385 ) glibc now provides improved performance on IBM POWER9 This update introduces new implementations of the functions strlen , strcpy , stpcpy , and rawmemchr for IBM POWER9. As a result, these functions now execute faster on IBM POWER9 hardware, which leads to performance gains. ( BZ#1871387 ) Optimized performance of memcpy and memset on IBM Z With this enhancement, the core library implementations of the memcpy and memset APIs were adjusted to accelerate both small (< 64KiB) and larger data copies on IBM Z processors. As a result, applications working with in-memory data now benefit from significantly improved performance across a wide variety of workloads. ( BZ#1871395 ) GCC now supports the ARMv8.1 LSE atomic instructions With this enhancement, the GCC compiler now supports Large System Extensions (LSE), atomic instructions added with the ARMv8.1 specification. These instructions provide better performance in multi-threaded applications than the ARMv8.0 Load-Exclusive and Store-Exclusive instructions. (BZ#1821994) GCC now emits vector alignment hints for certain IBM Z systems This update enables the GCC compiler to emit vector load and store alignment hints for IBM z13 processors. To use this enhancement, the assembler must support such hints. As a result, users now benefit from improved performance of certain vector operations. (BZ#1850498) Dyninst rebased to version 10.2.1 The Dyninst binary analysis and modification tool has been updated to version 10.2.1. Notable bug fixes and enhancements include: Support for the elfutils debuginfod client library. Improved parallel binary code analysis. Improved analysis and instrumentation of large binaries. ( BZ#1892001 ) elfutils rebased to version 0.182 The elfutils package has been updated to version 0.182. Notable bug fixes and enhancements include: Recognizes the DW_CFA_AARCH64_negate_ra_state instruction. When Pointer Authentication Code (PAC) is not enabled, you can use DW_CFA_AARCH64_negate_ra_state to unwind code that is compiled for PAC on the 64-bit ARM architecture. elf_update now fixes bad sh_addralign values in sections that have set the SHF_COMPRESSED flag. debuginfod-client now supports kernel ELF images compressed with ZSTD. debuginfod has a more efficient package traversal, tolerating various errors during scanning. The grooming process is more visible and interruptible, and provides more Prometheus metrics. ( BZ#1875318 ) SystemTap rebased to version 4.4 The SystemTap instrumentation tool has been updated to version 4.4, which provides multiple bug fixes and enhancements. Notable changes include: Performance and stability improvements to user-space probing. Users can now access implicit thread local storage variables on these architectures: AMD64, Intel 64, IBM Z, the little-endian variant of IBM Power Systems. Initial support for processing of floating point values. Improved concurrency for scripts using global variables. The locks required to protect concurrent access to global variables have been optimized so that they span the smallest possible critical region. New syntax for defining aliases with both a prologue and an epilogue. New @probewrite predicate. syscall arguments are writable again.
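As a brief illustration of a script that relies on a global variable (this sketch is not part of the release note and assumes kernel debuginfo is available for the syscall tapset), the following one-liner counts system calls per executable name and prints the totals when the session ends:
stap -e 'global counts; probe syscall.* { counts[execname()]++ } probe end { foreach (name in counts) printf("%s: %d\n", name, counts[name]) }'
The optimized locking described above reduces contention on the counts array when many concurrent probes update it.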
For further information about notable changes, read the upstream release notes before updating. ( BZ#1875341 ) Valgrind now supports IBM z14 instructions With this update, the Valgrind tool suite supports instructions for the IBM z14 processor. As a result, you can now use the Valgrind tools to debug programs using the z14 vector instructions and the miscellaneous z14 instruction set. (BZ#1504123) CMake rebased to version 3.18.2 The CMake build system has been upgraded from version 3.11.4 to version 3.18.2. It is available in RHEL 8.4 as the cmake-3.18.2-8.el8 package. To use CMake on a project that requires the version 3.18.2 or less, use the command cmake_minimum_required(version x.y.z) . For further information on new features and deprecated functionalities, see the CMake Release Notes . ( BZ#1816874 ) libmpc rebased to version 1.1.0 The libmpc package has been rebased to version 1.1.0, which provides several enhancements and bug fixes over the version. For details, see GNU MPC 1.1.0 release notes . ( BZ#1835193 ) Updated GCC Toolset 10 GCC Toolset 10 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced with RHEL 8.4 include: The GCC compiler has been updated to the upstream version, which provides multiple bug fixes. elfutils has been updated to version 0.182. Dyninst has been updated to version 10.2.1. SystemTap has been updated to version 4.4. The following tools and versions are provided by GCC Toolset 10: Tool Version GCC 10.2.1 GDB 9.2 Valgrind 3.16.0 SystemTap 4.4 Dyninst 10.2.1 binutils 2.35 elfutils 0.182 dwz 0.12 make 4.2.1 strace 5.7 ltrace 0.7.91 annobin 9.29 To install GCC Toolset 10, run the following command as root: To run a tool from GCC Toolset 10: To run a shell session where tool versions from GCC Toolset 10 override system versions of these tools: For more information, see Using GCC Toolset . The GCC Toolset 10 components are available in the two container images: rhel8/gcc-toolset-10-toolchain , which includes the GCC compiler, the GDB debugger, and the make automation tool. rhel8/gcc-toolset-10-perftools , which includes the performance monitoring tools, such as SystemTap and Valgrind. To pull a container image, run the following command as root: Note that only the GCC Toolset 10 container images are now supported. Container images of earlier GCC Toolset versions are deprecated. For details regarding the container images, see Using the GCC Toolset container images . (BZ#1918055) GCC Toolset 10: GCC now supports bfloat16 In GCC Toolset 10, the GCC compiler now supports the bfloat16 extension through ACLE Intrinsics. This enhancement provides high-performance computing. (BZ#1656139) GCC Toolset 10: GCC now supports ENQCMD and ENQCMDS instructions on Intel Sapphire Rapids processors In GCC Toolset 10, the GNU Compiler Collection (GCC) now supports the ENQCMD and ENQCMDS instructions, which you can use to submit work descriptors to devices automatically. To apply this enhancement, run GCC with the -menqcmd option. (BZ#1891998) GCC Toolset 10: Dyninst rebased to version 10.2.1 In GCC Toolset 10, the Dyninst binary analysis and modification tool has been updated to version 10.2.1. Notable bug fixes and enhancements include: Support for the elfutils debuginfod client library. Improved parallel binary code analysis. Improved analysis and instrumentation of large binaries. 
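As a usage note for the debuginfod client support mentioned above, tools built against the client library typically locate debuginfod servers through the DEBUGINFOD_URLS environment variable; the server URL below is a placeholder and is not shipped with this release:
export DEBUGINFOD_URLS="https://debuginfod.example.com/"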
( BZ#1892007 ) GCC Toolset 10: elfutils rebased to version 0.182 In GCC Toolset 10, the elfutils package has been updated to version 0.182. Notable bug fixes and enhancements include: Recognizes the DW_CFA_AARCH64_negate_ra_state instruction. When Pointer Authentication Code (PAC) is not enabled, you can use DW_CFA_AARCH64_negate_ra_state to unwind code that is compiled for PAC on the 64-bit ARM architecture. elf_update now fixes bad sh_addralign values in sections that have set the SHF_COMPRESSED flag. debuginfod-client now supports kernel ELF images compressed with ZSTD. debuginfod has a more efficient package traversal, tolerating various errors during scanning. The grooming process is more visible and interruptible, and provides more Prometheus metrics. ( BZ#1879758 ) Go Toolset rebased to version 1.15.7 Go Toolset has been upgraded to 1.15.7. Notable enhancements include: Linking is now faster and requires less memory due to the newly implemented object file format and increased concurrency of internal phases. With this enhancement, internal linking is now the default. To disable this setting, use the compiler flag -ldflags=-linkmode=external . Allocating small objects has been improved for high core counts, including worst-case latency. Treating the CommonName field on X.509 certificates as a host name when no Subject Alternative Names are specified is now disabled by default. To enable it, add the value x509ignoreCN=0 to the GODEBUG environment variable. GOPROXY now supports skipping proxies that return errors. Go now includes the new package time/tzdata . It enables you to embed the timezone database into a program even if the timezone database is not available on your local system. For more information on Go Toolset, go to Using Go Toolset . (BZ#1870531) Rust Toolset rebased to version 1.49.0 Rust Toolset has been updated to version 1.49.0. Notable changes include: You can now use the path of a rustdoc page item to link to it in rustdoc. The rust test framework now hides thread output. Output of failed tests still show in the terminal. You can now use [T; N]: TryFrom<Vec<T>> to turn a vector into an array of any length. You can now use slice::select_nth_unstable to perform ordered partitioning. This function is also available with the following variants: slice::select_nth_unstable_by provides a comparator function. slice::select_nth_unstable_by_key provides a key extraction function. You can now use ManuallyDrop as the type of a union field. It is also possible to use impl Drop for Union to add the Drop trait to existing unions. This makes it possible to define unions where certain fields need to be dropped manually. Container images for Rust Toolset have been deprecated and Rust Toolset has been added to the Universal Base Images (UBI) repositories. For further information, see Using Rust Toolset . (BZ#1896712) LLVM Toolset rebased to version 11.0.0 LLVM Toolset has been upgraded to version 11.0.0. Notable changes include: Support for the -fstack-clash-protection command-line option has been added to the AMD and Intel 64-bit architectures, IBM Power Systems, Little Endian, and IBM Z. This new compiler flag protects from stack-clash attacks by automatically checking each stack page. The new compiler flag ffp-exception-behavior={ignore,maytrap,strict} enables the specification of floating-point exception behavior. The default setting is ignore . The new compiler flag ffp-model={precise,strict,fast} allows the simplification of single purpose floating-point options. 
The default setting is precise . The new compiler flag -fno-common is now enabled by default. With this enhancement, code written in C using tentative variable definitions in multiple translation units now triggers multiple-definition linker errors. To disable this setting, use the -fcommon flag. Container images for LLVM Toolset have been deprecated and LLVM Toolset has been added to the Universal Base Images (UBI) repositories. For more information, see Using LLVM Toolset . (BZ#1892716) pcp rebased to version 5.2.5 The pcp package has been upgraded to version 5.2.5. Notable changes include: SQL Server metrics support via a secure connection. eBPF/BCC netproc module with per-process network metrics. pmdaperfevent(1) support for the hv_24x7 core-level and hv_gpci event metrics. New Linux process accounting metrics, Linux ZFS metrics, Linux XFS metric, Linux kernel socket metrics, Linux multipath TCP metrics, Linux memory and ZRAM metrics, and S.M.A.R.T. metric support for NVM Express disks. New pcp-htop(1) utility to visualize the system and process metrics. New pmrepconf(1) utility to generate the pmrep/pcp2xxx configurations. New pmiectl(1) utility for controlling the pmie services. New pmlogctl(1) utility for controlling the pmlogger services. New pmlogpaste(1) utility for writing log string metrics. New pcp-atop(1) utility to process accounting statistics and per-process network statistics reporting. New pmseries(1) utility to query functions, language extensions, and REST API. New pmie(1) rules for detecting OOM kills and socket connection saturation. Bug fixes in the pcp-atopsar(1) , pcp-free(1) , pcp-dstat(1) , pmlogger(1) , and pmchart(1) utilities. REST API and C API support for per-context derived metrics. Improved OpenMetrics metric metadata (units, semantics). Rearranged installed /var file system layouts extensively. ( BZ#1854035 ) Accessing remote hosts through a central pmproxy for the Vector data source in grafana-pcp In some environments, the network policy does not allow connections from the dashboard viewer's browser to the monitored hosts directly. This update makes it possible to customize the hostspec in order to connect to a central pmproxy , which forwards the requests to the individual hosts. ( BZ#1845592 ) grafana rebased to version 7.3.6 The grafana package has been upgraded to version 7.3.6. Notable changes include: New panel editor and new data transformations feature Improved time zone support Default provisioning path now changed from the /usr/share/grafana/conf/provisioning to the /etc/grafana/provisioning directory. You can configure this setting in the /etc/grafana/grafana.ini configuration file. For more information, see What's New in Grafana v7.0 , What's New in Grafana v7.1 , What's New in Grafana v7.2 , and What's New in Grafana v7.3 . ( BZ#1850471 ) grafana-pcp rebased to version 3.0.2 The grafana-pcp package has been upgraded to version 3.0.2. Notable changes include: Redis: Supports creating an alert in Grafana. Using the label_values(metric, label) in a Grafana variable query is deprecated due to performance reasons. The label_values(label) query is still supported. Vector: Supports derived metrics, which allows the usage of arithmetic operators and statistical functions inside a query. For more information, see the pmRegisterDerived(3) man page. Configurable hostspec, where you can access remote Performance Metrics Collector Daemon (PMCDs) through a central pmproxy . Automatically configures the unit of the panel. 
Dashboards: Detects potential performance issues and shows possible solutions with the checklist dashboards, using the Utilization Saturation and Errors (USE) method. New MS SQL server dashboard, eBPF/BCC dashboard, and container overview dashboard with the CGroups v2 . All dashboards are now located in the Dashboards tab in the Datasource settings pages and are not imported automatically. Upgrade notes: Update the Grafana configuration file: Edit the /etc/grafana/grafana.ini Grafana configuration file and make sure that the following option is set: Restart the Grafana server: ( BZ#1854093 ) Active Directory authentication for accessing SQL Server metrics in PCP With this update, a system administrator can configure pmdamssql(1) to connect securely to the SQL Server metrics using Active Directory (AD) authentication. ( BZ#1847808 ) grafana-container rebased to version 7.3.6 The rhel8/grafana container image provides Grafana. Grafana is an open source utility with metrics dashboard, and graphic editor for Graphite, Elasticsearch, OpenTSDB, Prometheus, InfluxDB, and Performance Co-Pilot (PCP). The grafana-container package has been upgraded to version 7.3.6. Notable changes include: The grafana package is now updated to version 7.3.6. The grafana-pcp package is now updated to version 3.0.2. The rebase updates the rhel8/grafana image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1916154 ) pcp-container rebased to version 5.2.5 The rhel8/pcp container image provides Performance Co-Pilot, which is a system performance analysis toolkit. The pcp-container package has been upgraded to version 5.2.5. Notable changes include: The pcp package is now updated to version 5.2.5. Introduced a new PCP_SERVICES environment variable, which specifies a comma-separated list of PCP services to start inside the container. The rebase updates the rhel8/pcp image in the Red Hat Container Registry. To pull this container image, execute the following command: ( BZ#1916155 ) JDK Mission Control rebased to version 8.0.0 The JDK Mission Control (JMC) profiler for HotSpot JVMs, provided by the jmc:rhel8 module stream, has been upgraded to version 8.0.0. Notable enhancements include: The Treemap viewer has been added to the JOverflow plug-in for visualizing memory usage by classes. The Threads graph has been enhanced with more filtering and zoom options. JDK Mission Control now provides support for opening JDK Flight Recorder recordings compressed with the LZ4 algorithm. New columns have been added to the Memory and TLAB views to help you identify areas of allocation pressure. Graph view has been added to improve visualization of stack traces. The Percentage column has been added to histogram tables. JMC in RHEL 8 requires JDK version 8 or later to run. Target Java applications must run with at least OpenJDK version 8 so that JMC can access JDK Flight Recorder features. The jmc:rhel8 module stream has two profiles: The common profile, which installs the entire JMC application The core profile, which installs only the core Java libraries ( jmc-core ) To install the common profile of the jmc:rhel8 module stream, use: Change the profile name to core to install only the jmc-core package. (BZ#1919283) 4.13. Identity Management Making Identity Management more inclusive Red Hat is committed to using conscious language. 
In Identity Management, planned terminology replacements include: block list replaces blacklist allow list replaces whitelist secondary replaces slave The word master is going to be replaced with more precise language, depending on the context: IdM server replaces IdM master CA renewal server replaces CA renewal master CRL publisher server replaces CRL master multi-supplier replaces multi-master (JIRA:RHELPLAN-73418) The dsidm utility supports renaming and moving entries With this enhancement, you can use the dsidm utility to rename and move users, groups, POSIX groups, roles, and organizational units (OU) in Directory Server. For further details and examples, see the Renaming Users, Groups, POSIX Groups, and OUs section in the Directory Server Administration Guide. ( BZ#1859218 ) Deleting Sub-CAs in IdM With this enhancement, if you run the ipa ca-del command and have not disabled the Sub-CA, an error indicates the Sub-CA cannot be deleted and it must be disabled. First run the ipa ca-disable command to disable the Sub-CA and then delete it using the ipa ca-del command. Note that you cannot disable or delete the IdM CA. (JIRA:RHELPLAN-63081) IdM now supports new Ansible management role and modules RHEL 8.4 provides Ansible modules for automated management of role-based access control (RBAC) in Identity Management (IdM), an Ansible role for backing up and restoring IdM servers, and an Ansible module for location management: You can use the ipapermission module to create, modify, and delete permissions and permission members in IdM RBAC. You can use the ipaprivilege module to create, modify, and delete privileges and privilege members in IdM RBAC. You can use the iparole module to create, modify, and delete roles and role members in IdM RBAC. You can use the ipadelegation module to delegate permissions over users in IdM RBAC. You can use the ipaselfservice module to create, modify, and delete self-service access rules in IdM. You can use the ipabackup role to create, copy, and remove IdM server backups and restore an IdM server either locally or from the control node. You can use the ipalocation module to ensure the presence or absence of the physical locations of hosts, such as their data center racks. (JIRA:RHELPLAN-72660) IdM in FIPS mode now supports a cross-forest trust with AD With this enhancement, administrators can establish a cross-forest trust between an IdM domain with FIPS mode enabled and an Active Directory (AD) domain. Note that you cannot establish a trust using a shared secret while FIPS mode is enabled in IdM, see FIPS compliance . (JIRA:RHELPLAN-58629) AD users can now log in to IdM with UPN suffixes subordinate to known UPN suffixes Previously, Active Directory (AD) users could not log into Identity Management (IdM) with a Universal Principal Name (UPN) (for example, sub1.ad-example.com ) that is a subdomain of a known UPN suffix (for example, ad-example.com ) because internal Samba processes filtered subdomains as duplicates of any Top Level Names (TLNs). This update validates UPNs by testing if they are subordinate to the known UPN suffixes. As a result, users can now log in using subordinate UPN suffixes in the described scenario. ( BZ#1891056 ) IdM now supports new password policy options With this update, Identity Management (IdM) supports additional libpwquality library options: --maxrepeat Specifies the maximum number of the same character in sequence. --maxsequence Specifies the maximum length of monotonic character sequences ( abcd ). 
--dictcheck Checks if the password is a dictionary word. --usercheck Checks if the password contains the username. If any of the new password policy options are set, then the minimum length of passwords is 6 characters regardless of the value of the --minlength option. The new password policy settings are applied only to new passwords. In a mixed environment with RHEL 7 and RHEL 8 servers, the new password policy settings are enforced only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator will not be applied. To ensure consistent behavior, upgrade or update all servers to RHEL 8.4 and later. ( BZ#1340463 ) Improved Active Directory site discovery process The SSSD service now discovers Active Directory sites in parallel over connection-less LDAP (CLDAP) to multiple domain controllers to speed up site discovery in situations where some domain controllers are unreachable. Previously, site discovery was performed sequentially and, in situations where domain controllers were unreachable, a timeout eventually occurred and SSSD went offline. ( BZ#1819012 ) The default value of nsslapd-nagle has been turned off to increase the throughput Previously, the nsslapd-nagle parameter in the cn=config entry was enabled by default. As a consequence, Directory Server performed a high number of setsocketopt system calls which slowed down the server. This update changes the default value of nsslapd-nagle to off . As a result, Directory Server performs a lower number of setsocketopt system calls and can handle a higher number of operations per second. (BZ#1996076) Enabling or disabling SSSD domains within the [domain] section of the sssd.conf file With this update, you can now enable or disable an SSSD domain by modifying its respective [domain] section in the sssd.conf file. Previously, if your SSSD configuration contained a standalone domain, you still had to modify the domains option in the [sssd] section of the sssd.conf file. This update allows you to set the enabled= option in the domain configuration to true or false. Setting the enabled option to true enables a domain, even if it is not listed under the domains option in the [sssd] section of the sssd.conf file. Setting the enabled option to false disables a domain, even if it is listed under the domains option in the [sssd] section of the sssd.conf file. If the enabled option is not set, the configuration in the domains option in the [sssd] section of the sssd.conf is used. ( BZ#1884196 ) Added an option to manually control the maximum offline timeout The offline_timeout period determines the time incrementation between attempts by SSSD to go back online. Previously, the maximum possible value for this interval was hardcoded to 3600 seconds, which was adequate for general usage but resulted in issues in fast or slow changing environments. This update adds the offline_timeout_max option to manually control the maximum length of each interval, allowing you more flexibility to track the server behavior in SSSD. Note that you should set this value in correlation to the offline_timeout parameter value. A value of 0 disables the incrementing behavior. 
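A minimal sketch of the two related settings follows; the values are illustrative only, and both options belong in the same section of sssd.conf where offline_timeout is configured for your deployment (see the sssd.conf(5) man page):
offline_timeout = 60
offline_timeout_max = 1800
With values like these, SSSD starts with roughly 60-second retry intervals and never waits longer than 1800 seconds between attempts to go back online.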
( BZ#1884213 ) Support for exclude_users and exclude_groups with scope=all in SSSD session recording configuration Red Hat Enterprise 8.4 now provides new SSSD options for defining session recording for large lists of groups or users: exclude_users A comma-separated list of users to be excluded from recording, only applicable with the scope=all configuration option. exclude_groups A comma-separated list of groups, members of which should be excluded from recording. Only applicable with the scope=all configuration option. For more information, refer to the sssd-session-recording man page. ( BZ#1784459 ) samba rebased to version 4.13.2 The samba packages have been upgraded to upstream version 4.13.2, which provides a number of bug fixes and enhancements over the version: To avoid a security issue that allows unauthenticated users to take over a domain using the netlogon protocol, ensure that your Samba servers use the default value ( yes ) of the server schannel parameter. To verify, use the testparm -v | grep 'server schannel' command. For further details, see CVE-2020-1472 . The Samba "wide links" feature has been converted to a VFS module . Running Samba as a PDC or BDC is deprecated . You can now use Samba on RHEL with FIPS mode enabled. Due to the restrictions of the FIPS mode: You cannot use NT LAN Manager (NTLM) authentication because the RC4 cipher is blocked. By default in FIPS mode, Samba client utilities use Kerberos authentication with AES ciphers. You can use Samba as a domain member only in Active Directory (AD) or Red Hat Identity Management (IdM) environments with Kerberos authentication that uses AES ciphers. Note that Red Hat continues supporting the primary domain controller (PDC) functionality IdM uses in the background. The following parameters for less-secure authentication methods, which are only usable over the server message block version 1 (SMB1) protocol, are now deprecated: client plaintext auth client NTLMv2 auth client lanman auth client use spnego An issue with the GlusterFS write-behind performance translator, when used with Samba, has been fixed to avoid data corruption. The minimum runtime support is now Python 3.6. The deprecated ldap ssl ads parameter has been removed. Samba automatically updates its tdb database files when the smbd , nmbd , or winbind service starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating. ( BZ#1878109 ) New GSSAPI PAM module for passwordless sudo authentication with SSSD With the new pam_sss_gss.so Pluggable Authentication Module (PAM), you can configure the System Security Services Daemon (SSSD) to authenticate users to PAM-aware services with the Generic Security Service Application Programming Interface (GSSAPI). For example, you can use this module for passwordless sudo authentication with a Kerberos ticket. For additional security in an IdM environment, you can configure SSSD to grant access only to users with specific authentication indicators in their tickets, such as users that have authenticated with a smart card or a one-time password. For additional information, see Granting sudo access to an IdM user on an IdM client . ( BZ#1893698 ) Directory Server rebased to version 1.4.3.16 The 389-ds-base packages have been upgraded to upstream version 1.4.3.16, which provides a number of bug fixes and enhancements over the version. 
For a complete list of notable changes, read the upstream release notes before updating: https://www.port389.org/docs/389ds/releases/release-1-4-3-16.html https://www.port389.org/docs/389ds/releases/release-1-4-3-15.html https://www.port389.org/docs/389ds/releases/release-1-4-3-14.html https://www.port389.org/docs/389ds/releases/release-1-4-3-13.html https://www.port389.org/docs/389ds/releases/release-1-4-3-12.html https://www.port389.org/docs/389ds/releases/release-1-4-3-11.html https://www.port389.org/docs/389ds/releases/release-1-4-3-10.html https://www.port389.org/docs/389ds/releases/release-1-4-3-9.html ( BZ#1862529 ) Directory Server now logs the work and operation time in RESULT entries With this update, Directory Server now logs two additional time values in RESULT entries in the /var/log/dirsrv/slapd-<instance_name>/access file: The wtime value indicates how long it took for an operation to move from the work queue to a worker thread. The optime value shows the time the actual operation took to be completed once a worker thread started the operation. The new values provide additional information about how the Directory Server handles load and processes operations. For further details, see the Access Log Reference section in the Red Hat Directory Server Configuration, Command, and File Reference. ( BZ#1850275 ) Directory Server can now reject internal unindexed searches This enhancement adds the nsslapd-require-internalop-index parameter to the cn= <database_name> ,cn=ldbm database,cn=plugins,cn=config entry to reject internal unindexed searches. When a plug-in modifies data, it has a write lock on the database. On large databases, if a plug-in then executes an unindexed search, the plug-in sometimes uses all database locks, which corrupts the database or causes the server to become unresponsive. To avoid this problem, you can now reject internal unindexed searches by enabling the nsslapd-require-internalop-index parameter. ( BZ#1851975 ) 4.14. Desktop You can configure the unresponsive application timeout in GNOME GNOME periodically sends a signal to every application to detect if the application is unresponsive. When GNOME detects an unresponsive application, it displays a dialog over the application window that asks if you want to stop the application or wait. Certain applications cannot respond to the signal in time. As a consequence, GNOME displays the dialog even when the application is working properly. With this update, you can configure the time between the signals. The setting is stored in the org.gnome.mutter.check-alive-timeout GSettings key. To completely disable the unresponsive application detection, set the key to 0. For details on configuring a GSettings key, see Working with GSettings keys on command line . (BZ#1886034) 4.15. Graphics infrastructures Intel Tiger Lake GPUs are now supported This release adds support for the Intel Tiger Lake CPU microarchitecture with integrated graphics. 
This includes Intel UHD Graphics and Intel Xe integrated GPUs found with the following CPU models: Intel Core i7-1160G7 Intel Core i7-1185G7 Intel Core i7-1165G7 Intel Core i7-1165G7 Intel Core i7-1185G7E Intel Core i7-1185GRE Intel Core i7-11375H Intel Core i7-11370H Intel Core i7-1180G7 Intel Core i5-1130G7 Intel Core i5-1135G7 Intel Core i5-1135G7 Intel Core i5-1145G7E Intel Core i5-1145GRE Intel Core i5-11300H Intel Core i5-1145G7 Intel Core i5-1140G7 Intel Core i3-1115G4 Intel Core i3-1115G4 Intel Core i3-1110G4 Intel Core i3-1115GRE Intel Core i3-1115G4E Intel Core i3-1125G4 Intel Core i3-1125G4 Intel Core i3-1120G4 Intel Pentium Gold 7505 Intel Celeron 6305 Intel Celeron 6305E You no longer have to set the i915.alpha_support=1 or i915.force_probe=* kernel option to enable Tiger Lake GPU support. (BZ#1882620) Intel GPUs that use the 11th generation Core microprocessors are now supported This release adds support for the 11th generation Core CPU architecture (formerly known as Rocket Lake ) with Xe gen 12 integrated graphics, which is found in the following CPU models: Intel Core i9-11900KF Intel Core i9-11900K Intel Core i9-11900 Intel Core i9-11900F Intel Core i9-11900T Intel Core i7-11700K Intel Core i7-11700KF Intel Core i7-11700T Intel Core i7-11700 Intel Core i7-11700F Intel Core i5-11500T Intel Core i5-11600 Intel Core i5-11600K Intel Core i5-11600KF Intel Core i5-11500 Intel Core i5-11600T Intel Core i5-11400 Intel Core i5-11400F Intel Core i5-11400T (BZ#1784246, BZ#1784247, BZ#1937558) Nvidia Ampere is now supported This release adds support for the Nvidia Ampere GPUs that use the GA102 or GA104 chipset. That includes the following GPU models: GeForce RTX 3060 Ti GeForce RTX 3070 GeForce RTX 3080 GeForce RTX 3090 RTX A4000 RTX A5000 RTX A6000 Nvidia A40 Note that the nouveau graphics driver does not yet support 3D acceleration with the Nvidia Ampere family. (BZ#1916583) Various updated graphics drivers The following graphics drivers have been updated to the latest upstream version: The Matrox mgag200 driver The Aspeed ast driver (JIRA:RHELPLAN-72994, BZ#1854354, BZ#1854367) 4.16. The web console Software Updates page checks for required restarts With this update, the Software Updates page in the RHEL web console checks if it is sufficient to only restart some services or running processes for updates to become effective after installation. In these cases this avoids having to reboot the machine. (JIRA:RHELPLAN-59941) Graphical performance analysis in the web console With this update the system graphs page has been replaced with a new dedicated page for analyzing the performance of a machine. To view the performance metrics, click View details and history from the Overview page. It shows current metrics and historical events based on the Utilization Saturation, and Errors (USE) method. (JIRA:RHELPLAN-59938) Web console assists with SSH key setup Previously, the web console allowed logging into remote hosts with your initial login password when Reuse my password for remote connections was selected during login. This option has been removed, and instead of that the web console now helps with setting up SSH keys for users that want automatic and password-less login to remote hosts. Check Managing remote systems in the web console for more details. (JIRA:RHELPLAN-59950) 4.17. 
Red Hat Enterprise Linux system roles The RELP secure transport support added to the Logging role configuration Reliable Event Logging Protocol, RELP, is a secure, reliable protocol to forward and receive log messages among rsyslog servers. With this enhancement, administrators can now benefit from the RELP, which is a useful protocol with high demands from rsyslog users, as rsyslog servers are capable of forwarding and receiving log messages over the RELP protocol. ( BZ#1889484 ) SSH Client RHEL system role is now supported Previously, there was no vendor-supported automation tooling to configure RHEL SSH in a consistent and stable manner for servers and clients. With this enhancement, you can use the RHEL system roles to configure SSH clients in a systematic and unified way, independently of the operating system version. ( BZ#1893712 ) An alternative to the traditional RHEL system roles format: Ansible Collection RHEL 8.4 introduces RHEL system roles in the Collection format, available as an option to the traditional RHEL system roles format. This update introduces the concept of a fully qualified collection name (FQCN), that consists of a namespace and the collection name. For example, the Kernel role fully qualified name is: redhat.rhel_system_roles.kernel_settings The combination of a namespace and a collection name guarantees that the objects are unique. The combination of a namespace and a collection name ensures that the objects are shared across the Collections and namespaces without any conflicts. Install the Collection using an RPM package. Ensure that you have the python3-jmespath installed on the host on which you execute the playbook: The RPM package includes the roles in both the legacy Ansible Roles format as well as the new Ansible Collection format. For example, to use the network role, perform the following steps: Legacy format: Collection format: If you are using Automation Hub and want to install the system roles Collection hosted in Automation Hub, enter the following command: Then you can use the roles in the Collection format, as previously described. This requires configuring your system with the ansible-galaxy command to use Automation Hub instead of Ansible Galaxy. See How to configure the ansible-galaxy client to use Automation Hub instead of Ansible Galaxy for more details. ( BZ#1893906 ) Metrics role supports configuration and enablement of metrics collection for SQL server via PCP The metrics RHEL system role now provides the ability to connect SQL Server, mssql with Performance Co-Pilot, pcp . SQL Server is a general purpose relational database from Microsoft. As it runs, SQL Server updates internal statistics about the operations it is performing. These statistics can be accessed using SQL queries but it is important for system and database administrators undertaking performance analysis tasks to be able to record, report, visualize these metrics. With this enhancement, users can use the metrics RHEL system role to automate connecting SQL server, mssql , with Performance Co-Pilot, pcp , which provides recording, reporting, and visualization functionality for mssql metrics. ( BZ#1893908 ) exporting-metric-data-to-elasticsearch functionality available in the Metrics RHEL system role Elasticsearch is a popular, powerful and scalable search engine. 
With this enhancement, by exporting metric values from the Metrics RHEL system role to the Elasticsearch, users are able to access metrics via Elasticsearch interfaces, including via graphical interfaces, REST APIs, between others. As a result, users are able to use these Elasticsearch interfaces to help diagnose performance problems and assist in other performance related tasks like capacity planning, benchmarking and so on. ( BZ#1895188 ) Support for SSHD RHEL system role Previously, there was no vendor-supported automation tooling to configure SSH RHEL system roles in a consistent and stable manner for servers and clients. With this enhancement, you can use the RHEL system roles to configure sshd servers in a systematic and unified way regardless of operating system version. ( BZ#1893696 ) Crypto Policies RHEL system role is now supported With this enhancement, RHEL 8 introduces a new feature for system-wide cryptographic policy management. By using RHEL system roles, you now can consistently and easily configure cryptographic policies on any number of RHEL 8 systems. ( BZ#1893699 ) The Logging RHEL system role now supports rsyslog behavior With this enhancement, rsyslog receives the message from Red Hat Virtualization and forwards the message to the elasticsearch . ( BZ#1889893 ) The networking RHEL system role now supports the ethtool settings With this enhancement, you can use the networking RHEL system role to configure ethtool coalesce settings of a NetworkManager connection. When using the interrupt coalescing procedure, the system collects network packets and generates a single interrupt for multiple packets. As a result, this increases the amount of data sent to the kernel with one hardware interrupt, which reduces the interrupt load, and maximizes the throughput. ( BZ#1893961 ) 4.18. Virtualization IBM Z virtual machines can now run up to 248 CPUs Previously, the number of CPUs that you could use in an IBM Z (s390x) virtual machine (VM), with DIAG318 enabled, was limited to 240. Now, using the Extended-Length SCCB, IBM Z VMs can run up to 248 CPUs. (JIRA:RHELPLAN-44450) HMAT is now supported on RHEL KVM With this update, ACPI Heterogeneous Memory Attribute Table (HMAT) is now supported on RHEL KVM. The ACPI HMAT optimizes memory by providing information about memory attributes, such as memory side cache attributes as well as bandwidth and latency details related to the System Physical Address (SPA) Memory Ranges. (JIRA:RHELPLAN-37817) Virtual machines can now use features of Intel Atom P5000 Processors The Snowridge CPU model name is now available for virtual machines (VMs). On hosts with Intel Atom P5000 processors, using Snowridge as the CPU type in the XML configuration of the VM exposes new features of these processors to the VM. (JIRA:RHELPLAN-37579) virtio-gpu devices now work better on virtual machines with Windows 10 and later This update extends the virtio-win drivers to also provide custom drivers for virtio-gpu devices on selected Windows platforms. As a result, the virtio-gpu devices now have improved performance on virtual machines that use Windows 10 or later as their guest systems. In addition, the devices will also benefit from future enhancements to virtio-win . ( BZ#1861229 ) Virtualization support for 3rd generation AMD EPYC processors With this update, virtualization on RHEL 8 adds support for the 3rd generation AMD EPYC processors, also known as EPYC Milan. 
As a result, virtual machines hosted on RHEL 8 can now use the EPYC-Milan CPU model and utilise new features that the processors provide. (BZ#1790620) 4.19. RHEL in cloud environments Automatic registration for gold images for AWS With this update, gold images of RHEL 8.4 and later for Amazon Web Services and Microsoft Azure can be configured by the user to automatically register to Red Hat Subscription Management (RHSM) and Red Hat Insights. This makes it faster and easier to configure a large number of virtual machines created from a gold image. However, if you require consuming repositories provided by RHSM, ensure that the manage_repos option in /etc/rhsm/rhsm.conf is set to 1 . For more information, please refer to Red Hat KnowledgeBase . ( BZ#1905398 , BZ#1932804 ) cloud-init is now supported on Power Systems Virtual Server in IBM Cloud With this update, the cloud-init utility can be used to configure RHEL 8 virtual machines hosted on IBM Power Systems hosts and running in the IBM Cloud Virtual Server service. ( BZ#1886430 ) 4.20. Supportability sos rebased to version 4.0 The sos package has been upgraded to version 4.0. This major version release includes a number of new features and changes. Major changes include: A new sos binary has replaced the former sosreport binary as the main entry point for the utility. sos report is now used to generate sosreport tarballs. The sosreport binary is maintained as a redirection point and now invokes sos report . The /etc/sos.conf file has been moved to /etc/sos/sos.conf , and its layout has changed as follows: The [general] section has been renamed to [global] , and may be used to specify options that are available to all sos commands and sub-commands. The [tunables] section has been renamed to [plugin_options] . Each sos component, report , collect , and clean , has its own dedicated section. For example, sos report loads options from global and from report . sos is now a Python3-only utility. Python2 is no longer supported in any capacity. sos collect sos collect formally brings the sos-collector utility into the main sos project, and is used to collect sosreports from multiple nodes simultaneously. The sos-collector binary is maintained as a redirection point and invokes sos collect . The standalone sos-collector project will no longer be independently developed. Enhancements for sos collect include: sos collect is now supported on all distributions that sos report supports, that is any distribution with a Policy defined. The --insecure-sudo option has been renamed to --nopasswd-sudo . The --threads option, used to connect simultaneously to the number of nodes, has been renamed to --jobs sos clean sos clean formally brings the functionality of the soscleaner utility into the main sos project. This subcommand performs further data obfuscation on reports, such as cleaning IP addresses, domain names, and user-provided keywords. Note: When the --clean option is used with the sos report or sos collect command, sos clean is applied on a report being generated. Thus, it is not necessary to generate a report and only after then apply the cleaner function on it. Key enhancements for sos clean include: Support for IPv4 address obfuscation. Note that this will attempt to preserve topological relationships between discovered addresses. Support for host name and domain name obfuscation. Support for user-provided keyword obfuscations. The --clean or --mask flag used with the sos report command obfuscates a report being generated. 
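For example, a new report can be generated and obfuscated in a single step; apart from --clean, all other options are omitted here for brevity:
# sos report --clean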
Alternatively, the following command obfuscates an already existing report: Using the former results in a single obfuscated report archive, while the latter results in two; an obfuscated archive and the un-obfuscated original. For full information on the changes contained in this release, see sos-4.0 . (BZ#1966838) 4.21. Containers Podman now supports volume plugins written for Docker Podman now has support for Docker volume plugins. These volume plugins or drivers, written by vendors and community members, can be used by Podman to create and manage container volumes. The podman volume create command now supports creation of the volume using a volume plugin with the given name. The volume plugins must be defined in the [engine.volume_plugins] section of the container.conf configuration file. Example: where testvol is the name of the plugin and /run/docker/plugins/testvol.sock is the path to the plugin socket. You can use the podman volume create --driver testvol to create a volume using a testvol plugin. (BZ#1734854) The ubi-micro container image is now available The registry.redhat.io/ubi8/ubi-micro container image is the smallest base image that uses the package manager on the underlying host to install packages, typically using Buildah or multi-stage builds with Podman. Excluding package managers and all of its dependencies increases the level of security of the image. (JIRA:RHELPLAN-56664) Support to auto-update container images is available With this enhancement, users can use the podman auto-update command to auto-update containers according to their auto-update policy. The containers have to be labeled with a specified "io.containers.autoupdate=image" label to check if the image has been updated. If it has, Podman pulls the new image and restarts the systemd unit executing the container. The podman auto-update command relies on systemd and requires a fully-specified image name to create a container. (JIRA:RHELPLAN-56661) Podman now supports secure short names Short-name aliases for images can now be configured in the registries.conf file in the [aliases] table. The short-names modes are: Enforcing: If no matching alias is found during the image pull, Podman prompts the user to choose one of the unqualified-search registries. If the selected image is pulled successfully, Podman automatically records a new short-name alias in the users USDHOME/.config/containers/short-name-aliases.conf file. If the user cannot be prompted (for example, stdin or stdout are not a TTY), Podman fails. Note that the short-name-aliases.conf file has precedence over registries.conf file if both specify the same alias. Permissive: Similar to enforcing mode but it does not fail if the user cannot be prompted. Instead, Podman searches in all unqualified-search registries in the given order. Note that no alias is recorded. Example: (JIRA:RHELPLAN-39843) container-tools:3.0 stable stream is now available The container-tools:3.0 stable module stream, which contains the Podman, Buildah, Skopeo, and runc tools is now available. This update provides bug fixes and enhancements over the version. For instructions how to upgrade from an earlier stream, see Switching to a later stream . (JIRA:RHELPLAN-56782) | [
"Time stamping parameters for <network_controller> : Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)",
"Time stamping parameters for <network_controller> : Capabilities: hardware-transmit software-transmit",
"perf record -k CLOCK_MONOTONIC sleep 1",
"perf script -F+tod",
"perf record --overwrite -e _events_to_be_collected_ --switch-output-event _snapshot_trigger_event_",
"xfs_io -c \"chattr +x\" filename",
"yum install python39 yum install python39-pip",
"python3.9 python3.9 -m pip --help",
"yum module install swig:4.0",
"yum module install subversion:1.14",
"yum module install redis:6",
"yum module install postgresql:13",
"yum module install mariadb:10.5",
"yum install gcc-toolset-10",
"scl enable gcc-toolset-10 tool",
"scl enable gcc-toolset-10 bash",
"podman pull registry.redhat.io/<image_name>",
"allow_loading_unsigned_plugins = pcp-redis-datasource",
"systemctl restart grafana-server",
"podman pull registry.redhat.io/rhel8/grafana",
"podman pull registry.redhat.io/rhel8/pcp",
"yum module install jmc:rhel8/common",
"yum install rhel-system-roles",
"--- - hosts: all roles: rhel-system-roles.network",
"--- - hosts: all roles: redhat.rhel_system_roles.network",
"ansible-galaxy collection install redhat.rhel_system_roles",
"[user@server1 ~]USD sudo sos (clean|mask) USDarchive",
"[engine.volume_plugins] testvol = \"/run/docker/plugins/testvol.sock\"",
"unqualified-search-registries=[\"registry.fedoraproject.org\", \"quay.io\"] [aliases] \"fedora\"=\"registry.fedoraproject.org/fedora\""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.4_release_notes/new-features |
Installation configuration | Installation configuration OpenShift Container Platform 4.14 Cluster-wide configuration during installations Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installation_configuration/index |
2.6. Network Power Switches | 2.6. Network Power Switches You can fence GFS nodes with power switches and fencing agents available with Red Hat Cluster Suite. For more information about fencing with network power switches, refer to Configuring and Managing a Red Hat Cluster . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-sysreq-netpower |
Chapter 23. New Drivers | Chapter 23. New Drivers Network Drivers Thunderbolt network driver (thunderbolt-net.ko.xz). AMD 10 Gigabit Ethernet Driver (amd-xgbe.ko.xz). Storage Drivers Command Queue Host Controller Interface driver (cqhci.ko.xz). Graphics Drivers and Miscellaneous Drivers DRM GPU scheduler (gpu-sched.ko.xz). Closed hash table (chash.ko.xz). RMI4 SMBus driver (rmi_smbus.ko.xz). RMI bus. RMI F03 module (rmi_core.ko.xz). Dell WMI descriptor driver (dell-wmi-descriptor.ko.xz). Intel(R) PMC Core Driver (intel_pmc_core.ko.xz). Intel(R) WMI Thunderbolt force power driver (intel-wmi-thunderbolt.ko.xz). ACPI Hardware Watchdog (WDAT) driver (wdat_wdt.ko.xz). IIO helper functions for setting up triggered buffers (industrialio-triggered-buffer.ko.xz). HID Sensor Pressure (hid-sensor-press.ko.xz). HID Sensor Device Rotation (hid-sensor-rotation.ko.xz). HID Sensor Inclinometer 3D (hid-sensor-incl-3d.ko.xz). HID Sensor trigger processing (hid-sensor-trigger.ko.xz). HID Sensor common attribute processing (hid-sensor-iio-common.ko.xz). HID Sensor Magnetometer 3D (hid-sensor-magn-3d.ko.xz). HID Sensor ALS (hid-sensor-als.ko.xz). HID Sensor Proximity (hid-sensor-prox.ko.xz). HID Sensor Gyroscope 3D (hid-sensor-gyro-3d.ko.xz). HID Sensor Accel 3D (hid-sensor-accel-3d.ko.xz). HID Sensor Hub driver (hid-sensor-hub.ko.xz). HID Sensor Custom and Generic sensor driver (hid-sensor-custom.ko.xz). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_drivers |
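As a quick way to inspect one of the modules listed above on a system that ships it, you can query the module metadata with the modinfo utility; the module names below are taken from the list with the .ko.xz suffix dropped, and the exact output fields depend on the kernel build:
modinfo thunderbolt-net
modinfo amd-xgbe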
Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] | Chapter 13. VolumeSnapshotContent [snapshot.storage.k8s.io/v1] Description VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. status object status represents the current information of a snapshot. 13.1.1. .spec Description spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. Type object Required deletionPolicy driver source volumeSnapshotRef Property Type Description deletionPolicy string deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. driver string driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. source object source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. sourceVolumeMode string SourceVolumeMode is the mode of the volume whose snapshot is taken. Can be either "Filesystem" or "Block". If not specified, it indicates the source volume's mode is unknown. This field is immutable. This field is an alpha field. volumeSnapshotClassName string name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with different set of values, and as such, should not be referenced post-snapshot creation. volumeSnapshotRef object volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. 
For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 13.1.2. .spec.source Description source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. Type object Property Type Description snapshotHandle string snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. volumeHandle string volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken from. This field is immutable. 13.1.3. .spec.volumeSnapshotRef Description volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference to this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. Type object Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 13.1.4. .status Description status represents the current information of a snapshot. Type object Property Type Description creationTime integer creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command date +%s%N returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. error object error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. readyToUse boolean readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. snapshotHandle string snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. volumeGroupSnapshotHandle string VolumeGroupSnapshotHandle is the CSI "group_snapshot_id" of a group snapshot on the underlying storage system. 13.1.5. .status.error Description error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 13.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents DELETE : delete collection of VolumeSnapshotContent GET : list objects of kind VolumeSnapshotContent POST : create a VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} DELETE : delete a VolumeSnapshotContent GET : read the specified VolumeSnapshotContent PATCH : partially update the specified VolumeSnapshotContent PUT : replace the specified VolumeSnapshotContent /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status GET : read status of the specified VolumeSnapshotContent PATCH : partially update status of the specified VolumeSnapshotContent PUT : replace status of the specified VolumeSnapshotContent 13.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents HTTP method DELETE Description delete collection of VolumeSnapshotContent Table 13.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotContent Table 13.2. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContentList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotContent Table 13.3. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.5. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 202 - Accepted VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent HTTP method DELETE Description delete a VolumeSnapshotContent Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotContent Table 13.9. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotContent Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotContent Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty 13.2.3. /apis/snapshot.storage.k8s.io/v1/volumesnapshotcontents/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the VolumeSnapshotContent HTTP method GET Description read status of the specified VolumeSnapshotContent Table 13.16. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshotContent Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshotContent Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body VolumeSnapshotContent schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotContent schema 201 - Created VolumeSnapshotContent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage_apis/volumesnapshotcontent-snapshot-storage-k8s-io-v1 |
30.5. Modifying sudo Commands and Command Groups | 30.5. Modifying sudo Commands and Command Groups Modifying sudo Commands and Command Groups in the Web UI Under the Policy tab, click Sudo Sudo Commands or Sudo Sudo Command Groups . Click the name of the command or command group to display its configuration page. Change the settings as required. On some configuration pages, the Save button is available at the top of the page. On these pages, you must click the button to confirm the changes. Modifying sudo Commands and Command Groups from the Command Line To modify a command or command group, use the following commands: ipa sudocmd-mod ipa sudocmdgroup-mod Add command-line options to the above-mentioned commands to update the sudo command or command group attributes. For example, to add a new description for the /usr/bin/less command: For more information about these commands and the options they accept, run them with the --help option added. | [
"ipa sudocmd-mod /usr/bin/less --desc=\"For reading log files\" ------------------------------------- Modified Sudo Command \"/usr/bin/less\" ------------------------------------- Sudo Command: /usr/bin/less Description: For reading log files Sudo Command Groups: files"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/modify-sudo-cmd-cmdgroup |
Chapter 1. Creating an unfiltered Microsoft Azure integration | Chapter 1. Creating an unfiltered Microsoft Azure integration Note If you want to create a filtered Azure integration, do not complete the following steps. Instead, go to Creating a filtered Microsoft Azure integration. If you are using RHEL metering, after you integrate your data with cost management, go to Adding RHEL metering to a Microsoft Azure integration to finish configuring your integration for RHEL metering. You must create a Microsoft Azure integration for cost management from the Integrations page and configure your Microsoft Azure account to allow cost management access. To create an Azure integration, you will complete the following tasks: Create a storage account and resource group Choose the appropriate scope for your cost export Configure a Storage Account Contributor and Reader roles for access Schedule daily cost exports Azure is a third-party product and its UI and documentation can change. The instructions for configuring third-party integrations are correct at the time of publishing. For the most up-to-date information, see the Microsoft Azure's documentation . 1.1. Adding a Microsoft Azure account Add your Microsoft Azure account as an integration so cost management can process the cost and usage data. Prerequisites You must have a Red Hat user account with Cloud Administrator entitlements. In cost management: Click Settings Menu > Integrations . In the Cloud tab, click Add integration . In the Add a cloud integration wizard, select Microsoft Azure and click . Enter a name for your integration and click . In the Select application step, select Cost management and click . In the Specify cost export scope step, select I am OK with sending the default data to Cost Management . If you are registering RHEL usage billing, select Include RHEL usage . Otherwise, proceed to the step. Select the scope of your cost data export from the menu. You can export data at the subscription level or by other scopes in your subscription. Copy the command that is generated. In your Microsoft Azure account : Click Cloud Shell and run the command that you copied from cost management. Copy the returned value. In cost management: In the Specify cost export scope step, paste the value that you copied from Microsoft Azure into Cost export scope . Click . You will continue using the wizard in the following sections. 1.2. Creating a Microsoft Azure resource group and storage account Create a storage account in Microsoft Azure to house your cost data and metrics. In the Add a cloud integration wizard in cost management, enter the storage account name in the corresponding fields. Prerequisites You must have a Red Hat user account with Cloud Administrator entitlements. In your Microsoft Azure account : Search for storage and click Storage accounts . On the Storage accounts page, click Create . On the Create a storage account page, in the Resource Group field, click Create new . Enter a name and click OK . In this example, use cost-data-group . In Instance details , enter a name in the Storage account name field. In this example, use costdata . Copy the names of the resource group and storage account so that you can add them to Red Hat Hybrid Cloud Console later. Click Review . Review the storage account and click Create . In cost management: In the Add a cloud integration wizard, paste the resource group and storage account names that you copied into Resource group name and Storage account name . Click . 
You will continue using the wizard in the following sections. 1.3. Configuring a daily Microsoft Azure data export schedule , set up an automatic export of your cost data to your Microsoft Azure storage account so that cost management can retrieve your data daily. In your Microsoft Azure account : In the search bar, enter "cost exports" and click the result. Click Create . Under Select a template , click Cost and usage (actual) to export your standard usage and purchase charges. Follow the steps in the Azure wizard. Select the correct subscription and Storage account that you created in the sections. You must set Format to CSV . Set Compression type to None or Gzip . Review the information and click Create . In cost management: Return to the Add a cloud integration wizard and complete the steps in Daily export . Click . You will continue using the wizard in the following sections. For more help with creating exports in Azure, see Microsoft's documentation . 1.4. Finding your Microsoft Azure subscription ID Find your subscription_id in the Microsoft Azure Cloud Shell and add it to the Add a cloud integration wizard in cost management. In your Microsoft Azure account : Click Cloud Shell . Enter the following command to get your Subscription ID: az account show --query "{subscription_id: id }" Copy the value that is generated for subscription_id . Example response { "subscription_id": 00000000-0000-0000-000000000000 } In cost management: In the Subscription ID field of the Add a cloud integration wizard, paste the value that you copied in the step. Click . You will continue using the wizard in the following sections. 1.5. Creating Microsoft Azure roles for Red Hat access To grant Red Hat access to your data, you must configure dedicated roles in Microsoft Azure. If you have an additional resource under the same Azure subscription, you might not need to create a new service account. In cost management: In the Roles section of the Add a cloud integration wizard, copy the az ad sp create-for-rbac command to create a service principal with the Cost Management Storage Account Contributor role. In your Microsoft Azure account : Click Cloud Shell . In the cloud shell prompt, paste the command that you copied. Copy the values from the returned data for the client ID, secret, and tenant: Example response { "client_id": "00000000-0000-0000-000000000000", "secret": "00000000-0000-0000-000000000000", "tenant": "00000000-0000-0000-000000000000" } In cost management: Return to the Add a cloud integration wizard and paste the values that you copied into their corresponding fields on the Roles page. Copy the second az role assignment command that is generated from the wizard. In your Microsoft Azure account : Return to the cloud shell prompt and paste the command to create a Cost management reader role. In cost management: Return to the Add a cloud integration wizard and click . Review the information that you provided and click Add . 1.6. Viewing your data You have now successfully created an unfiltered integration. To learn more about what you can do with your data, continue to steps for managing your costs . Do not follow the instructions in Creating a filtered Microsoft Azure integration. | [
"az account show --query \"{subscription_id: id }\"",
"{ \"subscription_id\": 00000000-0000-0000-000000000000 }",
"{ \"client_id\": \"00000000-0000-0000-000000000000\", \"secret\": \"00000000-0000-0000-000000000000\", \"tenant\": \"00000000-0000-0000-000000000000\" }"
] | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_microsoft_azure_data_into_cost_management/assembly-adding-azure-int |
probe::sunrpc.clnt.bind_new_program | probe::sunrpc.clnt.bind_new_program Name probe::sunrpc.clnt.bind_new_program - Bind a new RPC program to an existing client Synopsis sunrpc.clnt.bind_new_program Values progname the name of new RPC program old_prog the number of old RPC program vers the version of new RPC program servername the server machine name old_vers the version of old RPC program old_progname the name of old RPC program prog the number of new RPC program | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-clnt-bind-new-program |
14.2. PKI Instance Execution Management | 14.2. PKI Instance Execution Management The act of starting, stopping, restarting, or obtaining the status of a PKI instance is known as execution management. Each PKI instance, separate or shared, is started, stopped, restarted, and has its status obtained separately. This section describes the execution management for any PKI instance. 14.2.1. Starting, Stopping, and Restarting a PKI Instance A PKI instance is started, stopped, and restarted like other system programs, using systemd . Log in to the server machine as root . Run the systemctl command, specifying the action and the instance name: For example: Alternatively, you can use the pki-server alias: For example: 14.2.2. Restarting a PKI Instance after a Machine Restart If a computer running one or more PKI instances is shut down unexpectedly, more services than just the PKI instances must be restarted, in the proper order, for the subsystem to be available both through the HTML services page and the administrative console. If the Directory Server instance used by the subsystem is installed on the local machine, restart the Administration Server and the Directory Server processes. Start the Certificate System subsystem instances. 14.2.3. Checking the PKI Instance Status The systemctl command can be used to check the status of a process, showing whether it is running or stopped. For example: If the instance is running, the status check returns information similar to the following example: 14.2.4. Configuring a PKI Instance to Automatically Start Upon Reboot The systemctl command can be used to automatically start instances upon reboot. For example, the following commands automatically start the Red Hat Administration Server, Directory Server, and a CA upon reboot: Note The default PKI instance installation and configuration using the pkispawn command automatically enables the instance to start upon reboot. To disable this behavior (that is, to prevent PKI instances from automatically starting upon reboot), issue the following commands: 14.2.5. Setting sudo Permissions for Certificate System Services For both simplicity of administration and security, the Certificate System and Directory Server processes can be configured so that PKI administrators (instead of only root) can start and stop the services. A recommended option when setting up subsystems is to use a pkiadmin system group. (Details are in the Red Hat Certificate System Planning, Installation, and Deployment Guide .) All of the operating system users which will be Certificate System administrators are then added to this group. If this pkiadmin system group exists, then it can be granted sudo access to perform certain tasks. 
Edit the /etc/sudoers file; on Red Hat Enterprise Linux 8, this can be done using the visudo command: # visudo Depending on what is installed on the machine, add a line for the Directory Server, the Administration Server, PKI management tools, and each PKI subsystem instance, granting sudo rights to the pkiadmin group: # For Directory Server services %pkiadmin ALL = PASSWD: /usr/bin/systemctl * dirsrv.target %pkiadmin ALL = PASSWD: /usr/bin/systemctl * dirsrv-admin.service # For PKI instance management %pkiadmin ALL = PASSWD: /usr/sbin/pkispawn * %pkiadmin ALL = PASSWD: /usr/sbin/pkidestroy * # For PKI instance services %pkiadmin ALL = PASSWD: /usr/bin/systemctl * pki-tomcatd@ instance_name .service Important Make sure to set sudo permissions for every Certificate System, Directory Server, and Administration Server on the machine - and only for those instances on the machine. There could be multiple instances of the same subsystem type on a machine or no instance of a subsystem type. It depends on the deployment. | [
"systemctl start|stop|restart pki-tomcatd@ instance_name .service",
"systemctl restart [email protected]",
"pki-server start|stop|restart instance_name",
"pki-server restart pki-tomcat",
"systemctl start dirsrv-admin.service systemctl start dirsrv@ instance_name .service",
"pki-server start instance_name",
"systemctl -l status [email protected] [email protected] - PKI Tomcat Server pki-tomcat Loaded: loaded (/lib/systemd/system/[email protected]; enabled) Active: inactive (dead) since Fri 2015-11-20 19:04:11 MST; 12s ago Process: 8728 ExecStop=/usr/libexec/tomcat/server stop (code=exited, status=0/SUCCESS) Process: 8465 ExecStart=/usr/libexec/tomcat/server start (code=exited, status=143) Process: 8316 ExecStartPre=/usr/bin/pkidaemon start tomcat %i (code=exited, status=0/SUCCESS) Main PID: 8465 (code=exited, status=143) Nov 20 19:04:10 pki.example.com server[8728]: options used: -Dcatalina.base=/var/lib/pki/pki-tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/lib/pki/pki-tomcat/temp -Djava.util.logging.config.file=/var/lib/pki/pki-tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager Nov 20 19:04:10 pki.example.com server[8728]: arguments used: stop Nov 20 19:04:11 pki.example.com server[8465]: Nov 20, 2015 7:04:11 PM org.apache.catalina.core.StandardServer await Nov 20 19:04:11 pki.example.com server[8465]: INFO: A valid shutdown command was received via the shutdown port. Stopping the Server instance. Nov 20 19:04:11 pki.example.com server[8465]: PKIListener: org.apache.catalina.core.StandardServer[before_stop] Nov 20 19:04:11 pki.example.com server[8465]: PKIListener: org.apache.catalina.core.StandardServer[stop] Nov 20 19:04:11 pki.example.com server[8465]: PKIListener: org.apache.catalina.core.StandardServer[configure_stop] Nov 20 19:04:11 pki.example.com server[8465]: Nov 20, 2015 7:04:11 PM org.apache.coyote.AbstractProtocol pause Nov 20 19:04:11 pki.example.com server[8465]: INFO: Pausing ProtocolHandler [\"http-bio-8080\"] Nov 20 19:04:11 pki.example.com systemd[1]: Stopped PKI Tomcat Server pki-tomcat.",
"systemctl -l status [email protected] [email protected] - PKI Tomcat Server pki-tomcat Loaded: loaded (/lib/systemd/system/[email protected]; enabled) Active: active (running) since Fri 2015-11-20 19:09:09 MST; 3s ago Process: 8728 ExecStop=/usr/libexec/tomcat/server stop (code=exited, status=0/SUCCESS) Process: 9154 ExecStartPre=/usr/bin/pkidaemon start tomcat %i (code=exited, status=0/SUCCESS) Main PID: 9293 (java) CGroup: /system.slice/system-pki\\x2dtomcatd.slice/[email protected] ������9293 java -DRESTEASY_LIB=/usr/share/java/resteasy-base -Djava.library.path=/usr/lib64/nuxwdog-jni -classpath /usr/share/tomcat/bin/bootstrap.jar:/usr/share/tomcat/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/var/lib/pki/pki-tomcat -Dcatalina.home=/usr/share/tomcat -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/lib/pki/pki-tomcat/temp -Djava.util.logging.config.file=/var/lib/pki/pki-tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.security.manager -Djava.security.policy==/var/lib/pki/pki-tomcat/conf/catalina.policy org.apache.catalina.startup.Bootstrap start Nov 20 19:09:10 pki.example.com server[9293]: Nov 20, 2015 7:09:10 PM org.apache.catalina.core.StandardService startInternal Nov 20 19:09:10 pki.example.com server[9293]: INFO: Starting service Catalina Nov 20 19:09:10 pki.example.com server[9293]: Nov 20, 2015 7:09:10 PM org.apache.catalina.core.StandardEngine startInternal Nov 20 19:09:10 pki.example.com server[9293]: INFO: Starting Servlet Engine: Apache Tomcat/7.0.54 Nov 20 19:09:10 pki.example.com server[9293]: Nov 20, 2015 7:09:10 PM org.apache.catalina.startup.HostConfig deployDescriptor Nov 20 19:09:10 pki.example.com server[9293]: INFO: Deploying configuration descriptor /etc/pki/pki-tomcat/Catalina/localhost/ROOT.xml Nov 20 19:09:12 pki.example.com server[9293]: Nov 20, 2015 7:09:12 PM org.apache.catalina.startup.HostConfig deployDescriptor Nov 20 19:09:12 pki.example.com server[9293]: INFO: Deployment of configuration descriptor /etc/pki/pki-tomcat/Catalina/localhost/ROOT.xml has finished in 2,071 ms Nov 20 19:09:12 pki.example.com server[9293]: Nov 20, 2015 7:09:12 PM org.apache.catalina.startup.HostConfig deployDescriptor Nov 20 19:09:12 pki.example.com server[9293]: INFO: Deploying configuration descriptor /etc/pki/pki-tomcat/Catalina/localhost/pki#admin.xml",
"systemctl enable dirsrv-admin.service systemctl enable dirsrv.target systemctl enable [email protected]",
"systemctl disable [email protected] systemctl disable dirsrv.target systemctl disable dirsrv-admin.service",
"visudo",
"For Directory Server services %pkiadmin ALL = PASSWD: /usr/bin/systemctl * dirsrv.target %pkiadmin ALL = PASSWD: /usr/bin/systemctl * dirsrv-admin.service For PKI instance management %pkiadmin ALL = PASSWD: /usr/sbin/pkispawn * %pkiadmin ALL = PASSWD: /usr/sbin/pkidestroy * For PKI instance services %pkiadmin ALL = PASSWD: /usr/bin/systemctl * pki-tomcatd@ instance_name .service"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/pki_instance_execution_management |
Chapter 6. Developing a Kafka client | Chapter 6. Developing a Kafka client Create a Kafka client in your preferred programming language and connect it to AMQ Streams. To interact with a Kafka cluster, client applications need to be able to produce and consume messages. To develop and configure a basic Kafka client application, as a minimum, you must do the following: Set up configuration to connect to a Kafka cluster Use producers and consumers to send and receive messages Setting up the basic configuration for connecting to a Kafka cluster and using producers and consumers is the first step in developing a Kafka client. After that, you can expand into improving the inputs, security, performance, error handling, and functionality of the client application. Prerequisites You can create a client properties file that contains property values for the following: Basic configuration to connect to the Kafka cluster Configuration for securing the connection Procedure Choose a Kafka client library for your programming language, e.g. Java, Python, .NET, etc. Install the library, either through a package manager or manually by downloading the library from its source. Import the necessary classes and dependencies for your Kafka client in your code. Create a Kafka consumer or producer object, depending on the type of client you want to create. You can have a client that does both. Provide the configuration properties to connect to the Kafka cluster, including the broker address, port, and credentials if necessary. Use the Kafka consumer or producer object to subscribe to topics, produce messages, or retrieve messages from the Kafka cluster. Handle any errors that may occur during the connection or communication with AMQ Streams. 6.1. Example Kafka producer client This Java-based Kafka producer client is an example of a self-contained application that produces messages to a Kafka topic. The client uses the Kafka Producer API to send messages asynchronously, with some error handling. The client implements the Callback interface for message handling. To run the Kafka producer client, you execute the main method in the Producer class. The client generates a random byte array as the message payload using the randomBytes method. The client produces messages to the Kafka topic until NUM_MESSAGES messages (100 in the example configuration) have been sent. The producer is thread-safe, allowing multiple threads to use a single producer instance. This example client provides a basic foundation for building more complex Kafka producers for specific use cases. You can incorporate additional functionality, such as integrating with a logging framework. Note You can add SLF4J binding to each client to see client API logs. Prerequisites Kafka brokers running on the specified BOOTSTRAP_SERVERS A Kafka topic named TOPIC_NAME to which messages are produced. Configuration You can configure the producer client through the following constants specified in the Producer class: BOOTSTRAP_SERVERS The address and port to connect to the Kafka brokers (for example, localhost:9092 ). TOPIC_NAME The name of the Kafka topic to produce messages to. NUM_MESSAGES The number of messages to produce before stopping. MESSAGE_SIZE_BYTES The size of each message in bytes. PROCESSING_DELAY_MS The delay in milliseconds between sending messages. This can simulate message processing time, which is useful for testing. 
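The producer and consumer examples that follow set their connection configuration directly in code as constants. If you prefer the client properties file approach described in the chapter prerequisites, a minimal sketch for loading such a file is shown below. The client.properties file name and the property values in the comments are assumptions, not part of the examples that follow.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;
public class ClientConfigLoader {
    // Loads connection properties (for example, bootstrap.servers and any security
    // settings) from an external file instead of hard-coding them as constants.
    public static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }
    public static void main(String[] args) throws IOException {
        // "client.properties" is an assumed file name that might contain, for example:
        //   bootstrap.servers=localhost:9092
        //   security.protocol=PLAINTEXT
        Properties props = load("client.properties");
        System.out.println("Loaded " + props.size() + " client properties");
    }
}
The returned Properties object can be passed directly to a KafkaProducer or KafkaConsumer constructor, or merged with the constants used in the examples.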
Example producer client import java.util.Properties; import java.util.Random; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.producer.Callback; import org.apache.kafka.clients.producer.KafkaProducer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import org.apache.kafka.clients.producer.RecordMetadata; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArraySerializer; import org.apache.kafka.common.serialization.LongSerializer; public class Producer implements Callback { private static final Random RND = new Random(0); private static final String BOOTSTRAP_SERVERS = "localhost:9092"; private static final String TOPIC_NAME = "my-topic"; private static final long NUM_MESSAGES = 100; private static final int MESSAGE_SIZE_BYTES = 100; private static final long PROCESSING_DELAY_MS = 0L; protected AtomicLong messageCount = new AtomicLong(0); public static void main(String[] args) { new Producer().run(); } public void run() { System.out.println("Running producer"); try (var producer = createKafkaProducer()) { 1 byte[] value = randomBytes(MESSAGE_SIZE_BYTES); 2 while (messageCount.get() < NUM_MESSAGES) { 3 sleep(PROCESSING_DELAY_MS); 4 producer.send(new ProducerRecord<>(TOPIC_NAME, messageCount.get(), value), this); 5 messageCount.incrementAndGet(); } } } private KafkaProducer<Long, byte[]> createKafkaProducer() { Properties props = new Properties(); 6 props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 7 props.put(ProducerConfig.CLIENT_ID_CONFIG, "client-" + UUID.randomUUID()); 8 props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class); 9 props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class); return new KafkaProducer<>(props); } private void sleep(long ms) { 10 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private byte[] randomBytes(int size) { 11 if (size <= 0) { throw new IllegalArgumentException("Record size must be greater than zero"); } byte[] payload = new byte[size]; for (int i = 0; i < payload.length; ++i) { payload[i] = (byte) (RND.nextInt(26) + 65); } return payload; } private boolean retriable(Exception e) { 12 if (e == null) { return false; } else if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || !(e instanceof RetriableException)) { return false; } else { return true; } } @Override public void onCompletion(RecordMetadata metadata, Exception e) { 13 if (e != null) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } else { System.out.printf("Record sent to %s-%d with offset %d%n", metadata.topic(), metadata.partition(), metadata.offset()); } } } 1 The client creates a Kafka producer using the createKafkaProducer method. The producer sends messages to the Kafka topic asynchronously. 2 A byte array is used as the payload for each message sent to the Kafka topic. 3 The maximum number of messages sent is determined by the NUM_MESSAGES constant value. 4 The message rate is controlled with a delay between each message sent. 5 The producer passes the topic name, the message count value, and the message value. 6 The client creates the KafkaProducer instance using the provided configuration. You can use a properties file or add the configuration directly. 
For more information on the basic configuration, see Chapter 4, Configuring client applications for connecting to a Kafka cluster. 7 The connection to the Kafka brokers. 8 A unique client ID for the producer using a randomly generated UUID. A client ID is not required, but it is useful to track the source of requests. 9 The appropriate serializer classes for handling keys and values as byte arrays. 10 Method to introduce a delay to the message sending process for a specified number of milliseconds. If the thread responsible for sending messages is interrupted while paused, it throws an InterruptedException error. 11 Method to create a random byte array of a specific size, which serves as the payload for each message sent to the Kafka topic. The method generates a random integer and adds 65 to represent an uppercase letter in ASCII code (65 is A, 66 is B, and so on). The ASCII code is stored as a single byte in the payload array. If the payload size is not greater than zero, it throws an IllegalArgumentException. 12 Method to check whether to retry sending a message following an exception. Null and specified exceptions are not retried, nor are exceptions that do not implement the RetriableException interface. You can customize this method to include other errors. 13 Method called when a message has been acknowledged by the Kafka broker. On success, a message is printed with the details of the topic, partition, and offset position for the message. If an error occurred when sending the message, an error message is printed. The method checks the exception and takes appropriate action based on whether it's a fatal or non-fatal error. If the error is non-fatal, the message sending process continues. If the error is fatal, a stack trace is printed and the producer is terminated. Error handling Fatal exceptions caught by the producer client: InterruptedException Error thrown when the current thread is interrupted while paused. Interruption typically occurs when stopping or shutting down the producer. The exception is rethrown as a RuntimeException, which terminates the producer. IllegalArgumentException Error thrown when the producer receives invalid or inappropriate arguments. For example, the exception is thrown if the topic is missing. UnsupportedOperationException Error thrown when an operation is not supported or a method is not implemented. For example, the exception is thrown if an attempt is made to use an unsupported producer configuration or call a method that is not supported by the KafkaProducer class. Non-fatal exceptions caught by the producer client: RetriableException Error thrown for any exception that implements the RetriableException interface provided by the Kafka client library. With non-fatal errors, the producer continues to send messages. 6.2. Example Kafka consumer client This Java-based Kafka consumer client is an example of a self-contained application that consumes messages from a Kafka topic. The client uses the Kafka Consumer API to fetch and process messages from a specified topic asynchronously, with some error handling. It follows at-least-once semantics by committing offsets after successfully processing messages. The client implements the ConsumerRebalanceListener interface for partition handling and the OffsetCommitCallback interface for committing offsets. To run the Kafka consumer client, you execute the main method in the Consumer class.
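Before working through the full listing, the core at-least-once flow that the example builds on, poll records, process them, then commit their offsets, can be reduced to the short sketch below. The broker address, group ID, and topic name are placeholder assumptions, and the rebalance callbacks and error handling of the complete example are omitted here.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;
import org.apache.kafka.common.serialization.LongDeserializer;
public class MinimalAtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group"); // assumed group ID
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); // offsets are committed manually
        try (KafkaConsumer<Long, byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic")); // assumed topic name
            while (true) {
                // Fetch a batch, process every record, then commit the offsets, so a record
                // is only marked as consumed after it has been processed successfully.
                ConsumerRecords<Long, byte[]> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<Long, byte[]> record : records) {
                    System.out.printf("Processing offset %d%n", record.offset());
                }
                consumer.commitSync();
            }
        }
    }
}
The full example that follows implements the same flow, but commits asynchronously, tracks pending offsets per partition, and adds rebalance and error handling.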
The client consumes messages from the Kafka topic until NUM_MESSAGES messages (100 in the example configuration) have been consumed. The consumer is not designed to be safely accessed concurrently by multiple threads. This example client provides a basic foundation for building more complex Kafka consumers for specific use cases. You can incorporate additional functionality, such as integrating with a logging framework. Note You can add SLF4J binding to each client to see client API logs. Prerequisites Kafka brokers running on the specified BOOTSTRAP_SERVERS A Kafka topic named TOPIC_NAME from which messages are consumed. Configuration You can configure the consumer client through the following constants specified in the Consumer class: BOOTSTRAP_SERVERS The address and port to connect to the Kafka brokers (for example, localhost:9092 ). GROUP_ID The consumer group identifier. POLL_TIMEOUT_MS The maximum time to wait for new messages during each poll. TOPIC_NAME The name of the Kafka topic to consume messages from. NUM_MESSAGES The number of messages to consume before stopping. PROCESSING_DELAY_MS The delay in milliseconds between sending messages. This can simulate message processing time, which is useful for testing. Example consumer client import java.util.Collection; import java.util.HashMap; import java.util.Map; import java.util.Properties; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.consumer.ConsumerConfig; import org.apache.kafka.clients.consumer.ConsumerRebalanceListener; import org.apache.kafka.clients.consumer.ConsumerRecord; import org.apache.kafka.clients.consumer.ConsumerRecords; import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.clients.consumer.NoOffsetForPartitionException; import org.apache.kafka.clients.consumer.OffsetAndMetadata; import org.apache.kafka.clients.consumer.OffsetCommitCallback; import org.apache.kafka.clients.consumer.OffsetOutOfRangeException; import org.apache.kafka.common.TopicPartition; import org.apache.kafka.common.errors.RebalanceInProgressException; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArrayDeserializer; import org.apache.kafka.common.serialization.LongDeserializer; import static java.time.Duration.ofMillis; import static java.util.Collections.singleton; public class Consumer implements ConsumerRebalanceListener, OffsetCommitCallback { private static final String BOOTSTRAP_SERVERS = "localhost:9092"; private static final String GROUP_ID = "my-group"; private static final long POLL_TIMEOUT_MS = 1_000L; private static final String TOPIC_NAME = "my-topic"; private static final long NUM_MESSAGES = 100; private static final long PROCESSING_DELAY_MS = 0L; private KafkaConsumer<Long, byte[]> kafkaConsumer; protected AtomicLong messageCount = new AtomicLong(0); private Map<TopicPartition, OffsetAndMetadata> pendingOffsets = new HashMap<>(); public static void main(String[] args) { new Consumer().run(); } public void run() { System.out.println("Running consumer"); try (var consumer = createKafkaConsumer()) { 1 kafkaConsumer = consumer; consumer.subscribe(singleton(TOPIC_NAME), this); 2 System.out.printf("Subscribed to %s%n", TOPIC_NAME); while (messageCount.get() < NUM_MESSAGES) { 3 try { ConsumerRecords<Long, byte[]> records = consumer.poll(ofMillis(POLL_TIMEOUT_MS)); 4 if (!records.isEmpty()) { 5 for (ConsumerRecord<Long, byte[]> record : records) { 
System.out.printf("Record fetched from %s-%d with offset %d%n", record.topic(), record.partition(), record.offset()); sleep(PROCESSING_DELAY_MS); 6 pendingOffsets.put(new TopicPartition(record.topic(), record.partition()), 7 new OffsetAndMetadata(record.offset() + 1, null)); if (messageCount.incrementAndGet() == NUM_MESSAGES) { break; } } consumer.commitAsync(pendingOffsets, this); 8 pendingOffsets.clear(); } } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) { 9 System.out.println("Invalid or no offset found, using latest"); consumer.seekToEnd(e.partitions()); consumer.commitSync(); } catch (Exception e) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } } } private KafkaConsumer<Long, byte[]> createKafkaConsumer() { Properties props = new Properties(); 10 props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 11 props.put(ConsumerConfig.CLIENT_ID_CONFIG, "client-" + UUID.randomUUID()); 12 props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID); 13 props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class); 14 props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class); props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); 15 props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); 16 return new KafkaConsumer<>(props); } private void sleep(long ms) { 17 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private boolean retriable(Exception e) { 18 if (e == null) { return false; } else if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || (!(e instanceof RebalanceInProgressException) && !(e instanceof RetriableException))) { return false; } else { return true; } } @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) { 19 System.out.printf("Assigned partitions: %s%n", partitions); } @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { 20 System.out.printf("Revoked partitions: %s%n", partitions); kafkaConsumer.commitSync(pendingOffsets); pendingOffsets.clear(); } @Override public void onPartitionsLost(Collection<TopicPartition> partitions) { 21 System.out.printf("Lost partitions: %s%n", partitions); } @Override public void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception e) { 22 if (e != null) { System.err.println("Failed to commit offsets"); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } }
9 A catch to handle non-fatal and fatal errors when consuming messages. For non-fatal errors, the consumer seeks to the end of the partition and starts consuming from the latest available offset. If an exception cannot be retried, a stack trace is printed, and the consumer is terminated. 10 The client creates the KafkaConsumer instance using the provided configuration. You can use a properties file or add the configuration directly. For more information on the basic configuration, see Chapter 4, Configuring client applications for connecting to a Kafka cluster. 11 The connection to the Kafka brokers. 12 A unique client ID for the consumer using a randomly generated UUID. A client ID is not required, but it is useful to track the source of requests. 13 The group ID for consumer coordination of assignments to partitions. 14 The appropriate deserializer classes for handling keys and values as byte arrays. 15 Configuration to disable automatic offset commits. 16 Configuration for the consumer to start consuming messages from the earliest available offset when no committed offset is found for a partition. 17 Method to introduce a delay to the message consuming process for a specified number of milliseconds. If the thread responsible for consuming messages is interrupted while paused, it throws an InterruptedException error. 18 Method to check whether to retry an operation following an exception. Null and specified exceptions are not retried, nor are exceptions that do not implement the RebalanceInProgressException or RetriableException interfaces. You can customize this method to include other errors. 19 Method to print a message to the console indicating the list of partitions that have been assigned to the consumer. 20 Method called when the consumer is about to lose ownership of partitions during a consumer group rebalance. The method prints the list of partitions that are being revoked from the consumer. Any pending offsets are committed. 21 Method called when the consumer loses ownership of partitions during a consumer group rebalance, but failed to commit any pending offsets. The method prints the list of partitions lost by the consumer. 22 Method called when the consumer is committing offsets to Kafka. If an error occurred when committing an offset, an error message is printed. The method checks the exception and takes appropriate action based on whether it's a fatal or non-fatal error. If the error is non-fatal, the offset committing process continues. If the error is fatal, a stack trace is printed and the consumer is terminated. Error handling Fatal exceptions caught by the consumer client: InterruptedException Error thrown when the current thread is interrupted while paused. Interruption typically occurs when stopping or shutting down the consumer. The exception is rethrown as a RuntimeException, which terminates the consumer. IllegalArgumentException Error thrown when the consumer receives invalid or inappropriate arguments. For example, the exception is thrown if the topic is missing. UnsupportedOperationException Error thrown when an operation is not supported or a method is not implemented. For example, the exception is thrown if an attempt is made to use an unsupported consumer configuration or call a method that is not supported by the KafkaConsumer class.
Non-fatal exceptions caught by the consumer client: OffsetOutOfRangeException Error thrown when the consumer attempts to seek to an invalid offset for a partition, typically when the offset is outside the valid range of offsets for that partition. NoOffsetForPartitionException Error thrown when there is no committed offset for a partition, and auto-reset policy is not enabled, or the requested offset is invalid. RebalanceInProgressException Error thrown during a consumer group rebalance when partitions are being assigned. Offset commits cannot be completed when the consumer is undergoing a rebalance. RetriableException Error thrown for any exception that implements the RetriableException interface provided by the Kafka client library. With non-fatal errors, the consumer continues to process messages. 6.3. Using cooperative rebalancing with consumers Kafka consumers use a partition assignment strategy determined by the rebalancing protocol in place. By default, Kafka employs the RangeAssignor protocol, which involves consumers relinquishing their partition assignments during a rebalance, leading to potential service disruptions. To improve efficiency and reduce downtime, you can switch to the CooperativeStickyAssignor protocol, a cooperative rebalancing approach. Unlike the default protocol, cooperative rebalancing enables consumers to work together, retaining their partition assignments during a rebalance, and releasing partitions only when necessary to achieve a balance within the consumer group. Procedure In the consumer configuration, use the partition.assignment.strategy property to switch to using CooperativeStickyAssignor as the protocol. For example, if the current configuration is partition.assignment.strategy=RangeAssignor, CooperativeStickyAssignor , update it to partition.assignment.strategy=CooperativeStickyAssignor . Instead of modifying the consumer configuration file directly, you can also set the partition assignment strategy using props.put in the consumer application code: # ... props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, "org.apache.kafka.clients.consumer.CooperativeStickyAssignor"); # ... Restart each consumer in the group one at a time, allowing them to rejoin the group after each restart. Warning After switching to the CooperativeStickyAssignor protocol, a RebalanceInProgressException may occur during consumer rebalancing, leading to unexpected stoppages of multiple Kafka clients in the same consumer group. Additionally, this issue may result in the duplication of uncommitted messages, even if Kafka consumers have not changed their partition assignments during rebalancing. If you are using automatic offset commits ( enable.auto.commit=true ), you don't need to make any changes. If you are manually committing offsets ( enable.auto.commit=false ), and a RebalanceInProgressException occurs during the manual commit, change the consumer implementation to call poll() in the loop to complete the consumer rebalancing process. For more information, see the CooperativeStickyAssignor article on the customer portal. | [
"import java.util.Properties; import java.util.Random; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.producer.Callback; import org.apache.kafka.clients.producer.KafkaProducer; import org.apache.kafka.clients.producer.ProducerConfig; import org.apache.kafka.clients.producer.ProducerRecord; import org.apache.kafka.clients.producer.RecordMetadata; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArraySerializer; import org.apache.kafka.common.serialization.LongSerializer; public class Producer implements Callback { private static final Random RND = new Random(0); private static final String BOOTSTRAP_SERVERS = \"localhost:9092\"; private static final String TOPIC_NAME = \"my-topic\"; private static final long NUM_MESSAGES = 100; private static final int MESSAGE_SIZE_BYTES = 100; private static final long PROCESSING_DELAY_MS = 0L; protected AtomicLong messageCount = new AtomicLong(0); public static void main(String[] args) { new Producer().run(); } public void run() { System.out.println(\"Running producer\"); try (var producer = createKafkaProducer()) { 1 byte[] value = randomBytes(MESSAGE_SIZE_BYTES); 2 while (messageCount.get() < NUM_MESSAGES) { 3 sleep(PROCESSING_DELAY_MS); 4 producer.send(new ProducerRecord<>(TOPIC_NAME, messageCount.get(), value), this); 5 messageCount.incrementAndGet(); } } } private KafkaProducer<Long, byte[]> createKafkaProducer() { Properties props = new Properties(); 6 props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 7 props.put(ProducerConfig.CLIENT_ID_CONFIG, \"client-\" + UUID.randomUUID()); 8 props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class); 9 props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class); return new KafkaProducer<>(props); } private void sleep(long ms) { 10 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private byte[] randomBytes(int size) { 11 if (size <= 0) { throw new IllegalArgumentException(\"Record size must be greater than zero\"); } byte[] payload = new byte[size]; for (int i = 0; i < payload.length; ++i) { payload[i] = (byte) (RND.nextInt(26) + 65); } return payload; } private boolean retriable(Exception e) { 12 if (e == null) { return false; } else if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || !(e instanceof RetriableException)) { return false; } else { return true; } } @Override public void onCompletion(RecordMetadata metadata, Exception e) { 13 if (e != null) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } else { System.out.printf(\"Record sent to %s-%d with offset %d%n\", metadata.topic(), metadata.partition(), metadata.offset()); } } }",
"import java.util.Collection; import java.util.HashMap; import java.util.Map; import java.util.Properties; import java.util.UUID; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; import org.apache.kafka.clients.consumer.ConsumerConfig; import org.apache.kafka.clients.consumer.ConsumerRebalanceListener; import org.apache.kafka.clients.consumer.ConsumerRecord; import org.apache.kafka.clients.consumer.ConsumerRecords; import org.apache.kafka.clients.consumer.KafkaConsumer; import org.apache.kafka.clients.consumer.NoOffsetForPartitionException; import org.apache.kafka.clients.consumer.OffsetAndMetadata; import org.apache.kafka.clients.consumer.OffsetCommitCallback; import org.apache.kafka.clients.consumer.OffsetOutOfRangeException; import org.apache.kafka.common.TopicPartition; import org.apache.kafka.common.errors.RebalanceInProgressException; import org.apache.kafka.common.errors.RetriableException; import org.apache.kafka.common.serialization.ByteArrayDeserializer; import org.apache.kafka.common.serialization.LongDeserializer; import static java.time.Duration.ofMillis; import static java.util.Collections.singleton; public class Consumer implements ConsumerRebalanceListener, OffsetCommitCallback { private static final String BOOTSTRAP_SERVERS = \"localhost:9092\"; private static final String GROUP_ID = \"my-group\"; private static final long POLL_TIMEOUT_MS = 1_000L; private static final String TOPIC_NAME = \"my-topic\"; private static final long NUM_MESSAGES = 100; private static final long PROCESSING_DELAY_MS = 0L; private KafkaConsumer<Long, byte[]> kafkaConsumer; protected AtomicLong messageCount = new AtomicLong(0); private Map<TopicPartition, OffsetAndMetadata> pendingOffsets = new HashMap<>(); public static void main(String[] args) { new Consumer().run(); } public void run() { System.out.println(\"Running consumer\"); try (var consumer = createKafkaConsumer()) { 1 kafkaConsumer = consumer; consumer.subscribe(singleton(TOPIC_NAME), this); 2 System.out.printf(\"Subscribed to %s%n\", TOPIC_NAME); while (messageCount.get() < NUM_MESSAGES) { 3 try { ConsumerRecords<Long, byte[]> records = consumer.poll(ofMillis(POLL_TIMEOUT_MS)); 4 if (!records.isEmpty()) { 5 for (ConsumerRecord<Long, byte[]> record : records) { System.out.printf(\"Record fetched from %s-%d with offset %d%n\", record.topic(), record.partition(), record.offset()); sleep(PROCESSING_DELAY_MS); 6 pendingOffsets.put(new TopicPartition(record.topic(), record.partition()), 7 new OffsetAndMetadata(record.offset() + 1, null)); if (messageCount.incrementAndGet() == NUM_MESSAGES) { break; } } consumer.commitAsync(pendingOffsets, this); 8 pendingOffsets.clear(); } } catch (OffsetOutOfRangeException | NoOffsetForPartitionException e) { 9 System.out.println(\"Invalid or no offset found, using latest\"); consumer.seekToEnd(e.partitions()); consumer.commitSync(); } catch (Exception e) { System.err.println(e.getMessage()); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } } } private KafkaConsumer<Long, byte[]> createKafkaConsumer() { Properties props = new Properties(); 10 props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS); 11 props.put(ConsumerConfig.CLIENT_ID_CONFIG, \"client-\" + UUID.randomUUID()); 12 props.put(ConsumerConfig.GROUP_ID_CONFIG, GROUP_ID); 13 props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class); 14 props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class); 
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false); 15 props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, \"earliest\"); 16 return new KafkaConsumer<>(props); } private void sleep(long ms) { 17 try { TimeUnit.MILLISECONDS.sleep(ms); } catch (InterruptedException e) { throw new RuntimeException(e); } } private boolean retriable(Exception e) { 18 if (e == null) { return false; } else if (e instanceof IllegalArgumentException || e instanceof UnsupportedOperationException || (!(e instanceof RebalanceInProgressException) && !(e instanceof RetriableException))) { return false; } else { return true; } } @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) { 19 System.out.printf(\"Assigned partitions: %s%n\", partitions); } @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { 20 System.out.printf(\"Revoked partitions: %s%n\", partitions); kafkaConsumer.commitSync(pendingOffsets); pendingOffsets.clear(); } @Override public void onPartitionsLost(Collection<TopicPartition> partitions) { 21 System.out.printf(\"Lost partitions: %s%n\", partitions); } @Override public void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception e) { 22 if (e != null) { System.err.println(\"Failed to commit offsets\"); if (!retriable(e)) { e.printStackTrace(); System.exit(1); } } } }",
"props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG, \"org.apache.kafka.clients.consumer.CooperativeStickyAssignor\");"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/developing_kafka_client_applications/proc-generic-java-client-str |
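The procedure above can be illustrated with a minimal, self-contained sketch. This is not the Red Hat example reproduced in the commands above; it assumes a broker at localhost:9092, a topic named my-topic, and a group ID my-cooperative-group (all placeholders), and it shows the two points the record makes: setting partition.assignment.strategy to CooperativeStickyAssignor, and, with manual offset commits, retrying a commit that fails with RebalanceInProgressException after the next poll() completes the rebalance.

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.RebalanceInProgressException;
import org.apache.kafka.common.serialization.StringDeserializer;
import static java.util.Collections.singleton;

public class CooperativeConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-cooperative-group");    // placeholder group ID
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // offsets are committed manually
        // Cooperative rebalancing: consumers keep their current partitions while the group rebalances.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(singleton("my-topic"));                        // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(record -> System.out.printf("Fetched %s-%d offset %d%n",
                        record.topic(), record.partition(), record.offset()));
                try {
                    consumer.commitSync(); // commits the offsets returned by the last poll
                } catch (RebalanceInProgressException e) {
                    // A cooperative rebalance is underway; the next poll() completes it,
                    // after which the commit is retried on the following loop iteration.
                    System.out.println("Rebalance in progress; committing after the next poll");
                }
            }
        }
    }
}

Keeping the commit retry inside the poll loop is what lets the consumer ride out a cooperative rebalance without exiting, which matches the guidance in the record for enable.auto.commit=false.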
28.4. Modifying Password Policy Attributes | 28.4. Modifying Password Policy Attributes Important When you modify a password policy, the new rules apply to new passwords only. The changes are not applied retroactively to existing passwords. For the change to take effect, users must change their existing passwords, or the administrator must reset the passwords of other users. See Section 22.1.1, "Changing and Resetting User Passwords" . Note For recommendations on secure user passwords, see Password Security in the Security Guide . To modify a password policy using: the web UI, see the section called "Web UI: Modifying a Password Policy" the command line, see the section called "Command Line: Modifying a Password Policy" Note that setting a password policy attribute to 0 means no attribute restriction. For example, if you set maximum lifetime to 0 , user passwords never expire. Web UI: Modifying a Password Policy Select Policy Password Policies . Click the policy you want to change. Update the required attributes. For details on the available attributes, see Section 28.2.1, "Supported Password Policy Attributes" . Click Save to confirm the changes. Command Line: Modifying a Password Policy Use the ipa pwpolicy-mod command to change the policy's attributes. For example, to update the global password policy and set the minimum password length to 10 : To update a group policy, add the group name to ipa pwpolicy-mod . For example: Optional. Use the ipa pwpolicy-show command to display the new policy settings. To display the global policy: To display a group policy, add the group name to ipa pwpolicy-show : | [
"ipa pwpolicy-mod --minlength=10",
"ipa pwpolicy-mod group_name --minlength=10",
"ipa pwpolicy-show",
"ipa pwpolicy-show group_name"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/pwd-policies-mod |
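Where the ipa commands above need to be driven from automation rather than typed interactively, a small wrapper is often enough. The sketch below shells out to the documented ipa pwpolicy-mod command with ProcessBuilder; it assumes the ipa client is installed and that a valid Kerberos ticket is already available, and the class name PwPolicyUpdater and the group_name argument are illustrative placeholders rather than anything defined by the record.

import java.io.IOException;
import java.util.List;

public class PwPolicyUpdater {

    // Runs `ipa pwpolicy-mod [group] --minlength=N` and returns the command's exit code (0 on success).
    static int setMinLength(String groupName, int minLength) throws IOException, InterruptedException {
        List<String> command = (groupName == null)
                ? List.of("ipa", "pwpolicy-mod", "--minlength=" + minLength)             // global policy
                : List.of("ipa", "pwpolicy-mod", groupName, "--minlength=" + minLength); // group policy
        Process process = new ProcessBuilder(command).inheritIO().start();
        return process.waitFor();
    }

    public static void main(String[] args) throws Exception {
        int exitCode = setMinLength("group_name", 10); // "group_name" is a placeholder group
        System.out.println("ipa pwpolicy-mod exited with code " + exitCode);
    }
}

As the record stresses, the new minimum length applies only to passwords set after the change, so existing users still need to change their passwords or have them reset.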
Chapter 2. Logging in to OpenShift AI | Chapter 2. Logging in to OpenShift AI After you install OpenShift AI, log in to the OpenShift AI dashboard so that you can set up your development and deployment environment. Prerequisites You know the OpenShift AI identity provider and your login credentials. If you are a data scientist, data engineer, or ML engineer, your administrator must provide you with the OpenShift AI instance URL, for example: You have the latest version of one of the following supported browsers: Google Chrome Mozilla Firefox Safari Procedure Browse to the OpenShift AI instance URL and click Log in with OpenShift . If you have access to OpenShift, you can browse to the OpenShift web console and click the Application Launcher ( ) Red Hat OpenShift AI . Click the name of your identity provider, for example, GitHub , Google , or your company's single sign-on method. Enter your credentials and click Log in (or equivalent for your identity provider). Verification The OpenShift AI dashboard opens on the Home page. | [
"https://rhoai-dashboard-redhat-oai-applications.apps.example.abc1.p1.openshiftapps.com/"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/getting_started_with_red_hat_openshift_ai_self-managed/logging-in_get-started |
Chapter 5. User [user.openshift.io/v1] | Chapter 5. User [user.openshift.io/v1] Description Upon log in, every user of the system receives a User and Identity resource. Administrators may directly manipulate the attributes of the users for their own tracking, or set groups via the API. The user name is unique and is chosen based on the value provided by the identity provider - if a user already exists with the incoming name, the user name may have a number appended to it depending on the configuration of the system. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required groups 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources fullName string FullName is the full name of user groups array (string) Groups specifies group names this user is a member of. This field is deprecated and will be removed in a future release. Instead, create a Group object containing the name of this User. identities array (string) Identities are the identities associated with this user kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta 5.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/users DELETE : delete collection of User GET : list or watch objects of kind User POST : create an User /apis/user.openshift.io/v1/watch/users GET : watch individual changes to a list of User. deprecated: use the 'watch' parameter with a list operation instead. /apis/user.openshift.io/v1/users/{name} DELETE : delete an User GET : read the specified User PATCH : partially update the specified User PUT : replace the specified User /apis/user.openshift.io/v1/watch/users/{name} GET : watch changes to an object of kind User. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 5.2.1. /apis/user.openshift.io/v1/users Table 5.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of User Table 5.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 5.3. Body parameters Parameter Type Description body DeleteOptions schema Table 5.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind User Table 5.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 5.6. HTTP responses HTTP code Reponse body 200 - OK UserList schema 401 - Unauthorized Empty HTTP method POST Description create an User Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.8. Body parameters Parameter Type Description body User schema Table 5.9. HTTP responses HTTP code Reponse body 200 - OK User schema 201 - Created User schema 202 - Accepted User schema 401 - Unauthorized Empty 5.2.2. /apis/user.openshift.io/v1/watch/users Table 5.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of User. deprecated: use the 'watch' parameter with a list operation instead. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 5.2.3. /apis/user.openshift.io/v1/users/{name} Table 5.12. Global path parameters Parameter Type Description name string name of the User Table 5.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an User Table 5.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 5.15. Body parameters Parameter Type Description body DeleteOptions schema Table 5.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified User Table 5.17. HTTP responses HTTP code Reponse body 200 - OK User schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified User Table 5.18. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 5.19. Body parameters Parameter Type Description body Patch schema Table 5.20. HTTP responses HTTP code Reponse body 200 - OK User schema 201 - Created User schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified User Table 5.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.22. Body parameters Parameter Type Description body User schema Table 5.23. HTTP responses HTTP code Reponse body 200 - OK User schema 201 - Created User schema 401 - Unauthorized Empty 5.2.4. /apis/user.openshift.io/v1/watch/users/{name} Table 5.24. Global path parameters Parameter Type Description name string name of the User Table 5.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind User. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 5.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/user_and_group_apis/user-user-openshift-io-v1 |
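As a rough, hedged illustration of the endpoints listed above, the sketch below lists User resources with the JDK's built-in java.net.http client. The API server URL and bearer token are placeholders read from environment variables, the token is assumed to carry permission to list users, and the API server certificate is assumed to be trusted by the JVM; only the /apis/user.openshift.io/v1/users path and the 200/401 responses come from the record itself.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListOpenShiftUsers {
    public static void main(String[] args) throws Exception {
        // Placeholders: the cluster API endpoint and a token allowed to list User resources.
        String apiServer = System.getenv().getOrDefault("OPENSHIFT_API", "https://api.example.com:6443");
        String token = System.getenv("OPENSHIFT_TOKEN");

        // GET /apis/user.openshift.io/v1/users returns a UserList (200 - OK) or 401 - Unauthorized.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(apiServer + "/apis/user.openshift.io/v1/users"))
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}

Query parameters from the tables above, such as labelSelector or limit, can be appended to the request URI in the usual way.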
Migration Toolkit for Containers | Migration Toolkit for Containers OpenShift Container Platform 4.15 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/1.8/html/migration_toolkit_for_containers/index |
1.3. Indirect Integration | 1.3. Indirect Integration The main advantage of the indirect integration is to manage Linux systems and policies related to those systems centrally while enabling users from Active Directory (AD) domains to transparently access Linux systems and services. There are two different approaches to the indirect integration: Trust-based solution The recommended approach is to leverage Identity Management (IdM) in Red Hat Enterprise Linux as the central server to control Linux systems and then establish cross-realm Kerberos trust with AD, enabling users from AD to log on to and to use single sign-on to access Linux systems and resources. This solution uses the Kerberos capability to establish trusts between different identity sources. IdM presents itself to AD as a separate forest and takes advantage of the forest-level trusts supported by AD. In complex environments, a single IdM forest can be connected to multiple AD forests. This setup enables better separation of duties for different functions in the organization. AD administrators can focus on users and policies related to users while Linux administrators have full control over the Linux infrastructure. In such a case, the Linux realm controlled by IdM is analogous to an AD resource domain or realm but with Linux systems in it. Note In Windows, every domain is a Kerberos realm and a DNS domain at the same time. Every domain managed by the domain controller needs to have its own dedicated DNS zone. The same applies when IdM is trusted by AD as a forest. AD expects IdM to have its own DNS domain. For the trust setup to work, the DNS domain needs to be dedicated to the Linux environment. Note that in trust environments, IdM enables you to use ID views to configure POSIX attributes for AD users on the IdM server. For details, see: Chapter 8, Using ID Views in Active Directory Environments SSSD Client-side Views in the System-Level Authentication Guide Synchronization-based solution An alternative to a trust-based solution is to leverage user synchronization capability, also available in IdM or Red Hat Directory Server (RHDS), allowing user accounts (and with RHDS also group accounts) to be synchronized from AD to IdM or RHDS, but not in the opposite direction. User synchronization has certain limitations, including: duplication of users the need to synchronize passwords, which requires a special component on all domain controllers in an AD domain to be able to capture passwords, all users must first manually change them synchronization supports only a single domain only one domain controller in AD can be used to synchronize data to one instance of IdM or RHDS In some integration scenarios, the user synchronization may be the only available option, but in general, use of the synchronization approach is discouraged in favor of the cross-realm trust-based integration. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/summary-indirect |
Chapter 3. Red Hat Enterprise Linux AI hardware requirements | Chapter 3. Red Hat Enterprise Linux AI hardware requirements Different hardware accelerators have different requirements for serving and inferencing, as well as for installing, generating, and training the granite-7b-starter model on Red Hat Enterprise Linux AI. 3.1. Hardware requirements for end-to-end workflow of Granite models The following charts show the hardware requirements for running the full InstructLab end-to-end workflow to customize the Granite student model. This includes synthetic data generation (SDG), training, and evaluation of a custom Granite model. 3.1.1. Bare metal Hardware vendor Supported accelerators (GPUs) Aggregate GPU memory Recommended additional disk storage NVIDIA 2xA100 4xA100 8xA100 160 GB 320 GB 640 GB 1 TB NVIDIA 2xH100 4xH100 8xH100 160 GB 320 GB 640 GB 1 TB NVIDIA 4xL40S 8xL40S 192 GB 384 GB 1 TB 3.1.2. Amazon Web Services (AWS) Hardware vendor Supported accelerators (GPUs) Aggregate GPU Memory AWS Instance Recommended additional disk storage NVIDIA 8xA100 640 GB p4de.24xlarge 1 TB NVIDIA 8xH100 640 GB p5.48xlarge 1 TB 3.2. Hardware requirements for inference serving Granite models The following charts display the minimum hardware requirements for inference serving of a model on Red Hat Enterprise Linux AI. 3.2.1. Bare metal Hardware vendor Supported accelerators (GPUs) Minimum Aggregate GPU memory Recommended additional disk storage NVIDIA A100 80 GB 1 TB NVIDIA H100 80 GB 1 TB NVIDIA L40S 48 GB 1 TB NVIDIA L4 24 GB 1 TB 3.2.2. Amazon Web Services (AWS) Hardware vendor Supported accelerators (GPUs) Minimum Aggregate GPU Memory Recommended additional disk storage NVIDIA A100 80 GB 1 TB NVIDIA H100 80 GB 1 TB NVIDIA L40S 48 GB 1 TB NVIDIA L4 24 GB 1 TB 3.2.3. IBM Cloud Hardware vendor Supported accelerators (GPUs) Minimum Aggregate GPU Memory Recommended additional disk storage NVIDIA L40S 48 GB 1 TB NVIDIA L4 24 GB 1 TB | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html/getting_started/hardware_requirements_rhelai |
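The aggregate GPU memory column in the tables above is the per-accelerator memory multiplied by the accelerator count. The sketch below encodes that arithmetic as a sanity check; the per-GPU figures (80 GB for A100 and H100, 48 GB for L40S, 24 GB for L4) are inferred from the tables, and the class and method names are illustrative only, not an official sizing tool.

import java.util.Map;

public class AggregateGpuMemory {

    // Per-accelerator memory in GB, as implied by the tables above.
    private static final Map<String, Integer> GPU_MEMORY_GB = Map.of(
            "A100", 80,
            "H100", 80,
            "L40S", 48,
            "L4", 24);

    static int aggregateGb(String accelerator, int count) {
        Integer perGpu = GPU_MEMORY_GB.get(accelerator);
        if (perGpu == null) {
            throw new IllegalArgumentException("Unknown accelerator: " + accelerator);
        }
        return perGpu * count;
    }

    public static void main(String[] args) {
        System.out.println("2xA100 = " + aggregateGb("A100", 2) + " GB"); // 160 GB, matching the bare metal table
        System.out.println("8xL40S = " + aggregateGb("L40S", 8) + " GB"); // 384 GB
        System.out.println("8xH100 = " + aggregateGb("H100", 8) + " GB"); // 640 GB, the p5.48xlarge row
    }
}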
Authorization | Authorization Red Hat Developer Hub 1.2 Configuring authorization by using role based access control (RBAC) in Red Hat Developer Hub Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/authorization/index |
Reference Architecture for deploying Red Hat OpenShift Container Platform on Red Hat OpenStack Platform | Reference Architecture for deploying Red Hat OpenShift Container Platform on Red Hat OpenStack Platform Red Hat OpenStack Platform 16.2 Guidelines for a validated, private cloud solution August Simonelli [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/reference_architecture_for_deploying_red_hat_openshift_container_platform_on_red_hat_openstack_platform/index |
Chapter 5. Compiler and Tools | Chapter 5. Compiler and Tools dmidecode now supports SMBIOS 3.0.0 This update adds SMBIOS 3.0.0 support to the dmidecode utility. Now, dmidecode can work with 64-bit structures according to the SMBIOS 3.0.0 specification. (BZ# 1232558 ) mcelog now supports additional Intel processors The mcelog utility now supports 6th generation Intel Core processors, Intel Xeon processor E3 v5, and current Intel Pentium and Intel Celeron-branded processors. These new processors report with cpuid 0x4E and 0x5E . Additionally, mcelog now also recognizes cpuids for current Intel Atom processors ( 0x26 , 0x27 , 0x35 , 0x36 , 0x37 , 0x4a , 0x4c , 0x4d , 0x5a , and 0x5d ) and Intel Xeon processor E5 v4, E7 v4, and Intel Xeon D ( 0x56 and 0x4f ). (BZ#1255561) python-linux-procfs rebased to version 0.4.9 The python-linux-procfs packages have been upgraded to upstream version 0.4.9, which provides a number of bug fixes and enhancements over the previous version. Notable fixes include: The package now contains API documentation installed in the /usr/share/docs/python-linux-procfs directory. Handling of space-separated fields in /proc/PID/flags has been improved, which removes parsing errors previously encountered by python-linux-procfs . (BZ# 1255725 ) trace-cmd rebased to version 2.2.4 The trace-cmd packages have been upgraded to upstream version 2.2.4, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: A new option -P is available for the trace-cmd list command. Use this option to list loaded plug-in files by path. The trace-cmd report command has a new option, -t , which can be used to print full time stamps in reports. (BZ# 1218670 ) tcsh now supports $anyerror and $tcsh_posix_status The tcsh command-language interpreter now supports the use of the $anyerror and $tcsh_posix_status variables, which define the tcsh behavior in case of an error of any pipelined command. This update brings the tcsh functionality closer to the Red Hat Enterprise Linux 7 tcsh version. Note that these two variables have opposite logical meanings. For more information, see the tcsh(1) manual page. (BZ#1256653) OpenJDK 8 now supports ECC With this update, OpenJDK 8 supports Elliptic Curve Cryptography (ECC) and the associated ciphers for TLS connections. ECC is in most cases preferable to older cryptographic solutions for making secure network connections. Additionally, the java-1.8.0 package priority has been expanded to 7 digits. (BZ# 1208307 ) RC4 is now disabled by default in OpenJDK 6 and OpenJDK 7 Earlier OpenJDK packages allowed the RC4 cryptographic algorithm to be used when making secure connections using Transport Layer Security (TLS). This algorithm is no longer secure, and so has been disabled in this release. To retain its use, it is necessary to revert the jdk.tls.disabledAlgorithms setting to the earlier value of SSLv3, DH keySize < 768 . This can be done permanently in the <java.home>/jre/lib/security/java.security file or by adding the following line: to a new text file and passing the location of that file to Java on the command line using the argument -Djava.security.properties=<path to file> . (BZ#1217131) rhino rebased to version 1.7R4 Rhino , an open-source implementation of JavaScript written in Java, has been rebased to version 1.7R4. This update fixes a JSON-related bug in the java-1.7.0-openjdk package, which uses rhino as a build dependency. Additionally, the previously missing manual page, README and LICENSE files have been added.
(BZ# 1244351 ) pcp rebased to version 3.10.9 Several enhancements have been made to Performance Co-Pilot (PCP). Note that the majority of Performance Metric Domain Agents (PMDA) have been split into their own subrpms. This allows for more streamlined PCP installations. Additions include new kernel metrics such as Intel NVME device support, IPv6 metrics, and container mappings to LXC containers, several new PMDAs (MIC, json, dm, slurm, pipe), and several new tools, including pcp-verify(1), pcp-shping(1), pcp-atopsar(1), and pmrep(1). An export to Zabbix tool has also been added via zbxpcp(3). The pcp-atop tool has received a full rewrite, including a new NFS feature set. PCP's Performance Metrics Web Daemon (pmwebd) has received improvements, such as opening directories-as-archives for graphite, as well as adding support for the PCP pmStore(3) protocols. sar2pcp(1) has also been updated to include support for sysstat 11.0.1 commands. (BZ# 1248272 ) openmpi rebased to version 1.10.2 The openmpi packages have been upgraded to upstream version 1.10.2, which provides a number of bug fixes and enhancements over the previous version. Notable changes include the following: The new name of the binary package is openmpi-1.10 . Its environment module name on the x86_64 architecture is openmpi-1.10-x86_64 . To preserve compatibility with Red Hat Enterprise Linux 6.7, openmpi-1.8 is still available. Its package name is openmpi-1.8 and it keeps the environment module name ( openmpi-x86_64 on the x86_64 architecture) it had in Red Hat Enterprise Linux 6.7. (BZ#1130442) Changes in Open MPI distribution Open MPI is an open source Message Passing Interface implementation. The compat-openmpi package, which provides earlier versions of Open MPI for backward compatibility with minor releases of Red Hat Enterprise Linux 6, has been split into several subpackages based on the Open MPI version. The names of the subpackages (and their respective environment module names on the x86_64 architecture) are: openmpi-1.4 (openmpi-1.4-x86_64) openmpi-1.4-psm (openmpi-1.4-psm-x86_64) openmpi-1.5.3 (compat-openmpi-x86_64, aliased as openmpi-1.5.3-x86_64) openmpi-1.5.3-psm (compat-openmpi-psm-x86_64, aliased as openmpi-1.5.3-psm-x86_64) openmpi-1.5.4 (openmpi-1.5.4-x86_64) openmpi-1.8 (openmpi-x86_64, aliased as openmpi-1.8-x86_64) The yum install openmpi command in Red Hat Enterprise Linux 6.8 installs the openmpi-1.8 package for maximum compatibility with Red Hat Enterprise Linux 6.7. A later version of Open MPI is available in the openmpi-1.10 package. (BZ# 1158864 ) Omping is now fully supported Open Multicast Ping (Omping) is a tool to test the IP multicast functionality, primarily in the local network. This utility allows users to test IP multicast functionality and assists in diagnosing whether a problem is in the network configuration or is caused by a bug. In Red Hat Enterprise Linux 6, Omping was previously provided as a Technology Preview and it is now fully supported. (BZ# 657370 ) elfutils rebased to version 0.164 The eu-addr2line utility introduces the following improvements: Input addresses are now always interpreted as hexadecimal numbers, never as octal or decimal numbers. A new option, -a , --addresses , to print the address before each entry. A new option, -C , --demangle , to show demangled symbols. A new option, --pretty-print , to print all information on one line. The eu-strip utility is now able to: Handle ELF files with merged strtab and shstrtab tables. Handle missing SHF_INFO_LINK section flags.
The libdw library introduces improvements in the following functions: dwfl_standard_find_debuginfo now searches any subdirectory of the binary path under the debuginfo root when the separate debug file could not be found by build ID. dwfl_linux_proc_attach can now be called before any Dwfl_Modules have been reported. dwarf_peel_type now also handles DW_TAG_atomic_type . Various new preliminary DWARF5 constants are now recognized, namely DW_TAG_atomic_type , DW_LANG_Fortran03 , DW_LANG_Fortran08 , DW_LANG_Haskell . Additionally, a new header file, elfutils/known-dwarf.h , is now installed by the devel package. (BZ#1254647) glibc now supports BIG5-HKSCS-2008 Previously, glibc supported an earlier version of the Hong Kong Supplementary Character Set, BIG5-HKSCS-2004. The BIG5-HKSCS character set map has been updated to the HKSCS-2008 revision of the standard. This allows Red Hat Enterprise Linux customers to write applications processing text that is encoded with this version of the standard. (BZ# 1211748 ) Human-readable installed-rpms The format of the installed-rpms sosreport list has been simplified to allow for optimal human readability. (BZ# 1267677 ) OProfile now supports 6th Generation Intel Core processors With this update, OProfile recognizes the 6th Generation Intel Core processors, and it now provides non-architected performance events for the 6th Generation Intel Core processors instead of defaulting to the small subset of architected performance events. (BZ#1254764) OProfile updated to recognize the Intel Xeon Processor D-1500 product family With this update, support for the Intel Xeon Processor D-1500 product family has been added to OProfile, and the processor-specific events for this product family are now available. Note that some events, such as LLC_REFS and LLC_MISSES , may not count correctly. Check http://www.intel.com/content/www/us/en/processors/xeon/xeon-d-1500-specification-update.html for a complete list of performance events affected. (BZ#1231399) SystemTap rebased to version 2.9 The SystemTap instrumentation system has been rebased to version 2.9. Major improvements in this update include more complete manual pages, more portable and usable netfilter probes, better support for kernel backtraces without debuginfo, better debuginfo-related diagnostics, reduced translator memory usage, and better performance of generated code. (BZ#1254648) powerpc-utils rebased to version 1.3.0 The powerpc-utils packages have been upgraded to upstream version 1.3.0, which provides a number of bug fixes and enhancements over the previous version. (BZ#1252706) ipmitool rebased to version 1.8.15 The ipmitool packages have been upgraded to upstream version 1.8.15, which provides a number of bug fixes and enhancements over the previous version. The notable changes include support for the 13G Dell PowerEdge systems, support for host names longer than 64 bytes, and improved IPv6 support. (BZ#1253416) memtest86+ rebased to version 5.01 The memtest86+ package has been upgraded to upstream version 5.01, which provides a number of bug fixes and enhancements over the previous version. Notable changes include the following: Support for up to 2 TB of RAM on AMD64 and Intel 64 CPUs Support for new Intel and AMD CPUs, for example Intel Haswell Experimental SMT support up to 32 cores For detailed changes, see http://www.memtest.org/#change (BZ#1009083) New package: java-1.8.0-ibm This update adds IBM Java 8 to Red Hat Enterprise Linux 6. The java-1.8.0-ibm package is available in the Supplementary channel.
(BZ#1148503) New option for arpwatch: -p This update introduces option -p for the arpwatch command of the arpwatch network monitoring tool. This option disables promiscuous mode. (BZ#1006479) | [
"jdk.tls.disabledAlgorithms=SSLv3, DH keySize < 768"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_release_notes/new_features_compiler_and_tools |
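To make the jdk.tls.disabledAlgorithms discussion above easier to verify on a given JVM, the following sketch prints the value the runtime actually uses, whether it comes from <java.home>/jre/lib/security/java.security or from a file passed with -Djava.security.properties=<path to file>. The class name TlsPolicyCheck is illustrative; the property name is the one shown in the record, and the commented setProperty line is only one possible way an application might tighten the policy further.

import java.security.Security;

public class TlsPolicyCheck {
    public static void main(String[] args) {
        // Reflects the java.security file plus any overrides supplied via -Djava.security.properties=<path>.
        String disabled = Security.getProperty("jdk.tls.disabledAlgorithms");
        System.out.println("jdk.tls.disabledAlgorithms = " + disabled);

        // An application could also tighten the policy programmatically before any TLS code runs,
        // for example by appending RC4 explicitly:
        // Security.setProperty("jdk.tls.disabledAlgorithms", disabled + ", RC4");
    }
}

Running it once with and once without the -Djava.security.properties argument shows which setting the JVM picked up.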
Appendix C. Revision History | Appendix C. Revision History 0.3-4 Fri June 28, Lenka Spackova ( [email protected] ) Updated a link to the Converting from a Linux distribution to RHEL using the Convert2RHEL utility guide (Overview). 0.3-3 Fri Apr 28 2023, Lucie Varakova ( [email protected] ) Added a known issue JIRA:RHELPLAN-155168 (Authentication and Interoperability). 0.3-2 Wed Oct 19 2022, Lenka Spackova ( [email protected] ) Added information on how to configure unbound to run inside chroot , BZ#2121623 (Networking). 0.3-1 Wed Sep 21 2022, Lenka Spackova ( [email protected] ) Added two new ehnancements, BZ#1967950 and BZ#1993822 (Security). 0.3-0 Fri Apr 22 2022, Lenka Spackova ( [email protected] ) Added two deprecated packages to Deprecated Functionality . 0.2-9 Thu Feb 17 2022, Lenka Spackova ( [email protected] ) Added two notes related to supportability to Deprecated Functionality . 0.2-8 Tue Feb 08 2022, Lenka Spackova ( [email protected] ) Added information about the hidepid=n mount option not being recommended in RHEL 7 to Deprecated Functionality . 0.2-7 Wed Jan 26 2022, Lenka Spackova ( [email protected] ) Added a known issue BZ#2042313 (System and Subscription Management). 0.2-6 Tue Dec 07 2021, Lenka Spackova ( [email protected] ) Added a bug fix BZ#1942281 (Security). Changed a known issue to a bug fix BZ#1976123 (Security). 0.2-5 Tue Aug 17 2021, Lenka Spackova ( [email protected] ) Updated the Red Hat Software Collections section. 0.2-4 Wed Jul 21 2021, Lenka Spackova ( [email protected] ) Added enhancements BZ#1958789 and BZ#1955180 (Security). 0.2-3 Mon Jul 12 2021, Lenka Spackova ( [email protected] ) Added a known issue BZ#1976123 (Security). 0.2-2 Thu Jun 03 2021, Lenka Spackova ( [email protected] ) Added a known issue BZ#1933998 (Kernel). Added a bug fix BZ#1890111 (Security). 0.2-1 Fri May 21 2021, Lenka Spackova ( [email protected] ) Updated information about OS conversion in Overview . 0.2-0 Wed Apr 28 2020, Lenka Spackova ( [email protected] ) Added a bug fix BZ#1891435 (Security). 0.1-9 Mon Apr 26 2020, Lenka Spackova ( [email protected] ) Added a known issue BZ#1942865 (Storage). 0.1-8 Tue Apr 06 2021, Lenka Spackova ( [email protected] ) Improved the list of supported architectures. 0.1-7 Wed Mar 31 2021, Lenka Spackova ( [email protected] ) Updated information about OS conversions with the availability of the supported Convert2RHEL utility. 0.1-6 Tue Mar 30 2021, Lenka Spackova ( [email protected] ) Added a known issue (Kernel). 0.1-5 Tue Mar 02 2021, Lenka Spackova ( [email protected] ) Updated a link to Upgrading from RHEL 6 to RHEL 7 . Fixed CentOS Linux name. 0.1-4 Wed Feb 03 2021, Lenka Spackova ( [email protected] ) Added a note about deprecated parameters for the network configuration in the kernel command line. 0.1-3 Tue Feb 02 2021, Lenka Spackova ( [email protected] ) Added a retirement notice for Red Hat Enterprise Linux Atomic Host . 0.1-2 Thu Jan 28 2021, Lenka Spackova ( [email protected] ) Added a note related to the new page_owner kernel parameter. 0.1-1 Tue Jan 19 2021, Lenka Spackova ( [email protected] ) Updated deprecated packages. 0.1-0 Wed Dec 16 2020, Lenka Spackova ( [email protected] ) Added mthca to deprecated drivers. 0.0-9 Tue Dec 15 2020, Lenka Spackova ( [email protected] ) Added information about the STIG security profile update (Security). 0.0-8 Wed Nov 25 2020, Lenka Spackova ( [email protected] ) Added a known issue (Security). 
0.0-7 Wed Nov 11 2020, Lenka Spackova ( [email protected] ) Added a known issue (RHEL in cloud environments). 0.0-6 Tue Oct 13 2020, Lenka Spackova ( [email protected] ) Updated deprecated adapters. Fixed a driver name in a Technology Preview note ( iavf ). 0.0-5 Tue Sep 29 2020, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.9 Release Notes. 0.0-4 Mon Sep 7 2020, Jaroslav Klech ( [email protected] ) Provided the correct expansion of BERT in the kernel parameters section. 0.0-3 Thu Jun 25 2020, Lenka Spackova ( [email protected] ) Added a known issue related to OpenLDAP libraries (Servers and Services). 0.0-2 Tue Jun 23 2020, Jaroslav Klech ( [email protected] ) Added and granulated the kernel parameters chapter. Added the device drivers chapter. 0.0-1 Thu Jun 18 2020, Lenka Spackova ( [email protected] ) Various additions. 0.0-0 Wed May 20 2020, Lenka Spackova ( [email protected] ) Release of the Red Hat Enterprise Linux 7.9 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.9_release_notes/revision_history |
Chapter 12. Uninstalling Red Hat OpenShift AI Self-Managed | Chapter 12. Uninstalling Red Hat OpenShift AI Self-Managed This section shows how to use the OpenShift command-line interface (CLI) to uninstall the Red Hat OpenShift AI Operator and any OpenShift AI components installed and managed by the Operator. Note Using the CLI is the recommended way to uninstall the Operator. Depending on your version of OpenShift, using the web console to perform the uninstallation might not prompt you to uninstall all associated components. This could leave you unclear about the final state of your cluster. 12.1. Understanding the uninstallation process Installing Red Hat OpenShift AI created several custom resource instances on your OpenShift cluster for various components of OpenShift AI. After installation, users likely created several additional resources while using OpenShift AI. Uninstalling OpenShift AI removes the resources that were created by the Operator, but retains the resources created by users to prevent inadvertently deleting information you might want. What is deleted Uninstalling OpenShift AI removes the following resources from your OpenShift cluster: DataScienceCluster custom resource instance and the custom resource instances it created for each component DSCInitialization custom resource instance Auth custom resource instance created during or after installation FeatureTracker custom resource instances created during or after installation ServiceMesh custom resource instance created by the Operator during or after installation KNativeServing custom resource instance created by the Operator during or after installation redhat-ods-applications , redhat-ods-monitoring , and rhods-notebooks namespaces created by the Operator Workloads in the rhods-notebooks namespace Subscription , ClusterServiceVersion , and InstallPlan objects KfDef object (version 1 Operator only) What might remain Uninstalling OpenShift AI retains the following resources in your OpenShift cluster: Data science projects created by users Custom resource instances created by users Custom resource definitions (CRDs) created by users or by the Operator While these resources might still remain in your OpenShift cluster, they are not functional. After uninstalling, Red Hat recommends that you review the data science projects and custom resources in your OpenShift cluster and delete anything no longer in use to prevent potential issues, such as pipelines that cannot run, notebooks that cannot be undeployed, or models that cannot be undeployed. Additional resources Operator Lifecycle Manager (OLM) uninstall documentation 12.2. Uninstalling OpenShift AI Self-Managed by using the CLI The following procedure shows how to use the OpenShift command-line interface (CLI) to uninstall the Red Hat OpenShift AI Operator and any OpenShift AI components installed and managed by the Operator. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You have backed up the persistent disks or volumes used by your persistent volume claims (PVCs). Procedure Open a new terminal window. In the OpenShift command-line interface (CLI), log in to your OpenShift cluster as a cluster administrator, as shown in the following example: Create a ConfigMap object for deletion of the Red Hat OpenShift AI Operator. To delete the rhods-operator , set the addon-managed-odh-delete label to true . 
When all objects associated with the Operator are removed, delete the redhat-ods-operator project: Set an environment variable for the redhat-ods-applications project. Wait until the redhat-ods-applications project has been deleted. When the redhat-ods-applications project has been deleted, you see the following output. When the redhat-ods-applications project has been deleted, delete the redhat-ods-operator project. Verification Confirm that the rhods-operator subscription no longer exists. Confirm that the following projects no longer exist: redhat-ods-applications redhat-ods-monitoring redhat-ods-operator rhods-notebooks Note The rhods-notebooks project was created only if you installed the workbenches component of OpenShift AI. See Installing and managing Red Hat OpenShift AI components . | [
"oc login <openshift_cluster_url> -u system:admin",
"oc create configmap delete-self-managed-odh -n redhat-ods-operator",
"oc label configmap/delete-self-managed-odh api.openshift.com/addon-managed-odh-delete=true -n redhat-ods-operator",
"PROJECT_NAME=redhat-ods-applications",
"while oc get project USDPROJECT_NAME &> /dev/null; do echo \"The USDPROJECT_NAME project still exists\" sleep 1 done echo \"The USDPROJECT_NAME project no longer exists\"",
"The redhat-ods-applications project no longer exists",
"oc delete namespace redhat-ods-operator",
"oc get subscriptions --all-namespaces | grep rhods-operator",
"oc get namespaces | grep -e redhat-ods* -e rhods*"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/installing_and_uninstalling_openshift_ai_self-managed_in_a_disconnected_environment/uninstalling-openshift-ai-self-managed_uninstalling-openshift-ai-self-managed |
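After the uninstallation, the cleanup review described above can also be started from the command line. The following is a minimal sketch, not part of the official procedure; it assumes that leftover OpenShift AI custom resource definitions use the opendatahub.io API group:
oc get crd | grep -i opendatahub   # list CRDs that the uninstall may have left behind
oc get datasciencecluster --all-namespaces   # confirm no instances remain (errors if the CRD was already removed)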
28.3. Disabling Anonymous Binds | 28.3. Disabling Anonymous Binds Accessing domain resources and running client tools always require Kerberos authentication. However, the backend LDAP directory used by the IdM server allows anonymous binds by default. This potentially opens up all of the domain configuration to unauthorized users, including information about users, machines, groups, services, netgroups, and DNS configuration. It is possible to disable anonymous binds on the 389 Directory Server instance by using LDAP tools to reset the nsslapd-allow-anonymous-access attribute. Change the nsslapd-allow-anonymous-access attribute to rootdse . Important Anonymous access can be completely allowed (on) or completely blocked (off). However, completely blocking anonymous access also blocks external clients from checking the server configuration. LDAP and web clients are not necessarily domain clients, so they connect anonymously to read the root DSE file to get connection information. The rootdse allows access to the root DSE and server configuration without any access to the directory data. Restart the 389 Directory Server instance to load the new setting. | [
"ldapmodify -x -D \"cn=Directory Manager\" -w secret -h server.example.com -p 389 Enter LDAP Password: dn: cn=config changetype: modify replace: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse",
"service dirsrv restart"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/disabling-anon-binds |
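To confirm that the change took effect, you can read the attribute back with an authenticated bind and then attempt an anonymous search, which should no longer return directory data. This is a sketch that assumes the same server.example.com host and the example dc=example,dc=com suffix used above:
ldapsearch -x -D "cn=Directory Manager" -W -h server.example.com -p 389 -b cn=config -s base "(objectclass=*)" nsslapd-allow-anonymous-access   # read the current setting
ldapsearch -x -h server.example.com -p 389 -b "dc=example,dc=com" "(objectclass=*)"   # anonymous search of directory data should now be rejected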
Chapter 2. Deploy OpenShift Data Foundation using local storage devices | Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on bare metal infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create OpenShift Data Foundation cluster on bare metal . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as either 4.9 or stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . 
Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 2.3. Creating Multus networks [Technology Preview] OpenShift Container Platform uses the Multus CNI plug-in to allow chaining of CNI plug-ins. During cluster installation, you can configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plug-ins and attach one or more of these networks to your pods. To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of the NetworkAttachmentDefinition defines how that interface is created. OpenShift Data Foundation uses the CNI plug-in called macvlan. Creating a macvlan-based additional network allows pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. Important Multus support is a Technology Preview feature that is only supported and has been tested on bare metal and VMWare deployments. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . 2.3.1. Creating network attachment definitions To utilize Multus, an already working cluster with the correct networking configuration is required, see Recommended network configuration and requirements for a Multus configuration . The newly created NetworkAttachmentDefinition (NAD) can be selected during the Storage Cluster installation. This is the reason they must be created before the Storage Cluster. As detailed in the Planning Guide, the Multus networks you create depend on the number of available network interfaces you have for OpenShift Data Foundation traffic. It is possible to separate all of the storage traffic onto one of two interfaces (one interface used for default OpenShift SDN) or to further segregate storage traffic into client storage traffic (public) and storage replication traffic (private or cluster). The following is an example NetworkAttachmentDefinition for all storage traffic, public and cluster, on the same interface. 
It requires one additional interface on all schedulable nodes (OpenShift default SDN on separate network interface). Note All network interface names must be the same on all the nodes attached to the Multus network (that is, ens2 for ocs-public-cluster ). The following is an example NetworkAttachmentDefinition for storage traffic on separate Multus networks, public, for client storage traffic, and cluster, for replication traffic. It requires two additional interfaces on OpenShift nodes hosting OSD pods and one additional interface on all other schedulable nodes (OpenShift default SDN on separate network interface). Example NetworkAttachmentDefinition : Note All network interface names must be the same on all the nodes attached to the Multus networks (that is, ens2 for ocs-public , and ens3 for ocs-cluster ). 2.4. Creating OpenShift Data Foundation cluster on bare metal Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. If you want to use the technology preview feature of multus support, before deployment you must create network attachment definitions (NADs) that is later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices option. Expand Advanced and select Full Deployment for the Deployment type option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. This feature is available only in new deployments of Red Hat OpenShift Data Foundation versions 4.7 and later. Storage clusters upgraded from a version to version 4.7 or later do not support flexible scaling. For more information, see Flexible scaling of OpenShift Data Foundation cluster in the New features section of Release Notes . If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device type from the dropdown list. 
Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> ''), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide CA Certificate , Client Certificate and Client Private Key . Click Save . Choose one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachementDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . 
Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verifying the Multus networking . 2.5.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information about the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation cluster" . Click the Running and Completed tabs to verify that the following pods are in Running and Completed state: Table 2.1. Pods corresponding to OpenShift Data Foundation cluster Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. 
For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . 2.5.4. Verifying that the OpenShift Data Foundation specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.5.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally the cluster will be self configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network and cluster network for the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm the OSD pods are using correct network In the openshift-storage namespace use one of the OSD pods to verify the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm the OSD pods are using correct network using the command line interface, run the following command (requires the jq utility): Sample output: | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-public namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens2\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.1.0/24\" } }'",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ocs-cluster namespace: openshift-storage spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens3\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.2.0/24\" } }'",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_bare_metal_infrastructure/deploy-using-local-storage-devices-bm |
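Beyond the console checks above, the storage cluster state and the generated storage classes can also be confirmed from the command line. This is a minimal sketch, assuming the default ocs-storagecluster name and the openshift-storage namespace used throughout this chapter:
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.status.phase}{"\n"}'   # expected to report Ready
oc get storageclass | grep -e ocs-storagecluster -e openshift-storage.noobaa.io   # list the OpenShift Data Foundation storage classes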
Preface | Preface Red Hat Enterprise Linux (RHEL) minor releases are an aggregation of individual security, enhancement, and bug fix errata. The Red Hat Enterprise Linux 7 Release Notes document describes the major changes made to the Red Hat Enterprise Linux 7 operating system and its accompanying applications for this minor release, as well as known problems and a complete list of all currently available Technology Previews. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.8_release_notes/preface |
Chapter 12. Enabling JSON logging | Chapter 12. Enabling JSON logging You can configure the Log Forwarding API to parse JSON strings into a structured object. 12.1. Parsing JSON logs Logs including JSON logs are usually represented as a string inside the message field. That makes it hard for users to query specific fields inside a JSON document. OpenShift Logging's Log Forwarding API enables you to parse JSON logs into a structured object and forward them to either OpenShift Logging-managed Elasticsearch or any other third-party system supported by the Log Forwarding API. To illustrate how this works, suppose that you have the following structured JSON log entry. Example structured JSON log entry {"level":"info","name":"fred","home":"bedrock"} Normally, the ClusterLogForwarder custom resource (CR) forwards that log entry in the message field. The message field contains the JSON-quoted string equivalent of the JSON log entry, as shown in the following example. Example message field {"message":"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"", "more fields..."} To enable parsing JSON log, you add parse: json to a pipeline in the ClusterLogForwarder CR, as shown in the following example. Example snippet showing parse: json pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json When you enable parsing JSON logs by using parse: json , the CR copies the JSON-structured log entry in a structured field, as shown in the following example. This does not modify the original message field. Example structured output containing the structured JSON log entry {"structured": { "level": "info", "name": "fred", "home": "bedrock" }, "more fields..."} Important If the log entry does not contain valid structured JSON, the structured field will be absent. To enable parsing JSON logs for specific logging platforms, see Forwarding logs to third-party systems . 12.2. Configuring JSON log data for Elasticsearch If your JSON logs follow more than one schema, storing them in a single index might cause type conflicts and cardinality problems. To avoid that, you must configure the ClusterLogForwarder custom resource (CR) to group each schema into a single output definition. This way, each schema is forwarded to a separate index. Important If you forward JSON logs to the default Elasticsearch instance managed by OpenShift Logging, it generates new indices based on your configuration. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Structure types You can use the following structure types in the ClusterLogForwarder CR to construct index names for the Elasticsearch log store: structuredTypeKey (string, optional) is the name of a message field. The value of that field, if present, is used to construct the index name. kubernetes.labels.<key> is the Kubernetes pod label whose value is used to construct the index name. openshift.labels.<key> is the pipeline.label.<key> element in the ClusterLogForwarder CR whose value is used to construct the index name. kubernetes.container_name uses the container name to construct the index name. structuredTypeName : (string, optional) If structuredTypeKey is not set or its key is not present, OpenShift Logging uses the value of structuredTypeName as the structured type. When you use both structuredTypeKey and structuredTypeName together, structuredTypeName provides a fallback index name if the key in structuredTypeKey is missing from the JSON log data. 
Note Although you can set the value of structuredTypeKey to any field shown in the "Log Record Fields" topic, the most useful fields are shown in the preceding list of structure types. A structuredTypeKey: kubernetes.labels.<key> example Suppose the following: Your cluster is running application pods that produce JSON logs in two different formats, "apache" and "google". The user labels these application pods with logFormat=apache and logFormat=google . You use the following snippet in your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: <application> outputRefs: default parse: json 2 1 Uses the value of the key-value pair that is formed by the Kubernetes logFormat label. 2 Enables parsing JSON logs. In that case, the following structured log record goes to the app-apache-write index: And the following structured log record goes to the app-google-write index: A structuredTypeKey: openshift.labels.<key> example Suppose that you use the following snippet in your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2 1 Uses the value of the key-value pair that is formed by the OpenShift myLabel label. 2 The myLabel element gives its string value, myValue , to the structured log record. In that case, the following structured log record goes to the app-myValue-write index: Additional considerations The Elasticsearch index for structured records is formed by prepending "app-" to the structured type and appending "-write". Unstructured records are not sent to the structured index. They are indexed as usual in the application, infrastructure, or audit indices. If there is no non-empty structured type, forward an unstructured record with no structured field. It is important not to overload Elasticsearch with too many indices. Only use distinct structured types for distinct log formats , not for each application or namespace. For example, most Apache applications use the same JSON log format and structured type, such as LogApache . 12.3. Forwarding JSON logs to the Elasticsearch log store For an Elasticsearch log store, if your JSON log entries follow different schemas , configure the ClusterLogForwarder custom resource (CR) to group each JSON schema into a single output definition. This way, Elasticsearch uses a separate index for each schema. Important Because forwarding different schemas to the same index can cause type conflicts and cardinality problems, you must perform this configuration before you forward data to the Elasticsearch store. To avoid performance issues associated with having too many indices, consider keeping the number of possible schemas low by standardizing to common schemas. Procedure Add the following snippet to your ClusterLogForwarder CR YAML file. outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json Optional: Use structuredTypeKey to specify one of the log record fields, as described in the preceding topic, Configuring JSON log data for Elasticsearch . Otherwise, remove this line. 
Optional: Use structuredTypeName to specify a <name> , as described in the preceding topic, Configuring JSON log data for Elasticsearch . Otherwise, remove this line. Important To parse JSON logs, you must set either structuredTypeKey or structuredTypeName , or both structuredTypeKey and structuredTypeName . For inputRefs , specify which log types to forward by using that pipeline, such as application, infrastructure , or audit . Add the parse: json element to pipelines. Create the CR object: USD oc create -f <file-name>.yaml The Red Hat OpenShift Logging Operator redeploys the Fluentd pods. However, if they do not redeploy, delete the Fluentd pods to force them to redeploy. USD oc delete pod --selector logging-infra=collector Additional resources Forwarding logs to third-party systems | [
"{\"level\":\"info\",\"name\":\"fred\",\"home\":\"bedrock\"}",
"{\"message\":\"{\\\"level\\\":\\\"info\\\",\\\"name\\\":\\\"fred\\\",\\\"home\\\":\\\"bedrock\\\"\", \"more fields...\"}",
"pipelines: - inputRefs: [ application ] outputRefs: myFluentd parse: json",
"{\"structured\": { \"level\": \"info\", \"name\": \"fred\", \"home\": \"bedrock\" }, \"more fields...\"}",
"outputDefaults: elasticsearch: structuredTypeKey: kubernetes.labels.logFormat 1 structuredTypeName: nologformat pipelines: - inputRefs: <application> outputRefs: default parse: json 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"apache\", ...}} }",
"{ \"structured\":{\"name\":\"wilma\",\"home\":\"bedrock\"}, \"kubernetes\":{\"labels\":{\"logFormat\": \"google\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: openshift.labels.myLabel 1 structuredTypeName: nologformat pipelines: - name: application-logs inputRefs: - application - audit outputRefs: - elasticsearch-secure - default parse: json labels: myLabel: myValue 2",
"{ \"structured\":{\"name\":\"fred\",\"home\":\"bedrock\"}, \"openshift\":{\"labels\":{\"myLabel\": \"myValue\", ...}} }",
"outputDefaults: elasticsearch: structuredTypeKey: <log record field> structuredTypeName: <name> pipelines: - inputRefs: - application outputRefs: default parse: json",
"oc create -f <file-name>.yaml",
"oc delete pod --selector logging-infra=collector"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/logging/cluster-logging-enabling-json-logging |
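After you apply the ClusterLogForwarder change, you can watch the collector pods roll out and review the stored configuration. This sketch assumes the default instance name and the standard openshift-logging namespace:
oc get pods -n openshift-logging --selector logging-infra=collector -w   # wait for the Fluentd collector pods to redeploy
oc get clusterlogforwarder instance -n openshift-logging -o yaml   # confirm the parse: json and structuredType settings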
D.3. Additional resources | D.3. Additional resources Installing Red Hat Virtualization Hosts Configuring and Applying SCAP Policies During Installation Installers and Images for Red Hat Virtualization Manager (v. 4.4 for x86_64) Security policies available in the SCAP Security Guide Security Hardening for Red Hat Enterprise Linux 8 | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/additional_resources_2 |
6.4. Specifying the Location of a Driver Update Image File or a Driver Update Disk | 6.4. Specifying the Location of a Driver Update Image File or a Driver Update Disk If the installer detects more than one possible device that could hold a driver update, it prompts you to select the correct device. If you are not sure which option represents the device on which the driver update is stored, try the various options in order until you find the correct one. Figure 6.7. Selecting a driver disk source If the device that you choose contains no suitable update media, the installer will prompt you to make another choice. If you made a driver update disk on CD, DVD, or USB flash drive, the installer now loads the driver update. However, if the device that you selected is a type of device that could contain more than one partition (whether the device currently has more than one partition or not), the installer might prompt you to select the partition that holds the driver update. Figure 6.8. Selecting a driver disk partition The installer prompts you to specify which file contains the driver update: Figure 6.9. Selecting an ISO image Expect to see these screens if you stored the driver update on an internal hard drive or on a USB storage device. You should not see them if the driver update is on a CD or DVD. Regardless of whether you are providing a driver update in the form of an image file or with a driver update disk, the installer now copies the appropriate update files into a temporary storage area (located in system RAM and not on disk). The installer might ask whether you would like to use additional driver updates. If you select Yes , you can load additional updates in turn. When you have no further driver updates to load, select No . If you stored the driver update on removable media, you can now safely eject or disconnect the disk or device. The installer no longer requires the driver update, and you can re-use the media for other purposes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sect-Driver_updates-Specifying_the_location_of_a_driver_update_image_file_or_driver_update_disk-x86 |
Managing directory attributes and values | Managing directory attributes and values Red Hat Directory Server 12 Managing directory entries using ldapadd, ldapmodify, ldapdelete, and dsconf utilities or web console Red Hat Customer Content Services | [
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=people,dc=example,dc=com changetype: modify delete: telephoneNumber - add: manager manager: cn=manager_name,ou=people,dc=example,dc=com modifying entry \"uid=user,ou=people,dc=example,dc=com\" ^D",
"command_that_outputs_LDIF | ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x",
"dn: uid=user,ou=people,dc=example,dc=com changetype: modify delete: telephoneNumber - add: manager manager: cn=manager_name,ou=people,dc=example,dc=com",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -f ~/example.ldif",
"ldpamodify -c -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x",
"ldapadd -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com uid: user givenName: given_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetorgperson sn: surname cn: user",
"ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com uid: user givenName: given_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetorgperson sn: surname cn: user",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: dc=example,dc=com changetype: add objectClass: top objectClass: domain dc: example",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: telephoneNumber telephoneNumber: 555-1234567",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: telephoneNumber telephoneNumber: 555-1234567 telephoneNumber: 555-7654321",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify replace: manager manager: uid=manager_name,ou=People,dc=example,dc=com",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: telephoneNumber telephoneNumber: 555-1234567 - add: telephoneNumber telephoneNumber: 555-9876543",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: manager",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify delete: telephoneNumber telephoneNumber: 555-1234567",
"The following rename operations exist:",
"dn: cn=new_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupOfUniqueNames cn: old_group cn: new_group",
"dn: cn=new_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupofuniquenames cn: new_group",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=demo1,dc=example,dc=com changetype: modrdn newrdn: cn=demo2 deleteOldRDN: 1",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: cn=demo,ou=Germany,dc=example,dc=com changetype: modrdn newrdn: cn=demo newSuperior: ou=France,dc=example,dc=com deleteOldRDN: 1",
"ldapdelete -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x \" uid=user,ou=People,dc=example,dc=com \"",
"ldapdelete -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x \" uid=user1,ou=People,dc=example,dc=com \" \" uid=user2,ou=People,dc=example,dc=com \"",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: delete",
"ldapmodify -a -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x",
"ldapmodify -a -D \" cn=uid=user,ou=People,dc=example.com Chicago\\, IL \" -W -H ldap://server.example.com -x",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify add: jpegPhoto jpegPhoto:< file: ///home/user_name/photo.jpg",
"ldapmodify -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x dn: uid=user,ou=People,dc=example,dc=com changetype: modify replace: homePostalAddress; lang-fr homePostalAddress; lang-fr : 34 rue de Seine",
"ldapmodify -D \"cn=Directory Manager\" -W -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: posixAccount uid: jsmith *cn: John Smith uidNumber: 0 gidNumber: 0",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: uidNumber uidNumber: 0 - add:gidNumber gidNumber: 0",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: uidNumber idNumber: 0 ^D ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: uid=jsmith,ou=people,dc=example,dc=com changetype: modify add: employeeId employeeId: magic",
"dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config objectClass: top objectClass: dnaPluginConfig cn: Account UIDs dnatype: uidNumber dnafilter: (objectclass=posixAccount) dnascope: ou=people,dc=example,dc=com dnaNextValue: 1",
"dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config objectClass: top objectClass: dnaPluginConfig cn: Account UIDs dnatype: uidNumber dnafilter: (objectclass=posixAccount) dnascope: ou=people,dc=example,dc=com dnanextvalue: 1 dnaMaxValue: 1300 dnasharedcfgdn: cn=Account UIDs,ou=ranges,dc=example,dc=com dnathreshold: 100 dnaRangeRequestTimeout: 60 dnaNextRange: 1301-2301",
"dn: dnaHostname=ldap1.example.com+dnaPortNum=389,cn=Account UIDs,ou=Ranges,dc=example,dc=com objectClass: dnaSharedConfig objectClass: top dnahostname: ldap1.example.com dnaPortNum: 389 dnaSecurePortNum: 636 dnaRemainingValues: 1000",
"dsconf -D \"cn=Directory Manager\" instance_name plugin dna config \"Account UIDs\" add --type uidNumber --filter \"(objectclass=posixAccount)\" --scope ou=People,dc=example,dc=com --next-value 1 --max-value 1300 --shared-config-entry \"cn=Account UIDs,ou=Ranges,dc=example,dc=com\" --threshold 100 --range-request-timeout 60 --magic-regen 99999 Successfully created the cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: ou=Ranges,dc=example,dc=com changetype: add objectclass: top objectclass: extensibleObject objectclass: organizationalUnit ou: Ranges - dn: cn=Account UIDs,ou=Ranges,dc=example,dc=com changetype: add objectclass: top objectclass: extensibleObject cn: Account UIDs",
"dsconf -D \"cn=Directory Manager\" instance_name plugin dna enable Enabled plugin 'Distributed Numeric Assignment Plugin'",
"dsconf -D \"cn=Directory Manager\" instance_name plugin dna config \"Account UIDs\" show dn: cn=Account UIDs,cn=Distributed Numeric Assignment Plugin,cn=plugins,cn=config cn: Account UIDs dnaFilter: \"(objectclass=posixAccount)\" dnaInterval: 1 dnaMagicRegen: 99999 dnaMaxValue: 1300 dnaNextValue: 1 dnaRangeRequestTimeout: 60 dnaScope: ou=People,dc=example,dc=com dnaSharedCfgDN: cn=Account UIDs,ou=Ranges,dc=example,dc=com dnaThreshold: 100 dnaType: uidNumber objectClass: top objectClass: dnaPluginConfig",
"uniqueness-attribute-name: mail uniqueness-subtrees: ou=accounting,dc=example,dc=com uniqueness-subtrees: ou=sales,dc=example,dc=com uniqueness-across-all-subtrees: on uniqueness-exclude-subtrees: ou=private,ou=people,dc=example,dc=com",
"uniqueness-attribute-name: mail uniqueness-top-entry-oc: nsContainer uniqueness-subtree-entries-oc: inetOrgPerson uniqueness-exclude-subtrees: ou=private,ou=people,dc=example,dc=com",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq add \"Mail Uniqueness\" --attr-name mail --subtree ou=sales,dc=example,dc=com ou=accounting,dc=example,dc=com",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq set \"Mail Uniqueness\" --across-all-subtrees on",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: cn=Mail Uniqueness,cn=plugins,cn=config changetype: modify add: uniqueness-exclude-subtrees uniqueness-exclude-subtrees: ou=internal,ou=sales,dc=example,dc=com",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq set \"Mail Uniqueness\" --subtree-entries-oc=inetOrgPerson",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq enable \"Mail Uniqueness\"",
"dsctl instance_name restart",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq show \"Mail Uniqueness\" dn: cn=Mail Uniqueness,cn=plugins,cn=config cn: Mail Uniqueness nsslapd-plugin-depends-on-type: database nsslapd-pluginDescription: Enforce unique attribute values nsslapd-pluginEnabled: on uniqueness-across-all-subtrees: on uniqueness-attribute-name: mail uniqueness-exclude-subtrees: ou=internal,ou=sales,dc=example,dc=com uniqueness-subtree-entries-oc: inetOrgPerson uniqueness-subtrees: ou=accounting,dc=example,dc=com uniqueness-subtrees: ou=sales,dc=example,dc=com",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq add \"Mail Uniqueness with OC\" --attr-name mail --subtree-entries-oc=inetOrgPerson --top-entry-oc=nsContainer",
"ldapmodify -D \"cn=Directory Manager\" -W -H ldap:// server.example.com -x dn: cn=Mail Uniqueness with OC,cn=plugins,cn=config changetype: modify add: uniqueness-exclude-subtrees uniqueness-exclude-subtrees: ou=internal,ou=sales,dc=example,dc=com",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin attr-uniq enable \"Mail Uniqueness with OC\"",
"dsctl instance_name restart",
"dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin attr-uniq show \"Mail Uniqueness with OC\" dn: cn=Mail Uniqueness with OC,cn=plugins,cn=config cn: Mail Uniqueness with OC nsslapd-plugin-depends-on-type: database nsslapd-pluginDescription: none nsslapd-pluginEnabled: on uniqueness-attribute-name: mail uniqueness-exclude-subtrees: ou=internal,ou=sales,dc=example,dc=com uniqueness-subtree-entries-oc: inetOrgPerson uniqueness-top-entry-oc: nsContainer"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html-single/managing_directory_attributes_and_values/managing_directory_attributes_and_values |
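After any of the ldapmodify or dsconf operations shown above, the affected entry can be read back to confirm the change. A minimal sketch, using the same example host and suffix as the other commands in this document:
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -x -b "uid=user,ou=People,dc=example,dc=com" "(objectclass=*)"   # display the modified entry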
Chapter 84. user | Chapter 84. user This chapter describes the commands under the user command. 84.1. user create Create new user Usage: Table 84.1. Positional arguments Value Summary <name> New user name Table 84.2. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Default domain (name or id) --project <project> Default project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --password <password> Set user password --password-prompt Prompt interactively for password --email <email-address> Set user email address --description <description> User description --enable Enable user (default) --disable Disable user --or-show Return existing user Table 84.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 84.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 84.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 84.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 84.2. user delete Delete user(s) Usage: Table 84.7. Positional arguments Value Summary <user> User(s) to delete (name or id) Table 84.8. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <user> (name or id) 84.3. user list List users Usage: Table 84.9. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter users by <domain> (name or id) --group <group> Filter users by <group> membership (name or id) --project <project> Filter users by <project> (name or id) --long List additional fields in output Table 84.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 84.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 84.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 84.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 84.4. user password set Change current user password Usage: Table 84.14. 
Command arguments Value Summary -h, --help Show this help message and exit --password <new-password> New user password --original-password <original-password> Original user password 84.5. user set Set user properties Usage: Table 84.15. Positional arguments Value Summary <user> User to modify (name or id) Table 84.16. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set user name --domain <domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --project <project> Set default project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --password <password> Set user password --password-prompt Prompt interactively for password --email <email-address> Set user email address --description <description> Set user description --enable Enable user (default) --disable Disable user 84.6. user show Display user details Usage: Table 84.17. Positional arguments Value Summary <user> User to display (name or id) Table 84.18. Command arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <user> (name or id) Table 84.19. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 84.20. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 84.21. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 84.22. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack user create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--project <project>] [--project-domain <project-domain>] [--password <password>] [--password-prompt] [--email <email-address>] [--description <description>] [--enable | --disable] [--or-show] <name>",
"openstack user delete [-h] [--domain <domain>] <user> [<user> ...]",
"openstack user list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--domain <domain>] [--group <group> | --project <project>] [--long]",
"openstack user password set [-h] [--password <new-password>] [--original-password <original-password>]",
"openstack user set [-h] [--name <name>] [--domain <domain>] [--project <project>] [--project-domain <project-domain>] [--password <password>] [--password-prompt] [--email <email-address>] [--description <description>] [--enable | --disable] <user>",
"openstack user show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] <user>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/user |
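As a quick illustration of how these subcommands fit together, the following sketch creates, inspects, updates, lists, and deletes an account. The domain, project, and user names (Default, demo, jdoe) are placeholders and must refer to objects that already exist in your deployment; every flag used here is documented in the tables above.
openstack user create --domain Default --project demo --email jdoe@example.com --password-prompt jdoe
openstack user show --domain Default jdoe
openstack user set --description "Demo project user" jdoe
openstack user list --project demo --long
openstack user delete --domain Default jdoe
Because user set and user delete act on names that may collide across domains, passing --domain explicitly is the safer habit.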
Chapter 5. Upgrading from RHEL 7.9 to RHEL 8 | Chapter 5. Upgrading from RHEL 7.9 to RHEL 8 Similarly to the in-place upgrade from RHEL 6 to RHEL 7, the in-place upgrade from RHEL 7 to RHEL 8 consists of two major stages: a pre-upgrade assessment, during which the system remains unchanged, and the actual in-place upgrade. For a RHEL 7 to RHEL 8 upgrade, both stages are handled by the Leapp utility. Note that RHEL version 7.9 is a prerequisite for upgrading to RHEL 8. To perform an in-place upgrade from RHEL 7.9 to RHEL 8: Assess the upgradability of your system and fix reported problems as described in Reviewing the pre-upgrade report of the Upgrading from RHEL 7 to RHEL 8 document. Upgrade your RHEL 7 system to RHEL 8 following the instructions in Performing the upgrade from RHEL 7 to RHEL 8 of the Upgrading from RHEL 7 to RHEL 8 document. Additional resources Troubleshooting in the Upgrading from RHEL 7 to RHEL 8 document | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_6_to_rhel_8/upgrading-from-rhel-7-9-to-rhel-8_upgrading-from-rhel-6-to-rhel-8
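In outline, the Leapp flow behind the two stages looks like the sketch below; treat it as a reminder only, since the referenced documents describe the full, supported procedure, its prerequisites, and the repositories that must be enabled. All commands are run as root on the RHEL 7.9 system.
leapp preupgrade                       # assessment only; the system is not modified
less /var/log/leapp/leapp-report.txt   # review and resolve the reported problems
leapp upgrade                          # prepare the upgrade once the report is clean
reboot                                 # boots into the upgrade environment to apply it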
Chapter 4. Customizing developer environments | Chapter 4. Customizing developer environments Red Hat CodeReady Workspaces is an extensible and customizable developer-workspaces platform. There are three different ways to extend Red Hat CodeReady Workspaces: Alternative IDEs provide specialized tools for Red Hat CodeReady Workspaces. For example, a Jupyter notebook for data analysis. Alternate IDEs can be based on Eclipse Theia or any other web IDE. The default IDE in Red Hat CodeReady Workspaces is Che-Theia. Che-Theia plug-ins add capabilities to the Che-Theia IDE. They rely on plug-in APIs that are compatible with Visual Studio Code. The plug-ins are isolated from the IDE itself. They can be packaged as files or as containers to provide their own dependencies. Stacks are pre-configured CodeReady Workspaces workspaces with a dedicated set of tools, which cover different developer personas. For example, it is possible to pre-configure a workbench for a tester with only the tools needed for their purposes. Figure 4.1. CodeReady Workspaces extensibility Extending Red Hat CodeReady Workspaces can be done entirely using Red Hat CodeReady Workspaces. Since version 7, Red Hat CodeReady Workspaces provides a self-hosting mode. What is a Che-Theia plug-in Using alternative IDEs in CodeReady Workspaces Using a Visual Studio Code extension in CodeReady Workspaces 4.1. What is a Che-Theia plug-in A Che-Theia plug-in is an extension of the development environment isolated from the IDE. Plug-ins can be packaged as files or containers to provide their own dependencies. Extending Che-Theia using plug-ins can enable the following capabilities: Language support: Extend the supported languages by relying on the Language Server Protocol . Debuggers: Extend debugging capabilities with the Debug Adapter Protocol . Development Tools: Integrate your favorite linters, and as testing and performance tools. Menus, panels, and commands: Add your own items to the IDE components. Themes: Build custom themes, extend the UI, or customize icon themes. Snippets, formatters, and syntax highlighting: Enhance comfort of use with supported programming languages. Keybindings: Add new keymaps and popular keybindings to make the environment feel natural. 4.1.1. Features and benefits of Che-Theia plug-ins Features Description Benefits Fast Loading Plug-ins are loaded at runtime and are already compiled. IDE is loading the plug-in code. Avoid any compilation time. Avoid post-installation steps. Secure Loading Plug-ins are loaded separately from the IDE. The IDE stays always in a usable state. Plug-ins do not break the whole IDE if it has bugs. Handle network issue. Tools Dependencies Dependencies for the plug-in are packaged with the plug-in in its own container. No-installation for tools. Dependencies running into container. Code Isolation Guarantee that plug-ins cannot block the main functions of the IDE like opening a file or typing Plug-ins are running into separate threads. Avoid dependencies mismatch. VS Code Extension Compatibility Extend the capabilities of the IDE with existing VS Code Extensions. Target multiple platform. Allow easy discovery of Visual Studio Code Extension with required installation. 4.1.2. Che-Theia plug-in concept in detail Red Hat CodeReady Workspaces provides a default web IDE for workspaces: Che-Theia. It is based on Eclipse Theia. 
It is a slightly different version than the plain Eclipse Theia one because there are functionalities that have been added based on the nature of the Red Hat CodeReady Workspaces workspaces. This version of Eclipse Theia for CodeReady Workspaces is called Che-Theia . You can extend the IDE provided with Red Hat CodeReady Workspaces by building a Che-Theia plug-in . Che-Theia plug-ins are compatible with any other Eclipse Theia-based IDE. 4.1.2.1. Client-side and server-side Che-Theia plug-ins The Che-Theia editor plug-ins let you add languages, debuggers, and tools to your installation to support your development workflow. Plug-ins run when the editor completes loading. If a Che-Theia plug-in fails, the main Che-Theia editor continues to work. Che-Theia plug-ins run either on the client side or on the server side. This is a scheme of the client and server-side plug-in concept: Figure 4.2. Client and server-side Che-Theia plug-ins The same Che-Theia plug-in API is exposed to plug-ins running on the client side (Web Worker) or the server side (Node.js). 4.1.2.2. Che-Theia plug-in APIs For the purpose of providing tool isolation and easy extensibility in Red Hat CodeReady Workspaces, the Che-Theia IDE has a set of plug-in APIs. The APIs are compatible with Visual Studio Code extension APIs. Usually, Che-Theia can run VS Code extensions as its own plug-ins. When developing a plug-in that depends on or interacts with components of CodeReady Workspaces workspaces (containers, preferences, factories), use the CodeReady Workspaces APIs embedded in Che-Theia. 4.1.2.3. Che-Theia plug-in capabilities Che-Theia plug-ins have the following capabilities: Plug-in Description Repository CodeReady Workspaces Extended Tasks Handles the CodeReady Workspaces commands and provides the ability to start those into a specific container of the workspace. CodeReady Workspaces Extended Terminal Allows to provide terminal for any of the containers of the workspace. CodeReady Workspaces Factory Handles the Red Hat CodeReady Workspaces Factories CodeReady Workspaces Container Provides a container view that shows all the containers that are running in the workspace and allows to interact with them. Containers plugins Dashboard Integrates the IDE with the Dashboard and facilitate the navigation. CodeReady Workspaces APIs Extends the IDE APIs to allow interacting with CodeReady Workspaces-specific components (workspaces, preferences). 4.1.2.4. VS Code extensions and Eclipse Theia plug-ins A Che-Theia plug-in can be based on a VS Code extension or an Eclipse Theia plug-in. A Visual Studio Code extension To repackage a VS Code extension as a Che-Theia plug-in with its own set of dependencies, package the dependencies into a container. This ensures that Red Hat CodeReady Workspaces users do not need to install the dependencies when using the extension. See Using a Visual Studio Code extension in CodeReady Workspaces . An Eclipse Theia plug-in You can build a Che-Theia plug-in by implementing an Eclipse Theia plug-in and packaging it to Red Hat CodeReady Workspaces. Additional resources Section 4.1.5, "Embedded and remote Che-Theia plug-ins" 4.1.3. Che-Theia plug-in metadata Che-Theia plug-in metadata is information about individual plug-ins for the plug-in registry. The Che-Theia plug-in metadata, for each specific plug-in, is defined in a meta.yaml file. The che-plugin-registry repository contains . Table 4.1. 
meta.yml apiVersion API version (`v2`and higher) category Available: Language , Other description Description (a phrase) displayName Display name firstPublicationDate Date in the form "YYYY-MM-DD" Example: "2019-12-02" icon URL of an SVG icon name Name (no spaces allowed) publisher Name of the publisher repository URL of the source repository title Title (long) type Che Plugin , VS Code extension version Version information, for example: 7.5.1 spec Specifications (see underneath) Table 4.2. spec attributes endpoints Plug-in endpoints containers Sidecar containers for the plug-in. Che Plugin and VS Code extension supports only one container initContainers Sidecar init containers for the plug-in workspaceEnv Environment variables for the workspace extensions Optional attribute required for VS Code and Che-Theia plug-ins. A list of URLs to plug-in artefacts, such as .vsix or .theia files Table 4.3. spec.containers. Notice: spec.initContainers has absolutely the same container definition. name Sidecar container name image Absolute or relative container image URL memoryLimit OpenShift memory limit string, for example 512Mi memoryRequest OpenShift memory request string, for example 512Mi cpuLimit OpenShift CPU limit string, for example 2500m cpuRequest OpenShift CPU request string, for example 125m env List of environment variables to set in the sidecar command String array definition of the root process command in the container args String array arguments for the root process command in the container volumes Volumes required by the plug-in ports Ports exposed by the plug-in (on the container) commands Development commands available to the plug-in container mountSources Boolean flag to bound volume with source code /projects to the plug-in container Table 4.4. spec.containers.env (and spec.initContainers.env) attributes. Notice: workspaceEnv has absolutely the same attributes name Environment variable name value Environment variable value Table 4.5. spec.containers.volumes (and spec.initContainers.volumes) attributes mountPath Path to the volume in the container name Volume name ephemeral If true, the volume is ephemeral, otherwise the volume is persisted Table 4.6. spec.containers.ports (and spec.initContainers.ports) attributes exposedPort Exposed port Table 4.7. spec.containers.commands (and spec.initContainers.commands) attributes name Command name workingDir Command working directory command String array that defines the development command Table 4.8. spec.endpoints attributes name Name (no spaces allowed) public true , false targetPort Target port attributes Endpoint attributes Table 4.9. spec.endpoints.attributes attributes protocol Protocol, example: ws type ide , ide-dev discoverable true , false secure true , false . 
If true , then the endpoint is assumed to listen solely on 127.0.0.1 and is exposed using a JWT proxy cookiesAuthEnabled true , false Example meta.yaml for a Che-Theia plug-in: the CodeReady Workspaces machine-exec Service apiVersion: v2 category: Other description: Che Plugin with che-machine-exec service to provide creation terminal or tasks for Red Hat CodeReady Workspaces workspace containers displayName: CodeReady Workspaces machine-exec Service firstPublicationDate: "2019-12-02" icon: https://www.eclipse.org/che/images/logo-eclipseche.svg name: che-machine-exec-plug-in publisher: eclipse repository: https://github.com/eclipse/che-machine-exec/ title: Che machine-exec Service Plugin type: Che Plugin version: 7.5.1 spec: endpoints: - name: "che-machine-exec" public: true targetPort: 4444 attributes: protocol: ws type: terminal discoverable: false secure: true cookiesAuthEnabled: true containers: - name: che-machine-exec image: "quay.io/eclipse/che-machine-exec:7.5.1" ports: - exposedPort: 4444 Example meta.yaml for a VisualStudio Code extension: the AsciiDoc support extension apiVersion: v2 category: Language description: This extension provides a live preview, syntax highlighting and snippets for the AsciiDoc format using Asciidoctor flavor displayName: AsciiDoc support firstPublicationDate: "2019-12-02" icon: https://www.eclipse.org/che/images/logo-eclipseche.svg name: vscode-asciidoctor publisher: joaompinto repository: https://github.com/asciidoctor/asciidoctor-vscode title: AsciiDoctor Plug-in type: VS Code extension version: 2.7.7 spec: extensions: - https://github.com/asciidoctor/asciidoctor-vscode/releases/download/v2.7.7/asciidoctor-vscode-2.7.7.vsix 4.1.4. Che-Theia plug-in lifecycle When a user is starting a workspace, the following procedure is followed: CodeReady Workspaces master checks for plug-ins to start from the workspace definition. Plug-in metadata is retrieved, and the type of each plug-in is recognized. A broker is selected according to the plug-in type. The broker processes the installation and deployment of the plug-in (the installation process is different for each broker). Note Different types of plug-ins exist. A broker ensures all installation requirements are met for a plug-in to deploy correctly. Figure 4.3. Che-Theia plug-in lifecycle Before a CodeReady Workspaces workspace is launched, CodeReady Workspaces master starts containers for the workspace: The Che-Theia plug-in broker extracts the plug-in (from the .theia file) to get the sidecar containers that the plug-in needs. The broker sends the appropriate container information to CodeReady Workspaces master. The broker copies the Che-Theia plug-in to a volume to have it available for the Che-Theia editor container. CodeReady Workspaces workspace master then starts all the containers of the workspace. Che-Theia is started in its own container and checks the correct folder to load the plug-ins. Che-Theia plug-in lifecycle: When a user is opening a browser tab or window with Che-Theia, Che-Theia starts a new plug-in session (browser or remote TODO: 'what is remote in this context?' ). Every Che-Theia plug-in is notified that a new session has been started (the start() function of the plug-in triggered). A Che-Theia plug-in session is running and interacting with the Che-Theia back end and frontend. When the user is closing the browser tab or there is a timeout, every plug-in is notified (the stop() function of the plug-in triggered). 4.1.5. 
Embedded and remote Che-Theia plug-ins Developer workspaces in Red Hat CodeReady Workspaces provide all dependencies needed to work on a project. The application includes the dependencies needed by all the tools and plug-ins used. There are two different ways a Che-Theia plug-in can run. This is based on the dependencies that are needed for the plug-in: embedded (or local) and remote . 4.1.5.1. Embedded (or local) plug-ins The plug-in does not have specific dependencies - it only uses a Node.js runtime, and it runs in the same container as the IDE. The plug-in is injected into the IDE. Examples: Code linting New set of commands New UI components To include a Che-Theia plug-in as embedded, define a URL to the plug-in binary file (the .theia archive) in the meta.yaml file. In the case of a VS Code extension, provide the extension ID from the Visual Studio Code marketplace (see Using a Visual Studio Code extension in CodeReady Workspaces ). When starting a workspace, CodeReady Workspaces downloads and unpacks the plug-in binaries and includes them in the Che-Theia editor container. The Che-Theia editor initializes the plug-ins when it starts. Figure 4.4. Local Che-Theia plug-in 4.1.5.2. Remote plug-ins The plug-in relies on dependencies or it has a back end. It runs in its own sidecar container, and all dependencies are packaged in the container. A remote Che-Theia plug-in consist of two parts: Che-Theia plug-in or VS Code extension binaries. The definition in the meta.yaml file is the same as for embedded plug-ins. Container image definition, for example, eclipse/che-theia-dev:nightly . From this image, CodeReady Workspaces creates a separate container inside a workspace. Examples: Java Language Server Python Language Server When starting a workspace, CodeReady Workspaces creates a container from the plug-in image, downloads and unpacks the plug-in binaries, and includes them in the created container. The Che-Theia editor connects to the remote plug-ins when it starts. Figure 4.5. Remote Che-Theia plug-in 4.1.5.3. Comparison matrix When a Che-Theia plug-in (or a VS Code extension) does not need extra dependencies inside its container, it is an embedded plug-in. A container with extra dependencies that includes a plug-in is a remote plug-in. Table 4.10. Che-Theia plug-in comparison matrix: embedded vs remote Configure RAM per plug-in Environment dependencies Create separated container Remote TRUE Plug-in uses dependencies defined in the remote container. TRUE Embedded FALSE (users can configure RAM for the whole editor container, but not per plug-in) Plug-in uses dependencies from the editor container; if container does not include these dependencies, the plug-in fails or does not function as expected. FALSE Depending on your use case and the capabilities provided by your plug-in, select one of the described running modes. 4.1.6. Remote plug-in endpoint Red Hat CodeReady Workspaces has a remote plug-in endpoint service to start VS Code Extensions and Che-Theia plug-ins in separate containers. Red Hat CodeReady Workspaces injects the remote plug-in endpoint binaries into each remote plug-in container. This service starts remote extensions and plug-ins defined in the plug-in meta.yaml file and connects them to the Che-Theia editor container. The remote plug-in endpoint creates a plug-in API proxy between the remote plug-in container and the Che-Theia editor container. 
The remote plug-in endpoint is also an interceptor for some plug-in API parts, which it launches inside a remote sidecar container rather than an editor container. Examples: terminal API, debug API. The remote plug-in endpoint executable command is stored in the environment variable of the remote plug-in container: PLUGIN_REMOTE_ENDPOINT_EXECUTABLE . Red Hat CodeReady Workspaces provides two ways to start the remote plug-in endpoint with a sidecar image: Defining a launch remote plug-in endpoint using a Dockerfile. To use this method, patch an original image and rebuild it. Defining a launch remote plug-in endpoint in the plug-in meta.yaml file. Use this method to avoid patching an original image. 4.1.6.1. Defining a launch remote plug-in endpoint using Dockerfile To start a remote plug-in endpoint, use the PLUGIN_REMOTE_ENDPOINT_EXECUTABLE environment variable in the Dockerfile. Procedure Start a remote plug-in endpoint using the CMD command in the Dockerfile: Dockerfile example Start a remote plug-in endpoint using the ENTRYPOINT command in the Dockerfile: Dockerfile example 4.1.6.1.1. Using a wrapper script Some images use a wrapper script to configure permissions. The script is defined it in the ENTRYPOINT command of the Dockerfile to configure permissions inside the container, and it script executes a main process defined in the CMD command of the Dockerfile. Red Hat CodeReady Workspaces uses such images with a wrapper script to provide permission configurations on different infrastructures with advanced security, for example on OpenShift. Example of a wrapper script: #!/bin/sh set -e export USER_ID=USD(id -u) export GROUP_ID=USD(id -g) if ! whoami >/dev/null 2>&1; then echo "USD{USER_NAME:-user}:x:USD{USER_ID}:0:USD{USER_NAME:-user} user:USD{HOME}:/bin/sh" >> /etc/passwd fi # Grant access to projects volume in case of non root user with sudo rights if [ "USD{USER_ID}" -ne 0 ] && command -v sudo >/dev/null 2>&1 && sudo -n true > /dev/null 2>&1; then sudo chown "USD{USER_ID}:USD{GROUP_ID}" /projects fi exec "USD@" Example of a Dockerfile with a wrapper script: Dockerfile example In this example, the container launches the /entrypoint.sh script defined in the ENTRYPOINT command of the Dockerfile. The script configures the permissions and executes the command using exec USD@ . CMD is the argument for ENTRYPOINT , and the exec USD@ command calls USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} . A remote plug-in endpoint then starts in the container after permission configuration. 4.1.6.2. Defining a launch remote plug-in endpoint in a meta.yaml file Use this method to re-use images to start remote a plug-in endpoint without modifications. Procedure Modify the plug-in meta.yaml file properties command and args : command - Red Hat CodeReady Workspaces uses to override Dockerfile#ENTRYPOINT . args - Red Hat CodeReady Workspaces uses to override Dockerfile#CMD . 
Example of a YAML file with the command and args properties modified: apiVersion: v2 category: Language description: "Typescript language features" displayName: Typescript firstPublicationDate: "2019-10-28" icon: "https://www.eclipse.org/che/images/logo-eclipseche.svg" name: typescript publisher: che-incubator repository: "https://github.com/Microsoft/vscode" title: "Typescript language features" type: "VS Code extension" version: remote-bin-with-override-entrypoint spec: containers: - image: "example/fedora-for-ts-remote-plugin-without-endpoint:latest" memoryLimit: 512Mi name: vscode-typescript command: - sh - -c args: - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - "https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix" Modify args instead of command to use an image with a wrapper script pattern and to keep a call of the entrypoint.sh script: apiVersion: v2 category: Language description: "Typescript language features" displayName: Typescript firstPublicationDate: "2019-10-28" icon: "https://www.eclipse.org/che/images/logo-eclipseche.svg" name: typescript publisher: che-incubator repository: "https://github.com/Microsoft/vscode" title: "Typescript language features" type: "VS Code extension" version: remote-bin-with-override-entrypoint spec: containers: - image: "example/fedora-for-ts-remote-plugin-without-endpoint:latest" memoryLimit: 512Mi name: vscode-typescript args: - sh - -c - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - "https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix" Red Hat CodeReady Workspaces calls the entrypoint.sh wrapper script defined in the ENTRYPOINT command of the Dockerfile. The script executes [ 'sh', '-c", ' USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}' ] using the exec "USD@" command. Note To execute a service when starting the container and also to start a remote plug-in endpoint, use meta.yaml with modified command and args properties. Start the service, detach the process, and start the remote plug-in endpoint, and they then work in parallel. 4.2. Using alternative IDEs in CodeReady Workspaces Extending Red Hat CodeReady Workspaces developer workspaces using different IDEs (integrated development environments) enables: Re-purposing the environment for different use cases. Providing a dedicated custom IDE for specific tools. Providing different perspectives for individual users or groups of users. Red Hat CodeReady Workspaces provides a default web IDE to be used with the developer workspaces. This IDE is completely decoupled. You can bring your own custom IDE for Red Hat CodeReady Workspaces: Built from Eclipse Theia , which is a framework to build web IDEs. Example: Sirius on the web . Completely different web IDEs , such as Jupyter, Eclipse Dirigible, or others. Example: Jupyter in Red Hat CodeReady Workspaces workspaces . Bringing custom IDE built from Eclipse Theia Creating your own custom IDE based on Eclipse Theia. Adding CodeReady Workspaces-specific tools to your custom IDE. Packaging your custom IDE into the available editors for CodeReady Workspaces. Bringing your completely different web IDE into CodeReady Workspaces Packaging your custom IDE into the available editors for CodeReady Workspaces. 4.3. 
Using a Visual Studio Code extension in CodeReady Workspaces Starting with Red Hat CodeReady Workspaces 2.1, Visual Studio Code (VS Code) extensions can be installed to extend the functionality of a CodeReady Workspaces workspace. VS Code extensions can run in the Che-Theia editor container, or they can be packaged in their own isolated and pre-configured containers with their prerequisites. This document describes: Use of a VS Code extension in CodeReady Workspaces with workspaces. CodeReady Workspaces Plug-ins panel. How to publish a VS Code extension in the CodeReady Workspaces plug-in registry (to share the extension with other CodeReady Workspaces users). The extension-hosting sidecar container and the use of the extension in a devfile are optional for this. How to review the compatibility of the VS Code extensions to be informed whether a specific API is supported or has not been implemented yet. 4.3.1. Publishing a VS Code extension into the CodeReady Workspaces plug-in registry The user of CodeReady Workspaces can use a workspace devfile to use any plug-in, also known as Visual Studio Code (VS Code) extension. This plug-in can be added to the plug-in registry, then easily reused by anyone in the same organization with access to that workspaces installation. Some plug-ins need a runtime dedicated container for code compilation. This fact makes those plug-ins a combination of a runtime sidecar container and a VS Code extension. The following section describes the portability of a plug-in configuration and associating an extension with a runtime container that the plug-in needs. 4.3.1.1. Writing a meta.yaml file and adding it to a plug-in registry The plug-in meta information is required to publish a VS Code extension in an Red Hat CodeReady Workspaces plug-in registry. This meta information is provided as a meta.yaml file. This section describes how to create a meta.yaml file for an extension. Procedure Create a meta.yaml file in the following plug-in registry directory: <apiVersion> /plugins/ <publisher> / <plug-inName> / <plug-inVersion> / . Edit the meta.yaml file and provide the necessary information. The configuration file must adhere to the following structure: apiVersion: v2 1 publisher: myorg 2 name: my-vscode-ext 3 version: 1.7.2 4 type: value 5 displayName: 6 title: 7 description: 8 icon: https://www.eclipse.org/che/images/logo-eclipseche.svg 9 repository: 10 category: 11 spec: containers: 12 - image: 13 memoryLimit: 14 memoryRequest: 15 cpuLimit: 16 cpuRequest: 17 extensions: 18 - https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix - vscode:extension/SonarSource.sonarlint-vscode 1 Version of the file structure. 2 Name of the plug-in publisher. Must be the same as the publisher in the path. 3 Name of the plug-in. Must be the same as in path. 4 Version of the plug-in. Must be the same as in path. 5 Type of the plug-in. Possible values: Che Plugin , Che Editor , Theia plugin , VS Code extension . 6 A short name of the plug-in. 7 Title of the plug-in. 8 A brief explanation of the plug-in and what it does. 9 The link to the plug-in logo. 10 Optional. The link to the source-code repository of the plug-in. 11 Defines the category that this plug-in belongs to. Should be one of the following: Editor , Debugger , Formatter , Language , Linter , Snippet , Theme , or Other . 12 If this section is omitted, the VS Code extension is added into the Che-Theia IDE container. 
13 The Docker image from which the sidecar container will be started. Example: eclipse/che-theia-endpoint-runtime: . 14 The maximum RAM which is available for the sidecar container. Example: "512Mi". This value might be overridden by the user in the component configuration. 15 The RAM which is given for the sidecar container by default. Example: "256Mi". This value might be overridden by the user in the component configuration. 16 The maximum CPU amount in cores or millicores (suffixed with "m") which is available for the sidecar container. Examples: "500m", "2". This value might be overridden by the user in the component configuration. 17 The CPU amount in cores or millicores (suffixed with "m") which is given for the sidecar container by default. Example: "125m". This value might be overridden by the user in the component configuration. 18 A list of VS Code extensions run in this sidecar container. 4.3.2. Adding a plug-in registry VS Code extension to a workspace When the required VS Code extension is added into a CodeReady Workspaces plug-in registry, the user can add it into the workspace through the CodeReady Workspaces Plugins panel or through the workspace configuration. 4.3.2.1. Adding a VS Code extension using the CodeReady Workspaces Plugins panel Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts Procedure To add a VS Code extension using the CodeReady Workspaces Plugins panel: Open the CodeReady Workspaces Plugins panel by pressing CTRL+SHIFT+J or navigate to View/Plugins . Change the current registry to the registry in which the VS Code extension was added. In the search bar, click the Menu button and then click Change Registry to choose the registry from the list. If the required registry is not in the list, add it using the Add Registry menu option. The registry link points to the plugins segment of the registry, for example: https://my-registry.com/v3/plugins/index.json . Search for the required plug-in using the filter, and then click the Install button. Restart the workspace for the changes to take effect. 4.3.2.2. Adding a VS Code extension using the workspace configuration Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure To add a VS Code extension using the workspace configuration: Click the Workspaces tab on the Dashboard and select the workspace in which you want to add the plug-in. The Workspace <workspace-name> window is opened showing the details of the workspace. Click the devfile tab. Locate the components section, and add a new entry with the following structure: - type: chePlugin id: 1 1 Link to the meta.yaml file in your registry, for example, https://my-plug-in-registry/v3/plugins/ <publisher> / <plug-inName> / <plug-inVersion> /meta.yaml CodeReady Workspaces automatically adds the other fields to the new component. Alternatively, you can link to a meta.yaml file hosted on GitHub, using the dedicated reference field. 
- type: chePlugin reference: 1 1 https://raw.githubusercontent.com/ <username> / <registryRepository> /v3/plugins/ <publisher> / <plug-inName> / <plug-inVersion> /meta.yaml Restart the workspace for the changes to take effect. 4.3.3. Choosing the sidecar image CodeReady Workspaces plug-ins are special services that extend the CodeReady Workspaces workspace capabilities. CodeReady Workspaces plug-ins are packaged as containers. The containers that the plug-ins are packaged into run as sidecars of the CodeReady Workspaces workspace editor and augment its capabilities. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . Procedure To choose a sidecar image: If the VS Code extension does not have any external dependencies, use eclipse/che-theia-endpoint-runtime: as a sidecar container image for the extension. Note In addition to the eclipse/che-theia-endpoint-runtime: base image, the following ready-to-use sidecar images that include language-specific dependencies are available: eclipse/che-remote-plugin-runner-java8 eclipse/che-remote-plugin-runner-java11 eclipse/che-remote-plugin-go-1.10.7 eclipse/che-remote-plugin-python-3.7.3 eclipse/che-remote-plugin-dotnet-2.2.105 eclipse/che-remote-plugin-php7 eclipse/che-remote-plugin-kubernetes-tooling-1.0.0 eclipse/che-remote-plugin-kubernetes-tooling-0.1.17 eclipse/che-remote-plugin-openshift-connector-0.0.17 eclipse/che-remote-plugin-openshift-connector-0.0.21 For a VS Code extension with external dependencies not found in one of the ready-to-use images, use a container image based on the eclipse/che-theia-endpoint-runtime: image that contains the dependencies. Example Base the FROM directive on FROM eclipse/che-theia-endpoint-runtime: . This is required because the base image contains tools for running the remote VS Code extension and for communicating between the sidecar and the Che-Theia editor. This way, the VS Code extension operates as if it was not remote. 4.3.4. Verifying the VS Code extension API compatibility level Che-Theia does not fully support the VS Code extensions API. The vscode-theia-comparator is used to analyze the compatibility between the Che-Theia plug-in API and the VS Code extension API. This tool runs in a nightly cycle, and the results are published on the vscode-theia-comparator GitHub page. Prerequisites Personal GitHub access token. For information about creating a personal access token for the command line see Creating a personal access token for the command line . A GitHub access token is required to increase the GitHub download limit for your IP address. Procedure To run the vscode-theia comparator manually: Clone the vscode-theia-comparator repository, and build it using the yarn command. Set the GITHUB_TOKEN environment variable to your token. Execute the yarn run generate command to generate a report. Open the out/status.html file to view the report. 4.4. Adding tools to CodeReady Workspaces after creating a workspace When installed in the workspace, CodeReady Workspaces plug-ins bring new capabilities to the CodeReady Workspaces. Plug-ins consist of a Che-Theia plug-in, metadata, and a hosting container. These plug-ins may provide the following capabilities: Integrating with other systems, including OpenShift and OpenShift. Automating some developer tasks, such as formatting, refactoring, and running automated tests. 
Communicating with multiple databases directly from the IDE. Enhanced code navigation, auto-completion and error highlighting. This chapter provides basic information about CodeReady Workspaces plug-ins installation, enabling, and use in CodeReady Workspaces workspaces. Section 4.4.1, "Additional tools in the CodeReady Workspaces workspace" Section 4.4.2, "Adding language support plug-in to the CodeReady Workspaces workspace" 4.4.1. Additional tools in the CodeReady Workspaces workspace CodeReady Workspaces plug-ins are extensions to the Che-Theia IDE that come bundled with a container image that contains their native prerequisites (for example, the OpenShift Connector plug-in needs the oc command installed). A Che Plugin is a list of Che-Theia plug-ins together about a Linux container that the plug-in requires to run in. It can also include metadata to define the description, categorization tags, and an icon. CodeReady Workspaces provides a registry of plug-ins available for installation into the user's workspace. Because many VS Code extensions can run inside the Che-Theia IDE, they can be packaged as CodeReady Workspaces plug-ins by combining them with a runtime or a sidecar container. Users can choose from many more plug-ins that are provided out of the box. From the Dashboard, plug-ins in the registry can be added from the Plugin tab or by adding them into a devfile. The devfile can also be used for further configuration of the plug-in, such as to define memory or CPU usage. Alternatively, plug-ins can be installed from CodeReady Workspaces by pressing Ctrl + Shift + J or by navigating to View Plugins . Additional resources Adding components to a devfile 4.4.2. Adding language support plug-in to the CodeReady Workspaces workspace This procedure describes adding a tool to an already existing workspace, by enabling a dedicated plug-in from the Dashboard. To add tools that are available as plug-ins into a CodeReady Workspaces workspace, use one of the following methods: Enable the plug-in from the Dashboard Plugin tab. Edit the workspace devfile from the Dashboard Devfile tab. This procedure uses the Java language support plug-in as an example. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined in this instance of Red Hat CodeReady Workspaces; see: Creating and configuring a new CodeReady Workspaces workspace Creating a workspace from User Dashboard The workspace must be in a stopped state. To do so: Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the Dashboard . In the Dashboard , click the Workspaces menu to open the workspaces list and locate the workspace. On the same row with the displayed workspace, on the right side of the screen, click the Stop button to stop the workspace. Wait a few seconds for the workspace to stop, then configure the workspace by clicking on it. Procedure To add the plug-in from the Plug-in registry to an already existing CodeReady Workspaces workspace, use one of the following methods: Installing the plug-in from the Plugin tab. Navigate to the Plugin tab. The list of plug-ins, installed or possible to install, is displayed. Enable the desired plug-in, for example, the Language Support for Java 11, by using the * Enable* slide-toggle. The plug-in source code is added to the workspace devfile, and the plug-in is now enabled. 
On the bottom right side of the screen, save the changes by clicking the Save button. + Once the changes are saved, the workspace is restarted. Installing the plug-in by adding content to the devfile. Navigate to the Devfile tab. The devfile structure is displayed. Locate the component section of the devfile and add the following lines to add the Java language v8 in to the workspace: - id: redhat/java8/latest type: chePlugin See the example of the final result: components: - id: redhat/php/latest memoryLimit: 1Gi type: chePlugin - id: redhat/php-debugger/latest memoryLimit: 256Mi type: chePlugin - mountSources: true endpoints: - name: 8080/tcp port: 8080 memoryLimit: 512Mi type: dockerimage volumes: - name: composer containerPath: /home/user/.composer - name: symfony containerPath: /home/user/.symfony alias: php image: 'quay.io/eclipse/che-php-7:nightly' - id: redhat/java8/latest type: chePlugin Additional resources Devfile specifications | [
"apiVersion: v2 category: Other description: Che Plugin with che-machine-exec service to provide creation terminal or tasks for Red Hat CodeReady Workspaces workspace containers displayName: CodeReady Workspaces machine-exec Service firstPublicationDate: \"2019-12-02\" icon: https://www.eclipse.org/che/images/logo-eclipseche.svg name: che-machine-exec-plug-in publisher: eclipse repository: https://github.com/eclipse/che-machine-exec/ title: Che machine-exec Service Plugin type: Che Plugin version: 7.5.1 spec: endpoints: - name: \"che-machine-exec\" public: true targetPort: 4444 attributes: protocol: ws type: terminal discoverable: false secure: true cookiesAuthEnabled: true containers: - name: che-machine-exec image: \"quay.io/eclipse/che-machine-exec:7.5.1\" ports: - exposedPort: 4444",
"apiVersion: v2 category: Language description: This extension provides a live preview, syntax highlighting and snippets for the AsciiDoc format using Asciidoctor flavor displayName: AsciiDoc support firstPublicationDate: \"2019-12-02\" icon: https://www.eclipse.org/che/images/logo-eclipseche.svg name: vscode-asciidoctor publisher: joaompinto repository: https://github.com/asciidoctor/asciidoctor-vscode title: AsciiDoctor Plug-in type: VS Code extension version: 2.7.7 spec: extensions: - https://github.com/asciidoctor/asciidoctor-vscode/releases/download/v2.7.7/asciidoctor-vscode-2.7.7.vsix",
"FROM fedora:30 RUN dnf update -y && dnf install -y nodejs htop && node -v RUN mkdir /home/user ENV HOME=/home/user RUN mkdir /projects && chmod -R g+rwX /projects && chmod -R g+rwX \"USD{HOME}\" CMD USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}",
"FROM fedora:30 RUN dnf update -y && dnf install -y nodejs htop && node -v RUN mkdir /home/user ENV HOME=/home/user RUN mkdir /projects && chmod -R g+rwX /projects && chmod -R g+rwX \"USD{HOME}\" ENTRYPOINT USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}",
"#!/bin/sh set -e export USER_ID=USD(id -u) export GROUP_ID=USD(id -g) if ! whoami >/dev/null 2>&1; then echo \"USD{USER_NAME:-user}:x:USD{USER_ID}:0:USD{USER_NAME:-user} user:USD{HOME}:/bin/sh\" >> /etc/passwd fi Grant access to projects volume in case of non root user with sudo rights if [ \"USD{USER_ID}\" -ne 0 ] && command -v sudo >/dev/null 2>&1 && sudo -n true > /dev/null 2>&1; then sudo chown \"USD{USER_ID}:USD{GROUP_ID}\" /projects fi exec \"USD@\"",
"FROM alpine:3.10.2 ENV HOME=/home/theia RUN mkdir /projects USD{HOME} && # Change permissions to let any arbitrary user for f in \"USD{HOME}\" \"/etc/passwd\" \"/projects\"; do echo \"Changing permissions on USD{f}\" && chgrp -R 0 USD{f} && chmod -R g+rwX USD{f}; done ADD entrypoint.sh /entrypoint.sh ENTRYPOINT [ \"/entrypoint.sh\" ] CMD USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE}",
"apiVersion: v2 category: Language description: \"Typescript language features\" displayName: Typescript firstPublicationDate: \"2019-10-28\" icon: \"https://www.eclipse.org/che/images/logo-eclipseche.svg\" name: typescript publisher: che-incubator repository: \"https://github.com/Microsoft/vscode\" title: \"Typescript language features\" type: \"VS Code extension\" version: remote-bin-with-override-entrypoint spec: containers: - image: \"example/fedora-for-ts-remote-plugin-without-endpoint:latest\" memoryLimit: 512Mi name: vscode-typescript command: - sh - -c args: - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - \"https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix\"",
"apiVersion: v2 category: Language description: \"Typescript language features\" displayName: Typescript firstPublicationDate: \"2019-10-28\" icon: \"https://www.eclipse.org/che/images/logo-eclipseche.svg\" name: typescript publisher: che-incubator repository: \"https://github.com/Microsoft/vscode\" title: \"Typescript language features\" type: \"VS Code extension\" version: remote-bin-with-override-entrypoint spec: containers: - image: \"example/fedora-for-ts-remote-plugin-without-endpoint:latest\" memoryLimit: 512Mi name: vscode-typescript args: - sh - -c - USD{PLUGIN_REMOTE_ENDPOINT_EXECUTABLE} extensions: - \"https://github.com/che-incubator/ms-code.typescript/releases/download/v1.35.1/che-typescript-language-1.35.1.vsix\"",
"apiVersion: v2 1 publisher: myorg 2 name: my-vscode-ext 3 version: 1.7.2 4 type: value 5 displayName: 6 title: 7 description: 8 icon: https://www.eclipse.org/che/images/logo-eclipseche.svg 9 repository: 10 category: 11 spec: containers: 12 - image: 13 memoryLimit: 14 memoryRequest: 15 cpuLimit: 16 cpuRequest: 17 extensions: 18 - https://github.com/redhat-developer/vscode-yaml/releases/download/0.4.0/redhat.vscode-yaml-0.4.0.vsix - vscode:extension/SonarSource.sonarlint-vscode",
"- type: chePlugin id: 1",
"- type: chePlugin reference: 1",
"- id: redhat/java8/latest type: chePlugin",
"components: - id: redhat/php/latest memoryLimit: 1Gi type: chePlugin - id: redhat/php-debugger/latest memoryLimit: 256Mi type: chePlugin - mountSources: true endpoints: - name: 8080/tcp port: 8080 memoryLimit: 512Mi type: dockerimage volumes: - name: composer containerPath: /home/user/.composer - name: symfony containerPath: /home/user/.symfony alias: php image: 'quay.io/eclipse/che-php-7:nightly' - id: redhat/java8/latest type: chePlugin"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/end-user_guide/customizing-developer-environments_crw |
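Tying the registry layout from Section 4.3.1.1 to the devfile entries from Section 4.3.2.2, the sketch below reuses the hypothetical publisher, name, and version from the annotated meta.yaml example (myorg, my-vscode-ext, 1.7.2) and an assumed registry host; substitute values from your own registry.
# meta.yaml stored in the registry under the documented path pattern:
#   v3/plugins/myorg/my-vscode-ext/1.7.2/meta.yaml
# Devfile component that pulls the plug-in in by its registry id:
components:
  - type: chePlugin
    id: myorg/my-vscode-ext/1.7.2
# Equivalent component that points directly at a hosted meta.yaml:
#  - type: chePlugin
#    reference: https://my-plug-in-registry/v3/plugins/myorg/my-vscode-ext/1.7.2/meta.yaml
After editing the devfile, restart the workspace for the change to take effect, exactly as in the procedures above.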
8.22. biosdevname | 8.22. biosdevname 8.22.1. RHBA-2014:1459 - biosdevname bug fix and enhancement update Updated biosdevname packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The biosdevname packages contain an optional convention for naming network interfaces; it assigns names to network interfaces based on their physical location. Biosdevname is disabled by default, except for a limited set of Dell PowerEdge, C Series, and Precision Workstation systems. Note The biosdevname packages have been upgraded to upstream version 0.5.1, which provides a number of bug fixes and enhancements over the version. (BZ# 1053492 ) Users of biosdevname are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/biosdevname |
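To see the convention in action on a running system, the biosdevname utility can report the name it would assign to an interface; the interface and resulting names below are only examples, and the kernel parameters shown are the usual way to force the behavior on or off at boot.
biosdevname -i eth0    # prints the suggested name, for example em1 for the first embedded NIC
# Kernel boot parameters: biosdevname=1 enables the convention, biosdevname=0 disables it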
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/cli_guide/making-open-source-more-inclusive |
Chapter 6. Device drivers | Chapter 6. Device drivers 6.1. New drivers Table 6.1. Cryptographic drivers Description Name Limited to architectures IAA Compression Accelerator Crypto Driver iaa_crypto AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 intel_qat AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_4xxx AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_c3xxx AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_c3xxxvf AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_c62x AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_c62xvf AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_dh895xcc AMD and Intel 64-bit architectures Intel(R) QuickAssist Technology - 0.6.0 qat_dh895xccvf AMD and Intel 64-bit architectures Table 6.2. Network drivers Description Name Limited to architectures bcm-phy-ptp 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures mt7925-common 64-bit ARM architecture, AMD and Intel 64-bit architectures mt7925e 64-bit ARM architecture, AMD and Intel 64-bit architectures mt792x-lib 64-bit ARM architecture, AMD and Intel 64-bit architectures CAN bus driver for Bosch M_CAN controller on PCI bus m_can_pci IBM Power Systems, AMD and Intel 64-bit architectures CAN bus driver for Bosch M_CAN controller m_can IBM Power Systems, AMD and Intel 64-bit architectures CAN driver for 8 devices USB2CAN interfaces usb_8dev IBM Power Systems, AMD and Intel 64-bit architectures CAN driver for EMS Dr. Thomas Wuensche CAN/USB interfaces ems_usb IBM Power Systems, AMD and Intel 64-bit architectures CAN driver for Kvaser CAN/USB devices kvaser_usb IBM Power Systems, AMD and Intel 64-bit architectures CAN driver for PEAK-System USB adapters peak_usb IBM Power Systems, AMD and Intel 64-bit architectures Intel(R) Infrastructure Data Path Function Linux Driver idpf 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Marvell 88Q2XXX 100/1000BASE-T1 Automotive Ethernet PHY driver marvell-88q2xxx 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Marvell Octeon EndPoint NIC Driver octeon_ep 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Microchip 251x/25625 CAN driver mcp251x AMD and Intel 64-bit architectures Microchip MCP251xFD Family CAN controller driver mcp251xfd AMD and Intel 64-bit architectures NXP imx8 DWMAC Specific Glue layer dwmac-imx 64-bit ARM architecture bcm-phy-ptp 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Realtek 802.11ax wireless 8852C driver rtw89_8852c 64-bit ARM architecture, AMD and Intel 64-bit architectures Realtek 802.11ax wireless 8852CE driver rtw89_8852ce 64-bit ARM architecture, AMD and Intel 64-bit architectures serial line CAN interface slcan IBM Power Systems, AMD and Intel 64-bit architectures Socket-CAN driver for PEAK PCAN PCIe/M.2 FD family cards peak_pciefd IBM Power Systems, AMD and Intel 64-bit architectures bcm-phy-ptp 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures mt7925-common 64-bit ARM architecture, AMD and Intel 64-bit architectures mt7925e 64-bit ARM architecture, AMD and Intel 64-bit architectures mt792x-lib 64-bit ARM architecture, AMD and Intel 64-bit architectures Table 6.3. 
Platform drivers Description Name Limited to architectures AMD HSMP Platform Interface Driver - 2.0 amd_hsmp AMD and Intel 64-bit architectures AMD Platform Management Framework Driver amd-pmf AMD and Intel 64-bit architectures Intel TPMI enumeration module intel_vsec_tpmi AMD and Intel 64-bit architectures Intel TPMI SST Driver isst_tpmi AMD and Intel 64-bit architectures Intel TPMI UFS Driver intel-uncore-frequency-tpmi AMD and Intel 64-bit architectures Intel Uncore Frequency Common Module intel-uncore-frequency-common AMD and Intel 64-bit architectures Intel Uncore Frequency Limits Driver intel-uncore-frequency AMD and Intel 64-bit architectures Intel WMI Thunderbolt force power driver intel-wmi-thunderbolt AMD and Intel 64-bit architectures Mellanox PMC driver mlxbf-pmc 64-bit ARM architecture intel-hid AMD and Intel 64-bit architectures isst_tpmi_core AMD and Intel 64-bit architectures Table 6.4. Graphics drivers and miscellaneous drivers Description Name Limited to architectures AMD XCP Platform Devices amdxcp 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures DRM execution context drm_exec Range suballocator helper drm_suballoc_helper 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures regmap-ram 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures regmap-raw-ram 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures regmap-ram 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures regmap-raw-ram 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures regmap-ram 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures regmap-raw-ram 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Arm FF-A interface driver ffa-module 64-bit ARM architecture NVIDIA BlueField-3 GPIO Driver gpio-mlxbf3 64-bit ARM architecture I/O Address Space Management for passthrough devices iommufd CS42L43 Core Driver cs42l43 AMD and Intel 64-bit architectures CS42L43 SoundWire Driver cs42l43-sdw AMD and Intel 64-bit architectures MEI GSC Proxy mei_gsc_proxy AMD and Intel 64-bit architectures pwrseq_emmc 64-bit ARM architecture pwrseq_simple 64-bit ARM architecture SDHCI platform driver for Synopsys DWC MSHC sdhci-of-dwcmshc 64-bit ARM architecture arm_cspmu_module 64-bit ARM architecture NVIDIA pinctrl driver pinctrl-mlxbf3 64-bit ARM architecture NXP i.MX93 power domain driver imx93-pd 64-bit ARM architecture Intel RAPL TPMI Driver intel_rapl_tpmi AMD and Intel 64-bit architectures Mellanox BlueField power driver pwr-mlxbf 64-bit ARM architecture NXP i.MX93 src driver imx93-src 64-bit ARM architecture Provide Trusted Security Module attestation reports via configfs tsm AMD and Intel 64-bit architectures 6.2. Updated drivers Table 6.5. Storage driver updates Description Name Current version Limited to architectures Broadcom MegaRAID SAS Driver megaraid_sas 07.727.03.00-rc1 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Driver for Microchip Smart Family Controller smartpqi 2.1.24-046 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures Emulex LightPulse Fibre Channel SCSI driver lpfc 0:14.2.0.16 64-bit ARM architecture, IBM Power Systems, AMD and Intel 64-bit architectures MPI3 Storage Controller Device Driver mpi3mr 8.5.0.0.50 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.5_release_notes/device_drivers |
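On a running RHEL 9 host, the listed module names can be checked directly against what the kernel actually ships; the sketch below assumes the modules expose a version field, which most storage drivers in the tables do.
modinfo -F version mpi3mr        # for example 8.5.0.0.50
modinfo -F version smartpqi      # for example 2.1.24-046
lsmod | grep -E 'idpf|octeon_ep' # confirm whether one of the new network drivers is loaded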
13.2. Types | 13.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following example creates a new file in the /var/www/html/ directory, and shows the file inheriting the httpd_sys_content_t type from its parent directory ( /var/www/html/ ): Enter the following command to view the SELinux context of /var/www/html/ : This shows /var/www/html/ is labeled with the httpd_sys_content_t type. Create a new file by using the touch utility as root: Enter the following command to view the SELinux context: The ls -Z command shows file1 labeled with the httpd_sys_content_t type. SELinux allows httpd to read files labeled with this type, but not write to them, even if Linux permissions allow write access. SELinux policy defines what types a process running in the httpd_t domain (where httpd runs) can read and write to. This helps prevent processes from accessing files intended for use by another process. For example, httpd can access files labeled with the httpd_sys_content_t type (intended for the Apache HTTP Server), but by default, cannot access files labeled with the samba_share_t type (intended for Samba). Also, files in user home directories are labeled with the user_home_t type: by default, this prevents httpd from reading or writing to files in user home directories. The following lists some of the types used with httpd . Different types allow you to configure flexible access: httpd_sys_content_t Use this type for static web content, such as .html files used by a static website. Files labeled with this type are accessible (read only) to httpd and scripts executed by httpd . By default, files and directories labeled with this type cannot be written to or modified by httpd or other processes. Note that by default, files created in or copied into the /var/www/html/ directory are labeled with the httpd_sys_content_t type. httpd_sys_script_exec_t Use this type for scripts you want httpd to execute. This type is commonly used for Common Gateway Interface (CGI) scripts in the /var/www/cgi-bin/ directory. By default, SELinux policy prevents httpd from executing CGI scripts. To allow this, label the scripts with the httpd_sys_script_exec_t type and enable the httpd_enable_cgi Boolean. Scripts labeled with httpd_sys_script_exec_t run in the httpd_sys_script_t domain when executed by httpd . The httpd_sys_script_t domain has access to other system domains, such as postgresql_t and mysqld_t . httpd_sys_rw_content_t Files labeled with this type can be written to by scripts labeled with the httpd_sys_script_exec_t type, but cannot be modified by scripts labeled with any other type. You must use the httpd_sys_rw_content_t type to label files that will be read from and written to by scripts labeled with the httpd_sys_script_exec_t type. httpd_sys_ra_content_t Files labeled with this type can be appended to by scripts labeled with the httpd_sys_script_exec_t type, but cannot be modified by scripts labeled with any other type. You must use the httpd_sys_ra_content_t type to label files that will be read from and appended to by scripts labeled with the httpd_sys_script_exec_t type. 
httpd_unconfined_script_exec_t Scripts labeled with this type run without SELinux protection. Only use this type for complex scripts, after exhausting all other options. It is better to use this type instead of disabling SELinux protection for httpd , or for the entire system. Note To see more of the available types for httpd, enter the following command: Procedure 13.1. Changing the SELinux Context The type for files and directories can be changed with the chcon command. Changes made with chcon do not survive a file system relabel or the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. The following example demonstrates creating a new directory and an index.html file for use by httpd , and labeling that file and directory to allow httpd access to them: Use the mkdir utility as root to create a top-level directory structure to store files to be used by httpd : Files and directories that do not match a pattern in file-context configuration may be labeled with the default_t type. This type is inaccessible to confined services: Enter the following command as root to change the type of the my/ directory and subdirectories, to a type accessible to httpd . Now, files created under /my/website/ inherit the httpd_sys_content_t type, rather than the default_t type, and are therefore accessible to httpd: See Section 4.7.1, "Temporary Changes: chcon" for further information about chcon . Use the semanage fcontext command ( semanage is provided by the policycoreutils-python package) to make label changes that survive a relabel and the restorecon command. This command adds changes to file-context configuration. Then, run restorecon , which reads file-context configuration, to apply the label change. The following example demonstrates creating a new directory and an index.html file for use by httpd , and persistently changing the label of that directory and file to allow httpd access to them: Use the mkdir utility as root to create a top-level directory structure to store files to be used by httpd : Enter the following command as root to add the label change to file-context configuration: The "/my(/.*)?" expression means the label change applies to the my/ directory and all files and directories under it. Use the touch utility as root to create a new file: Enter the following command as root to apply the label changes ( restorecon reads file-context configuration, which was modified by the semanage command in step 2): See Section 4.7.2, "Persistent Changes: semanage fcontext" for further information on semanage. | [
"~]USD ls -dZ /var/www/html drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html",
"~]# touch /var/www/html/file1",
"~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file1",
"~]USD grep httpd /etc/selinux/targeted/contexts/files/file_contexts",
"~]# mkdir -p /my/website",
"~]USD ls -dZ /my drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /my",
"~]# chcon -R -t httpd_sys_content_t /my/ ~]# touch /my/website/index.html ~]# ls -Z /my/website/index.html -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /my/website/index.html",
"~]# mkdir -p /my/website",
"~]# semanage fcontext -a -t httpd_sys_content_t \"/my(/.*)?\"",
"~]# touch /my/website/index.html",
"~]# restorecon -R -v /my/ restorecon reset /my context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /my/website context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /my/website/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-the_apache_http_server-types |
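Illustrative addition (these commands are not shown in the original section): the httpd_enable_cgi Boolean mentioned above can be inspected with getsebool and enabled with setsebool; the -P option makes the change persistent across reboots.
~]$ getsebool httpd_enable_cgi
~]# setsebool -P httpd_enable_cgi on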
1.3. Resource Controllers in Linux Kernel | 1.3. Resource Controllers in Linux Kernel A resource controller, also called a cgroup subsystem, represents a single resource, such as CPU time or memory. The Linux kernel provides a range of resource controllers, that are mounted automatically by systemd . Find the list of currently mounted resource controllers in /proc/cgroups , or use the lssubsys monitoring tool. In Red Hat Enterprise Linux 7, systemd mounts the following controllers by default: Available Controllers in Red Hat Enterprise Linux 7 blkio - sets limits on input/output access to and from block devices; cpu - uses the CPU scheduler to provide cgroup tasks access to the CPU. It is mounted together with the cpuacct controller on the same mount; cpuacct - creates automatic reports on CPU resources used by tasks in a cgroup. It is mounted together with the cpu controller on the same mount; cpuset - assigns individual CPUs (on a multicore system) and memory nodes to tasks in a cgroup; devices - allows or denies access to devices for tasks in a cgroup; freezer - suspends or resumes tasks in a cgroup; memory - sets limits on memory use by tasks in a cgroup and generates automatic reports on memory resources used by those tasks; net_cls - tags network packets with a class identifier ( classid ) that allows the Linux traffic controller (the tc command) to identify packets originating from a particular cgroup task. A subsystem of net_cls , the net_filter (iptables) can also use this tag to perform actions on such packets. The net_filter tags network sockets with a firewall identifier ( fwid ) that allows the Linux firewall (the iptables command) to identify packets (skb->sk) originating from a particular cgroup task; perf_event - enables monitoring cgroups with the perf tool; hugetlb - allows to use virtual memory pages of large sizes and to enforce resource limits on these pages. The Linux kernel exposes a wide range of tunable parameters for resource controllers that can be configured with systemd . See the kernel documentation (list of references in the Controller-Specific Kernel Documentation section) for detailed description of these parameters. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/br-resource_controllers_in_linux_kernel |
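For illustration (assuming a Red Hat Enterprise Linux 7 system with systemd; the service name and values are examples only), you can list the available controllers and apply a controller parameter to a service as follows:
~]$ cat /proc/cgroups
~]$ lssubsys -am
~]# systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M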
Preface | Preface Providing feedback on Red Hat documentation Red Hat appreciates your feedback on product documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to help the documentation team to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in other fields at their default values. In the Reporter field, enter your Jira user name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/introduction_to_connectivity_link/pr01
Chapter 1. Preface | Chapter 1. Preface 1.1. About Red Hat Gluster Storage Red Hat Gluster Storage is a software-only, scale-out storage solution that provides flexible and agile unstructured data storage for the enterprise. Red Hat Gluster Storage provides new opportunities to unify data storage and infrastructure, increase performance, and improve availability and manageability in order to meet a broader set of an organization's storage challenges and needs. The product can be installed and managed on-premises, or in a public cloud. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-platform_introduction |
Chapter 1. Welcome to Red Hat Advanced Cluster Management for Kubernetes | Chapter 1. Welcome to Red Hat Advanced Cluster Management for Kubernetes Kubernetes provides a platform for deploying and managing containers in a standard, consistent control plane. However, as application workloads move from development to production, they often require multiple fit-for-purpose Kubernetes clusters to support DevOps pipelines. Note: Use of this Red Hat product requires licensing and subscription agreement. Users, such as administrators and site reliability engineers, face challenges as they work across a range of environments, including multiple data centers, private clouds, and public clouds that run Kubernetes clusters. Red Hat Advanced Cluster Management for Kubernetes provides the tools and capabilities to address these common challenges. Red Hat Advanced Cluster Management for Kubernetes provides end-to-end management visibility and control to manage your Kubernetes environment. Take control of your application modernization program with management capabilities for cluster creation, application lifecycle, and provide security and compliance for all of them across hybrid cloud environments. Clusters and applications are all visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet. The Welcome page from the Red Hat Advanced Cluster Management for Kubernetes console has a header that displays the Applications switcher to return to Red Hat OpenShift Container Platform and more. The tiles describe the main functions of the product and link to important console pages. For more information, see the Console overview . With Red Hat Advanced Cluster Management for Kubernetes: Work across a range of environments, including multiple data centers, private clouds and public clouds that run Kubernetes clusters. Easily create Kubernetes clusters and offer cluster lifecycle management in a single console. Enforce policies at the target clusters using Kubernetes-supported custom resource definitions. Deploy and maintain day-two operations of business applications distributed across your cluster landscape. This guide assumes that users are familiar with Kubernetes concepts and terminology. For more information about Kubernetes concepts, see Kubernetes Documentation . Access the Red Hat Advanced Cluster Management 2.11 Support Matrix to learn about hub cluster and managed cluster requirements and support. See the following documentation for information about the product: Multicluster architecture Glossary of terms 1.1. Multicluster architecture Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components, which are used to access and manage your clusters. Learn more about the architecture in the following sections, then follow the links to more detailed documentation. Access the Red Hat Advanced Cluster Management 2.11 Support Matrix to learn about hub cluster and managed cluster requirements and support. See the following high-level multicluster terms and components: Hub cluster Managed cluster Cluster lifecycle Application lifecycle Governance Observability References 1.1.1. Hub cluster The hub cluster is the common term that is used to define the central controller that runs in a Red Hat Advanced Cluster Management for Kubernetes cluster. 
From the hub cluster, you can access the console and product components, as well as the Red Hat Advanced Cluster Management APIs. You can also use the console to search resources across clusters and view your topology. Additionally, you can enable observability on your hub cluster to monitor metrics from your managed clusters across your cloud providers. The Red Hat Advanced Cluster Management hub cluster uses the MultiClusterHub operator to manage, upgrade, and install hub cluster components and runs in the open-cluster-management namespace. The hub cluster aggregates information from multiple clusters by using an asynchronous work request model and search collectors. The hub cluster maintains the state of clusters and applications that run on it. The local cluster is the term used to define a hub cluster that is also a managed cluster, discussed in the following sections. 1.1.2. Managed cluster The managed cluster is the term that is used to define additional clusters that are managed by the hub cluster. The connection between the two is completed by using the klusterlet , which is the agent that is installed on the managed cluster. The managed cluster receives and applies requests from the hub cluster and enables it to service cluster lifecycle, application lifecycle, governance, and observability on the managed cluster. For example, managed clusters send metrics to the hub cluster if the observability service is enabled. See Observing environments to receive metrics and optimize the health of all managed clusters. 1.1.3. Cluster lifecycle Red Hat Advanced Cluster Management cluster lifecycle defines the process of creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers. The cluster lifecycle function is provided by the multicluster engine for Kubernetes operator, which is installed automatically with Red Hat Advanced Cluster Management. See Cluster lifecycle introduction for general information about the cluster lifecycle function. From the hub cluster console, you can view an aggregation of all cluster health statuses, or view individual health metrics of many Kubernetes clusters. Additionally, you can upgrade managed OpenShift Container Platform clusters individually or in bulk, as well as destroy any OpenShift Container Platform clusters that you created using your hub cluster. From the console, you can also hibernate, resume, and detach clusters. 1.1.4. Application lifecycle Red Hat Advanced Cluster Management Application lifecycle defines the processes that are used to manage application resources on your managed clusters. A multicluster application allows you to deploy resources on multiple managed clusters, as well as maintain full control of Kubernetes resource updates for all aspects of the application with high availability. A multicluster application uses the Kubernetes specification, but provides additional automation of the deployment and lifecycle management of resources. Ansible Automation Platform jobs allow you to automate tasks. You can also set up a continuous GitOps environment to automate application consistency across clusters in development, staging, and production environments. See Managing applications for more application topics. 1.1.5. Governance Governance enables you to define policies that either enforce security compliance, or inform you of changes that violate the configured compliance requirements for your environment. 
Using dynamic policy templates, you can manage the policies and compliance requirements across all of your management clusters from a central interface. For more information, see the Security overview . Additionally, learn about access requirements from the Role-based access control documentation. After you configure a Red Hat Advanced Cluster Management hub cluster and a managed cluster, you can view and create policies with the Red Hat Advanced Cluster Management policy framework. You can visit the policy-collection open community to see what policies community members created and contributed, as well as contribute your own policies for others to use. 1.1.6. Observability The Observability component collects and reports the status and health of the OpenShift Container Platform version 4.x or later, managed clusters to the hub cluster, which are visible from the Grafana dashboard. You can create custom alerts to inform you of problems with your managed clusters. Because it requires configured persistent storage, Observability must be enabled after the Red Hat Advanced Cluster Management installation. For more information about Observability, see Observing environments introduction . 1.1.7. References Learn more about the release from the Release notes . See the product Installing and upgrading section to prepare your cluster and get configuration information. See Cluster lifecycle overview for more information about the operator that provides the cluster lifecycle features. 1.2. Glossary of terms Red Hat Advanced Cluster Management for Kubernetes consists of several multicluster components that are defined in the following sections. Additionally, some common Kubernetes terms are used within the product. Terms are listed alphabetically. 1.2.1. Relevant standardized glossaries Kubernetes glossary 1.2.2. Red Hat Advanced Cluster Management for Kubernetes terms 1.2.2.1. Application lifecycle The processes that are used to manage application resources on your managed clusters. A multicluster application uses a Kubernetes specification, but with additional automation of the deployment and lifecycle management of resources to individual clusters. 1.2.2.2. Channel A custom resource definition that references repositories where Kubernetes resources are stored, such as Git repositories, Helm chart repositories, ObjectStore repositories, or namespaces templates on the hub cluster. Channels support multiple subscriptions from multiple targets. 1.2.2.3. Cluster lifecycle Defines the process of creating, importing, and managing clusters across public and private clouds. 1.2.2.4. Console The graphical user interface for Red Hat Advanced Cluster Management for Kubernetes. 1.2.2.5. Deployable A resource that retrieves the output of a build, packages the output with configuration properties, and installs the package in a pre-defined location so that it can be tested or run. 1.2.2.6. Governance The Red Hat Advanced Cluster Management for Kubernetes processes used to manage security and compliance. 1.2.2.7. Hosted cluster An OpenShift Container Platform API endpoint that is managed by HyperShift. 1.2.2.8. Hosted cluster infrastructure Resources that exist in the customer cloud account, including network, compute, storage, and so on. 1.2.2.9. Hosted control plane An OpenShift Container Platform control plane that is running on the hosting service cluster, which is exposed by the API endpoint of a hosted cluster. 
The component parts of a control plane include etcd , apiserver , kube-controller-manager , vpn , and other components. 1.2.2.10. Hosted control plane infrastructure Resources on the management cluster or external cloud provider that are prerequisites to running hosted control plane processes. 1.2.2.11. Hosting service cluster An OpenShift Container Platform cluster that hosts the HyperShift operator and zero-to-many hosted clusters. 1.2.2.12. Hosted service cluster infrastructure Resources of the hosting service cluster, including network, compute, storage, and so on. 1.2.2.13. Hub cluster The central controller that runs in a Red Hat Advanced Cluster Management for Kubernetes cluster. From the hub cluster, you can access the console and components found on that console, as well as APIs. 1.2.2.14. klusterlet The agent that contains two controllers on the managed cluster that initiates a connection to the Red Hat Advanced Cluster Management for Kubernetes hub cluster. 1.2.2.15. Klusterlet add-on Specialized controller on the Klusterlet that provides additional management capability. 1.2.2.16. Managed cluster Created and imported clusters are managed by the klusterlet agent and its add-ons, which initiates a connection to the Red Hat Advanced Cluster Management for Kubernetes hub cluster. 1.2.2.17. Placement binding A resource that binds a placement to a policy. 1.2.2.18. Placement policy A policy that defines where the application components are deployed and how many replicas there are. 1.2.2.19. Subscriptions A resource that identifies the Kubernetes resources within channels (resource repositories), then places the Kubernetes resource on the target clusters. | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.11/html/about/welcome-to-red-hat-advanced-cluster-management-for-kubernetes |
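As a minimal illustration of the hub and managed cluster concepts described above (assuming the oc CLI is logged in to the hub cluster; the namespace shown is the default and may differ in your environment), you can list the clusters known to the hub and check the MultiClusterHub operator resource:
$ oc get managedclusters
$ oc get multiclusterhub -n open-cluster-management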
34.2. Removing Red Hat Enterprise Linux from IBM Z | 34.2. Removing Red Hat Enterprise Linux from IBM Z If you want to delete the existing operating system data, first, if any Linux disks contain sensitive data, ensure that you destroy the data according to your security policy. To proceed, you can consider these options: Overwrite the disks with a new installation. Make the DASD or SCSI disk where Linux was installed visible from another system, then delete the data. However, this might require special privileges. Ask your system administrator for advice. You can use Linux commands such as dasdfmt (DASD only), parted , mke2fs or dd . For more details about the commands, see the respective man pages. 34.2.1. Running a Different Operating System on Your z/VM Guest or LPAR If you want to boot from a DASD or SCSI disk different from where the currently installed system resides under a z/VM guest virtual machine or an LPAR, shut down the installed Red Hat Enterprise Linux system and use the desired disk, where another Linux instance is installed, to boot from. This leaves the contents of the installed system unchanged. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-uninstall-rhel-s390
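The following sketch is illustrative only; the device nodes /dev/dasdb and /dev/sdb are assumptions, and overwriting the wrong device is irreversible, so verify the target device first. A DASD can be reformatted with dasdfmt, and a SCSI disk can be overwritten with dd, for example:
~]# dasdfmt -b 4096 -d cdl /dev/dasdb
~]# dd if=/dev/zero of=/dev/sdb bs=1M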
Installing an on-premise cluster with the Agent-based Installer | Installing an on-premise cluster with the Agent-based Installer OpenShift Container Platform 4.14 Installing an on-premise OpenShift Container Platform cluster with the Agent-based Installer Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_an_on-premise_cluster_with_the_agent-based_installer/index |
Chapter 15. The Football Quickstart Endpoint Examples | Chapter 15. The Football Quickstart Endpoint Examples The Football application is a simple example to illustrate the use of Red Hat JBoss Data Grid endpoints, namely Hot Rod, REST, and Memcached. Each example shows one of these protocols used to connect to JBoss Data Grid to remotely store, retrieve, and remove data from caches. Each application is a variation of a simple football team manager utility as a console application. Features The following features are available with the example Football Manager application: Add a team Add players Remove all entities (teams and players) Listing all teams and players Location JBoss Data Grid's Football quickstart can be found at the following locations: jboss-datagrid-{VERSION}-quickstarts/rest-endpoint jboss-datagrid-{VERSION}-quickstarts/hotrod-endpoint jboss-datagrid-{VERSION}-quickstarts/memcached-endpoint 15.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 6.0 (Java SDK 1.6) or better JBoss Enterprise Application Platform 6.x or JBoss Enterprise Web Server 2.x Maven 3.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-The_Football_Quickstart_Endpoint_Examples
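As a hedged illustration (the exact build and run goals are described in each quickstart's README and are not reproduced here), the quickstarts are typically built with Maven from their respective directories, for example:
$ cd jboss-datagrid-{VERSION}-quickstarts/hotrod-endpoint
$ mvn clean package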
7.203. strace | 7.203. strace 7.203.1. RHBA-2015:1308 - strace bug fix and enhancement update Updated strace packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The strace utility intercepts and records the system calls that are made and received by a running process and prints a record of each system call, its arguments, and its return value to standard error output or a file. It is often used for problem diagnoses, debugging, and for instructional purposes. Note The strace packages have been upgraded to upstream version 4.8, which provides a number of bug fixes and enhancements over the previous version. (BZ# 919101 , BZ# 1056828 ) Bug Fixes BZ# 919101 , BZ# 1056828 This update adds several new command-line options: "-y" to print file descriptor paths, "-P" to filter system calls based on the file descriptor paths, and "-I" to control how interactive strace is. A new command-line utility, strace-log-merge, has been added. This utility can be used to merge timestamped strace output into a single file. The strace utility now uses optimized interfaces to extract data from the traced process for better performance. The strace utility now provides improved support for decoding of arguments for various system calls. In addition, a number of new system calls are supported. BZ# 877193 Previously, the strace utility incorrectly handled the return value from the shmat() system call. Consequently, the return value displayed was "?" instead of the address of the attached shared memory segment. This bug has been fixed, and strace now displays the correct return value for the shmat() system calls. Users of strace are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-strace
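Illustrative usage of the options and utility described above (the traced programs and file paths are placeholders): -y prints file descriptor paths, -P restricts tracing to system calls that access the given path, and strace-log-merge combines the per-process logs produced with -ff and -tt.
~]$ strace -y -P /etc/ld.so.cache -o /tmp/ls.trace ls
~]$ strace -ff -tt -o /tmp/app.trace ./myapp
~]$ strace-log-merge /tmp/app.trace > /tmp/app.trace.merged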
8.139. papi | 8.139. papi 8.139.1. RHBA-2013:1587 - papi bug fix and enhancement update Updated papi packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. PAPI (Performance Application Programming Interface) is a software library that provides access to the processor's performance-monitoring hardware. This allows developers to track performance-related events, such as cache misses, instructions retired, and clock cycles, to better understand the performance issues of the software. Note The papi packages have been upgraded to upstream version 5.1.1, which provides a number of bug fixes and enhancements over the previous version, including support for Intel Xeon Processor E5-XXXX v2 architecture. (BZ# 831751 ) Bug Fixes BZ# 740909 Due to missing dependencies in the makefile, a parallel rebuild of the PAPI library failed. With this update, new rules have been added to the makefile to address this problem. As a result, PAPI can be successfully rebuilt in the described scenario. BZ# 785258 Previously, when Hyper-threading was enabled on the Intel Xeon Processor E5-XXXX node, the PAPI library could not configure the performance-monitoring hardware to count floating-point operations. This bug has been fixed and the aforementioned error no longer occurs. BZ# 883475 Due to an incorrect ldconfig setting in the papi.spec file, papi failed to be rebuilt from the srpm file when the process was executed by the root user. With this update, the underlying source code has been modified to fix this bug. BZ# 883766 Previously, the papi package failed to be built from the srpm file when a previous version of papi was installed. During the build, the new version of papi attempted to link to the libpfm.so file of the previously installed papi-devel package, which caused papi to terminate unexpectedly. With this update, a patch has been introduced to reorder the sequence of file linking during the build, so that the locally built files are used first. As a result, papi is built correctly with a previous version installed. Enhancements BZ# 726798 , BZ# 831751 , BZ# 947622 Support for the Intel Xeon Processor E5-XXXX and Intel Xeon Processor E5-XXXX architectures has been added to the PAPI library. BZ# 743648 Support for access to various energy and performance registers through PAPI has been added. BZ# 785975 With this update, several minor grammatical errors have been corrected in the PAPI interface. BZ# 866590 The papi-static subpackage has been added to provide the libraries for static linking. BZ# 910163 The papi-testsuite subpackage has been added to allow testing of papi. All users of papi are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/papi
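As a brief illustration (output omitted; this assumes the papi utilities are installed from the papi packages), the preset and native events supported on the local processor can be listed after installation:
~]$ papi_avail
~]$ papi_native_avail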
4.3. Growing a File System on a Logical Volume | 4.3. Growing a File System on a Logical Volume To grow a file system on a logical volume, perform the following steps: Make a new physical volume. Extend the volume group that contains the logical volume with the file system you are growing to include the new physical volume. Extend the logical volume to include the new physical volume. Grow the file system. If you have sufficient unallocated space in the volume group, you can use that space to extend the logical volume instead of performing steps 1 and 2. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/fsgrow_overview |
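A minimal sketch of these steps (the device, volume group, and logical volume names are assumptions, and the file system here is ext4; lvextend -r can combine the last two steps):
~]# pvcreate /dev/vdb1
~]# vgextend myvg /dev/vdb1
~]# lvextend -L +20G /dev/myvg/mylv
~]# resize2fs /dev/myvg/mylv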
Chapter 16. Configuring the overcloud with Ansible | Chapter 16. Configuring the overcloud with Ansible Ansible is the main method to apply the overcloud configuration. This chapter provides information about how to interact with the overcloud Ansible configuration. Although director generates the Ansible playbooks automatically, it is a good idea to familiarize yourself with Ansible syntax. For more information about using Ansible, see https://docs.ansible.com/ . Note Ansible also uses the concept of roles, which are different to OpenStack Platform director roles. Ansible roles form reusable components of playbooks, whereas director roles contain mappings of OpenStack services to node types. 16.1. Ansible-based overcloud configuration (config-download) The config-download feature is the method that director uses to configure the overcloud. Director uses config-download in conjunction with OpenStack Orchestration (heat) to generate the software configuration and apply the configuration to each overcloud node. Although heat creates all deployment data from SoftwareDeployment resources to perform the overcloud installation and configuration, heat does not apply any of the configuration. Heat only provides the configuration data through the heat API. As a result, when you run the openstack overcloud deploy command, the following process occurs: Director creates a new deployment plan based on openstack-tripleo-heat-templates and includes any environment files and parameters to customize the plan. Director uses heat to interpret the deployment plan and create the overcloud stack and all descendant resources. This includes provisioning nodes with the OpenStack Bare Metal service (ironic). Heat also creates the software configuration from the deployment plan. Director compiles the Ansible playbooks from this software configuration. Director generates a temporary user ( tripleo-admin ) on the overcloud nodes specifically for Ansible SSH access. Director downloads the heat software configuration and generates a set of Ansible playbooks using heat outputs. Director applies the Ansible playbooks to the overcloud nodes using ansible-playbook . 16.2. config-download working directory The ansible-playbook command creates an Ansible project directory, default name ~/config-download/overcloud . This project directory stores downloaded software configuration from heat. It includes all Ansible-related files which you need to run ansible-playbook to configure the overcloud. The contents of the directory include: tripleo-ansible-inventory.yaml - Ansible inventory file containing hosts and vars for all the overcloud nodes. ansible.log - Log file from the most recent run of ansible-playbook . ansible.cfg - Configuration file used when running ansible-playbook . ansible-playbook-command.sh - Executable script used to rerun ansible-playbook . ssh_private_key - Private ssh key used to access the overcloud nodes. Reproducing ansible-playbook After the project directory is created, run the ansible-playbook-command.sh command to reproduce the deployment. You can run the script with additional arguments, such as check mode --check , limiting hosts --limit , and overriding variables -e . 16.3. Checking config-download log During the config-download process, Ansible creates a log file, named ansible.log , in the /home/stack directory on the undercloud. Procedure View the log with the less command: 16.4. Performing Git operations on the working directory The config-download working directory is a local Git repository. 
Every time a deployment operation runs, director adds a Git commit to the working directory with the relevant changes. You can perform Git operations to view configuration for the deployment at different stages and compare the configuration with different deployments. Be aware of the limitations of the working directory. For example, if you use Git to revert to a version of the config-download working directory, this action affects only the configuration in the working directory. It does not affect the following configurations: The overcloud data schema: Applying a version of the working directory software configuration does not undo data migration and schema changes. The hardware layout of the overcloud: Reverting to software configuration does not undo changes related to overcloud hardware, such as scaling up or down. The heat stack: Reverting to earlier revisions of the working directory has no effect on the configuration stored in the heat stack. The heat stack creates a new version of the software configuration that applies to the overcloud. To make permanent changes to the overcloud, modify the environment files applied to the overcloud stack before you rerun the openstack overcloud deploy command. Complete the following steps to compare different commits of the config-download working directory. Procedure Change to the config-download working directory for your overcloud, usually named overcloud : Run the git log command to list the commits in your working directory. You can also format the log output to show the date: By default, the most recent commit appears first. Run the git diff command against two commit hashes to see all changes between the deployments: 16.5. Deployment methods that use config-download There are four main methods that use config-download in the context of an overcloud deployment: Standard deployment Run the openstack overcloud deploy command to automatically run the configuration stage after the provisioning stage. This is the default method when you run the openstack overcloud deploy command. Separate provisioning and configuration Run the openstack overcloud deploy command with specific options to separate the provisioning and configuration stages. Run the ansible-playbook-command.sh script after a deployment Run the openstack overcloud deploy command with combined or separate provisioning and configuration stages, then run the ansible-playbook-command.sh script supplied in the config-download working directory to re-apply the configuration stage. Provision nodes, manually create config-download, and run Ansible Run the openstack overcloud deploy command with a specific option to provision nodes, then run the ansible-playbook command with the deploy_steps_playbook.yaml playbook. 16.6. Running config-download on a standard deployment The default method for executing config-download is to run the openstack overcloud deploy command. This method suits most environments. Prerequisites A successful undercloud installation. Overcloud nodes ready for deployment. Heat environment files that are relevant to your specific overcloud customization. Procedure Log in to the undercloud host as the stack user. Source the stackrc file: Run the deployment command. Include any environment files that you require for your overcloud: Wait until the deployment process completes. During the deployment process, director generates the config-download files in a ~/config-download/overcloud working directory. 
After the deployment process finishes, view the Ansible playbooks in the working directory to see the tasks director executed to configure the overcloud. 16.7. Running config-download with separate provisioning and configuration The openstack overcloud deploy command runs the heat-based provisioning process and then the config-download configuration process. You can also run the deployment command to execute each process individually. Use this method to provision your overcloud nodes as a distinct process so that you can perform any manual pre-configuration tasks on the nodes before you run the overcloud configuration process. Prerequisites A successful undercloud installation. Overcloud nodes ready for deployment. Heat environment files that are relevant to your specific overcloud customization. Procedure Log in to the undercloud host as the stack user. Source the stackrc file: Run the deployment command with the --stack-only option. Include any environment files you require for your overcloud: Wait until the provisioning process completes. Enable SSH access from the undercloud to the overcloud for the tripleo-admin user. The config-download process uses the tripleo-admin user to perform the Ansible-based configuration: Perform any manual pre-configuration tasks on nodes. If you use Ansible for configuration, use the tripleo-admin user to access the nodes. Run the deployment command with the --config-download-only option. Include any environment files required for your overcloud: Wait until the configuration process completes. During the configuration stage, director generates the config-download files in a ~/config-download/overcloud working directory. After the deployment process finishes, view the Ansible playbooks in the working directory to see the tasks director executed to configure the overcloud. 16.8. Running config-download with the ansible-playbook-command.sh script When you deploy the overcloud, either with the standard method or a separate provisioning and configuration process, director generates a working directory in ~/config-download/overcloud . This directory contains the playbooks and scripts necessary to run the configuration process again. Prerequisites An overcloud deployed with the one of the following methods: Standard method that combines provisioning and configuration process. Separate provisioning and configuration processes. Procedure Log in to the undercloud host as the stack user. Run the ansible-playbook-command.sh script. You can pass additional Ansible arguments to this script, which are then passed unchanged to the ansible-playbook command. This makes it possible to take advantage of Ansible features, such as check mode ( --check ), limiting hosts ( --limit ), or overriding variables ( -e ). For example: Warning When --limit is used to deploy at scale, only hosts included in the execution are added to the SSH known_hosts file across the nodes. Therefore, some operations, such as live migration, may not work across nodes that are not in the known_hosts file. Note To ensure that the /etc/hosts file, on all nodes, is up-to-date, run the following command as the stack user: Wait until the configuration process completes. Additional information The working directory contains a playbook called deploy_steps_playbook.yaml , which manages the overcloud configuration tasks. To view this playbook, run the following command: The playbook uses various task files contained in the working directory. 
Some task files are common to all OpenStack Platform roles and some are specific to certain OpenStack Platform roles and servers. The working directory also contains sub-directories that correspond to each role that you define in your overcloud roles_data file. For example: Each OpenStack Platform role directory also contains sub-directories for individual servers of that role type. The directories use the composable role hostname format: The Ansible tasks in deploy_steps_playbook.yaml are tagged. To see the full list of tags, use the CLI option --list-tags with ansible-playbook : Then apply tagged configuration using the --tags , --skip-tags , or --start-at-task with the ansible-playbook-command.sh script: When you run the config-download playbooks against the overcloud, you might receive a message regarding the SSH fingerprint for each host. To avoid these messages, include --ssh-common-args="-o StrictHostKeyChecking=no" when you run the ansible-playbook-command.sh script: 16.9. Running config-download with manually created playbooks You can create your own config-download files outside of the standard workflow. For example, you can run the openstack overcloud deploy command with the --stack-only option to provision the nodes, and then manually apply the Ansible configuration separately. Prerequisites A successful undercloud installation. Overcloud nodes ready for deployment. Heat environment files that are relevant to your specific overcloud customization. Procedure Log in to the undercloud host as the stack user. Source the stackrc file: Run the deployment command with the --stack-only option. Include any environment files required for your overcloud: Wait until the provisioning process completes. Enable SSH access from the undercloud to the overcloud for the tripleo-admin user. The config-download process uses the tripleo-admin user to perform the Ansible-based configuration: Generate the config-download files: --stack specifies the name of the overcloud. --stack-only ensures that the command only deploys the heat stack and skips any software configuration. --config-dir specifies the location of the config-download files. Change to the directory that contains your config-download files: Generate a static inventory file: Replace <overcloud> with the name of your overcloud. Use the ~/overcloud-deploy/overcloud/config-download/overcloud files and the static inventory file to perform a configuration. To execute the deployment playbook, run the ansible-playbook command: Note When you run the config-download/overcloud playbooks against the overcloud, you might receive a message regarding the SSH fingerprint for each host. To avoid these messages, include --ssh-common-args="-o StrictHostKeyChecking=no" in your ansible-playbook command: Wait until the configuration process completes. Generate an overcloudrc file manually from the ansible-based configuration: Manually set the deployment status to success: Replace <overcloud> with the name of your overcloud. Note The ~/overcloud-deploy/overcloud/config-download/overcloud/ directory contains a playbook called deploy_steps_playbook.yaml . The playbook uses various task files contained in the working directory. Some task files are common to all Red Hat OpenStack Platform (RHOSP) roles and some are specific to certain RHOSP roles and servers. The ~/overcloud-deploy/overcloud/config-download/overcloud/ directory also contains sub-directories that correspond to each role that you define in your overcloud roles_data file. 
Each RHOSP role directory also contains sub-directories for individual servers of that role type. The directories use the composable role hostname format, for example Controller/overcloud-controller-0 . The Ansible tasks in deploy_steps_playbook.yaml are tagged. To see the full list of tags, use the CLI option --list-tags with ansible-playbook : You can apply tagged configuration using the --tags , --skip-tags , or --start-at-task with the ansible-playbook-command.sh script: 16.10. Limitations of config-download The config-download feature has some limitations: When you use ansible-playbook CLI arguments such as --tags , --skip-tags , or --start-at-task , do not run or apply deployment configuration out of order. These CLI arguments are a convenient way to rerun previously failed tasks or to iterate over an initial deployment. However, to guarantee a consistent deployment, you must run all tasks from deploy_steps_playbook.yaml in order. You cannot use the --start-at-task argument for certain tasks that use a variable in the task name. For example, the --start-at-task argument does not work for the following Ansible task: If your overcloud deployment includes a director-deployed Ceph Storage cluster, you cannot skip step1 tasks when you use the --check option unless you also skip external_deploy_steps tasks. You can set the number of parallel Ansible tasks with the --forks option. However, the performance of config-download operations degrades after 25 parallel tasks. For this reason, do not exceed 25 with the --forks option. 16.11. config-download top level files The following files are important top level files within a config-download working directory. Ansible configuration and execution The following files are specific to configuring and executing Ansible within the config-download working directory. ansible.cfg Configuration file used when running ansible-playbook . ansible.log Log file from the last run of ansible-playbook . ansible-errors.json JSON structured file that contains any deployment errors. ansible-playbook-command.sh Executable script to rerun the ansible-playbook command from the last deployment operation. ssh_private_key Private SSH key that Ansible uses to access the overcloud nodes. tripleo-ansible-inventory.yaml Ansible inventory file that contains hosts and variables for all the overcloud nodes. overcloud-config.tar.gz Archive of the working directory. Playbooks The following files are playbooks within the config-download working directory. deploy_steps_playbook.yaml Main deployment steps. This playbook performs the main configuration operations for your overcloud. pre_upgrade_rolling_steps_playbook.yaml Pre upgrade steps for major upgrade. upgrade_steps_playbook.yaml Major upgrade steps. post_upgrade_steps_playbook.yaml Post upgrade steps for major upgrade. update_steps_playbook.yaml Minor update steps. fast_forward_upgrade_playbook.yaml Fast forward upgrade tasks. Use this playbook only when you want to upgrade from one long-life version of Red Hat OpenStack Platform to the next long-life version. 16.12. config-download tags The playbooks use tagged tasks to control the tasks that they apply to the overcloud. Use tags with the ansible-playbook CLI arguments --tags or --skip-tags to control which tasks to execute. The following list contains information about the tags that are enabled by default: facts Fact gathering operations. common_roles Ansible roles common to all nodes. overcloud All plays for overcloud deployment.
pre_deploy_steps Deployments that happen before the deploy_steps operations. host_prep_steps Host preparation steps. deploy_steps Deployment steps. post_deploy_steps Steps that happen after the deploy_steps operations. external All external deployment tasks. external_deploy_steps External deployment tasks that run on the undercloud only. 16.13. config-download deployment steps The deploy_steps_playbook.yaml playbook configures the overcloud. This playbook applies all software configuration that is necessary to deploy a full overcloud based on the overcloud deployment plan. This section contains a summary of the different Ansible plays used within this playbook. The play names in this section are the same names that are used within the playbook and that are displayed in the ansible-playbook output. This section also contains information about the Ansible tags that are set on each play. Gather facts from undercloud Fact gathering for the undercloud node. Tags: facts Gather facts from overcloud Fact gathering for the overcloud nodes. Tags: facts Load global variables Loads all variables from global_vars.yaml . Tags: always Common roles for TripleO servers Applies common Ansible roles to all overcloud nodes, including tripleo-bootstrap for installing bootstrap packages, and tripleo-ssh-known-hosts for configuring ssh known hosts. Tags: common_roles Overcloud deploy step tasks for step 0 Applies tasks from the deploy_steps_tasks template interface. Tags: overcloud , deploy_steps Server deployments Applies server-specific heat deployments for configuration such as networking and hieradata. Includes NetworkDeployment, <Role>Deployment, <Role>AllNodesDeployment, etc. Tags: overcloud , pre_deploy_steps Host prep steps Applies tasks from the host_prep_steps template interface. Tags: overcloud , host_prep_steps External deployment step [1,2,3,4,5] Applies tasks from the external_deploy_steps_tasks template interface. Ansible runs these tasks only against the undercloud node. Tags: external , external_deploy_steps Overcloud deploy step tasks for [1,2,3,4,5] Applies tasks from the deploy_steps_tasks template interface. Tags: overcloud , deploy_steps Overcloud common deploy step tasks [1,2,3,4,5] Applies the common tasks performed at each step, including puppet host configuration, container-puppet.py , and tripleo-container-manage (container configuration and management). Tags: overcloud , deploy_steps Server Post Deployments Applies server specific heat deployments for configuration performed after the 5-step deployment process. Tags: overcloud , post_deploy_steps External deployment Post Deploy tasks Applies tasks from the external_post_deploy_steps_tasks template interface. Ansible runs these tasks only against the undercloud node. Tags: external , external_deploy_steps | [
"./ansible-playbook-command.sh",
"./ansible-playbook-command.sh --check",
"less ~/ansible.log",
"cd ~/config-download/overcloud",
"git log --format=format:\"%h%x09%cd%x09\" a7e9063 Mon Oct 8 21:17:52 2018 +1000 dfb9d12 Fri Oct 5 20:23:44 2018 +1000 d0a910b Wed Oct 3 19:30:16 2018 +1000",
"git diff a7e9063 dfb9d12",
"source ~/stackrc",
"openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml",
"source ~/stackrc",
"openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml --stack-only",
"openstack overcloud admin authorize",
"openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml --config-download-only",
"./ansible-playbook-command.sh --limit Controller",
"(undercloud)USD cd /home/stack/overcloud-deploy/overcloud/config-download/overcloud (undercloud)USD ANSIBLE_REMOTE_USER=\"tripleo-admin\" ansible allovercloud -i /home/stack/overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml -m include_role -a name=tripleo_hosts_entries -e @global_vars.yaml",
"less deploy_steps_playbook.yaml",
"ls Controller/",
"ls Controller/overcloud-controller-0",
"ansible-playbook -i tripleo-ansible-inventory.yaml --list-tags deploy_steps_playbook.yaml",
"./ansible-playbook-command.sh --tags overcloud",
"./ansible-playbook-command.sh --tags overcloud --ssh-common-args=\"-o StrictHostKeyChecking=no\"",
"source ~/stackrc",
"openstack overcloud deploy --templates -e environment-file1.yaml -e environment-file2.yaml --stack-only",
"openstack overcloud admin authorize",
"openstack overcloud deploy --stack overcloud --stack-only --config-dir ~/overcloud-deploy/overcloud/config-download/overcloud/",
"cd ~/config-download",
"tripleo-ansible-inventory --stack <overcloud> --ansible_ssh_user tripleo-admin --static-yaml-inventory inventory.yaml",
"ansible-playbook -i inventory.yaml -e gather_facts=true -e @global_vars.yaml --private-key ~/.ssh/id_rsa --become ~/overcloud-deploy/overcloud/config-download/overcloud/deploy_steps_playbook.yaml",
"ansible-playbook -i inventory.yaml -e gather_facts=true -e @global_vars.yaml --private-key ~/.ssh/id_rsa --ssh-common-args=\"-o StrictHostKeyChecking=no\" --become --tags overcloud ~/overcloud-deploy/overcloud/config-download/overcloud/deploy_steps_playbook.yaml",
"openstack action execution run --save-result --run-sync tripleo.deployment.overcloudrc '{\"container\":\"overcloud\"}' | jq -r '.[\"result\"][\"overcloudrc.v3\"]' > overcloudrc.v3",
"openstack workflow execution create tripleo.deployment.v1.set_deployment_status_success '{\"plan\": \"<overcloud>\"}'",
"ansible-playbook -i tripleo-ansible-inventory.yaml --list-tags deploy_steps_playbook.yaml",
"ansible-playbook -i inventory.yaml -e gather_facts=true -e @global_vars.yaml --private-key ~/.ssh/id_rsa --become --tags overcloud ~/overcloud-deploy/overcloud/config-download/overcloud/deploy_steps_playbook.yaml",
"- name: Run puppet host configuration for step {{ step }}"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/director_installation_and_usage/assembly_configuring-the-overcloud-with-ansible |
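For example, a constrained rerun of the configuration stage that respects the limitations above might combine a host limit with a reduced fork count (the values shown are illustrative):
./ansible-playbook-command.sh --limit Controller --forks 20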
10.2.4.3. The mod_proxy Module | 10.2.4.3. The mod_proxy Module Proxy access control statements are now placed inside a <Proxy> block rather than a <Directory proxy:> . The caching functionality of the old mod_proxy has been split out into the following three modules: mod_cache mod_disk_cache mod_mem_cache These generally use directives similar to the older versions of the mod_proxy module, but it is advisable to verify each directive before migrating any cache settings. For more on this topic, refer to the following documentation on the Apache Software Foundation's website: http://httpd.apache.org/docs-2.0/mod/mod_proxy.html | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-httpd-v2-mig-mod-proxy |
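As an illustrative sketch of the new syntax (host names and cache paths are examples only), a proxy access control block and a basic disk cache configuration in Apache HTTP Server 2.0 might look as follows:
<Proxy *>
    Order deny,allow
    Deny from all
    Allow from .example.com
</Proxy>
CacheRoot "/var/cache/mod_proxy"
CacheEnable disk /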
Scalability and performance | Scalability and performance OpenShift Container Platform 4.10 Scaling your OpenShift Container Platform cluster and tuning performance in production environments Red Hat OpenShift Documentation Team | [
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf",
"sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Make File System on /dev/sdb DefaultDependencies=no BindsTo=dev-sdb.device After=dev-sdb.device var.mount [email protected] [Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/sdb TimeoutSec=0 [Install] WantedBy=var-lib-containers.mount enabled: true name: [email protected] - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount - contents: | [Unit] Description=Sync etcd data if new mount is empty DefaultDependencies=no After=var-lib-etcd.mount var.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member ExecStart=/usr/sbin/setenforce 0 ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/ ExecStart=/usr/sbin/setenforce 1 TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: sync-var-lib-etcd-to-etcd.service - contents: | [Unit] Description=Restore recursive SELinux security contexts DefaultDependencies=no After=var-lib-etcd.mount Before=crio.service [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/restorecon -R /var/lib/etcd/ TimeoutSec=0 [Install] WantedBy=multi-user.target graphical.target enabled: true name: restorecon-var-lib-etcd.service",
"oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} ... output omitted",
"oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created",
"oc login -u USD{ADMIN} -p USD{ADMINPASSWORD} USD{API} [... output omitted ...]",
"oc create -f etcd-mc.yml machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 98-var-lib-etcd spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Mount /dev/sdb to /var/lib/etcd Before=local-fs.target [email protected] [email protected] var.mount [Mount] What=/dev/sdb Where=/var/lib/etcd Type=xfs Options=defaults,prjquota [Install] WantedBy=local-fs.target enabled: true name: var-lib-etcd.mount",
"oc replace -f etcd-mc.yml",
"etcd member has been defragmented: <member_name> , memberID: <member_id>",
"failed defrag on member: <member_name> , memberID: <member_id> : <error_message>",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-ip-10-0-159-225.example.redhat.com 3/3 Running 0 175m 10.0.159.225 ip-10-0-159-225.example.redhat.com <none> <none> etcd-ip-10-0-191-37.example.redhat.com 3/3 Running 0 173m 10.0.191.37 ip-10-0-191-37.example.redhat.com <none> <none> etcd-ip-10-0-199-170.example.redhat.com 3/3 Running 0 176m 10.0.199.170 ip-10-0-199-170.example.redhat.com <none> <none>",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table",
"Defaulting container name to etcdctl. Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod. +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com",
"sh-4.4# unset ETCDCTL_ENDPOINTS",
"sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag",
"Finished defragmenting etcd member[https://localhost:2379]",
"sh-4.4# etcdctl endpoint status -w table --cluster",
"+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+ | https://10.0.191.37:2379 | 251cd44483d811c3 | 3.4.9 | 104 MB | false | false | 7 | 91624 | 91624 | | | https://10.0.159.225:2379 | 264c7c58ecbdabee | 3.4.9 | 41 MB | false | false | 7 | 91624 | 91624 | | 1 | https://10.0.199.170:2379 | 9ac311f93915cc79 | 3.4.9 | 104 MB | true | false | 7 | 91624 | 91624 | | +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+",
"sh-4.4# etcdctl alarm list",
"memberID:12345678912345678912 alarm:NOSPACE",
"sh-4.4# etcdctl alarm disarm",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.23.0",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf",
"oc create -f enable-rfs.yaml",
"oc get mc",
"oc delete mc 50-enable-rfs",
"cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"oc create -f 05-master-kernelarg-hpav.yaml",
"oc create -f 05-worker-kernelarg-hpav.yaml",
"oc delete -f 05-master-kernelarg-hpav.yaml",
"oc delete -f 05-worker-kernelarg-hpav.yaml",
"<interface type=\"direct\"> <source network=\"net01\"/> <model type=\"virtio\"/> <driver ... queues=\"2\"/> </interface>",
"<domain> <iothreads>3</iothreads> 1 <devices> <disk type=\"block\" device=\"disk\"> 2 <driver ... iothread=\"2\"/> </disk> </devices> </domain>",
"<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>",
"<memballoon model=\"none\"/>",
"sysctl kernel.sched_migration_cost_ns=60000",
"kernel.sched_migration_cost_ns=60000",
"cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]",
"systemctl restart libvirtd",
"echo 0 > /sys/module/kvm/parameters/halt_poll_ns",
"echo 80000 > /sys/module/kvm/parameters/halt_poll_ns",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;",
"oc get profile -n openshift-cluster-node-tuning-operator",
"NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m",
"profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress",
"oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\\n' | sed 's|^.*/||'",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: v1 kind: Namespace metadata: name: openshift-numaresources",
"oc create -f nro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: numaresources-operator namespace: openshift-numaresources spec: targetNamespaces: - openshift-numaresources",
"oc create -f nro-operatorgroup.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: numaresources-operator namespace: openshift-numaresources spec: channel: \"{product-version}\" name: numaresources-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f nro-sub.yaml",
"oc get csv -n openshift-numaresources",
"NAME DISPLAY VERSION REPLACES PHASE numaresources-operator.v4.10.0 NUMA Resources Operator 4.10.0 Succeeded",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: labels: cnf-worker-tuning: enabled machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" name: worker spec: machineConfigSelector: matchLabels: machineconfiguration.openshift.io/role: worker nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"oc create -f nro-machineconfig.yaml",
"apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesOperator metadata: name: numaresourcesoperator spec: nodeGroups: - machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1",
"oc create -f nrop.yaml",
"oc get numaresourcesoperators.nodetopology.openshift.io",
"NAME AGE numaresourcesoperator 10m",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cnf-worker-tuning spec: machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled kubeletConfig: cpuManagerPolicy: \"static\" 1 cpuManagerReconcilePeriod: \"5s\" reservedSystemCPUs: \"0,1\" memoryManagerPolicy: \"Static\" 2 evictionHard: memory.available: \"100Mi\" kubeReserved: memory: \"512Mi\" reservedMemory: - numaNode: 0 limits: memory: \"1124Mi\" systemReserved: memory: \"512Mi\" topologyManagerPolicy: \"single-numa-node\" 3 topologyManagerScope: \"pod\"",
"oc create -f nro-kubeletconfig.yaml",
"apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.10\"",
"oc create -f nro-scheduler.yaml",
"oc get all -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE pod/numaresources-controller-manager-7575848485-bns4s 1/1 Running 0 13m pod/numaresourcesoperator-worker-dvj4n 2/2 Running 0 16m pod/numaresourcesoperator-worker-lcg4t 2/2 Running 0 16m pod/secondary-scheduler-56994cf6cf-7qf4q 1/1 Running 0 16m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/numaresourcesoperator-worker 2 2 2 2 2 node-role.kubernetes.io/worker= 16m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/numaresources-controller-manager 1/1 1 1 13m deployment.apps/secondary-scheduler 1/1 1 1 16m NAME DESIRED CURRENT READY AGE replicaset.apps/numaresources-controller-manager-7575848485 1 1 1 13m replicaset.apps/secondary-scheduler-56994cf6cf 1 1 1 16m",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"topo-aware-scheduler",
"apiVersion: apps/v1 kind: Deployment metadata: name: numa-deployment-1 namespace: openshift-numaresources spec: replicas: 1 selector: matchLabels: app: test template: metadata: labels: app: test spec: schedulerName: topo-aware-scheduler 1 containers: - name: ctnr image: quay.io/openshifttest/hello-openshift:openshift imagePullPolicy: IfNotPresent resources: limits: memory: \"100Mi\" cpu: \"10\" requests: memory: \"100Mi\" cpu: \"10\" - name: ctnr2 image: registry.access.redhat.com/rhel:latest imagePullPolicy: IfNotPresent command: [\"/bin/sh\", \"-c\"] args: [ \"while true; do sleep 1h; done;\" ] resources: limits: memory: \"100Mi\" cpu: \"8\" requests: memory: \"100Mi\" cpu: \"8\"",
"oc create -f nro-deployment.yaml",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numa-deployment-1-56954b7b46-pfgw8 2/2 Running 0 129m numaresources-controller-manager-7575848485-bns4s 1/1 Running 0 15h numaresourcesoperator-worker-dvj4n 2/2 Running 0 18h numaresourcesoperator-worker-lcg4t 2/2 Running 0 16h secondary-scheduler-56994cf6cf-7qf4q 1/1 Running 0 18h",
"oc describe pod numa-deployment-1-56954b7b46-pfgw8 -n openshift-numaresources",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 130m topo-aware-scheduler Successfully assigned openshift-numaresources/numa-deployment-1-56954b7b46-pfgw8 to compute-0.example.com",
"oc describe noderesourcetopologies.topology.node.k8s.io",
"Zones: Costs: Name: node-0 Value: 10 Name: node-1 Value: 21 Name: node-0 Resources: Allocatable: 39 Available: 21 1 Capacity: 40 Name: cpu Allocatable: 6442450944 Available: 6442450944 Capacity: 6442450944 Name: hugepages-1Gi Allocatable: 134217728 Available: 134217728 Capacity: 134217728 Name: hugepages-2Mi Allocatable: 262415904768 Available: 262206189568 Capacity: 270146007040 Name: memory Type: Node",
"oc get pod <pod_name> -n <pod_namespace> -o jsonpath=\"{ .status.qosClass }\"",
"Guaranteed",
"oc get crd | grep noderesourcetopologies",
"NAME CREATED AT noderesourcetopologies.topology.node.k8s.io 2022-01-18T08:28:06Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io numaresourcesscheduler -o json | jq '.status.schedulerName'",
"topo-aware-scheduler",
"oc get noderesourcetopologies.topology.node.k8s.io",
"NAME AGE compute-0.example.com 17h compute-1.example.com 17h",
"oc get noderesourcetopologies.topology.node.k8s.io -o yaml",
"apiVersion: v1 items: - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:38Z\" generation: 63760 name: worker-0 resourceVersion: \"8450223\" uid: 8b77be46-08c0-4074-927b-d49361471590 topologyPolicies: - SingleNUMANodeContainerLevel zones: - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262352048128\" available: \"262352048128\" capacity: \"270107316224\" name: memory - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269231067136\" available: \"269231067136\" capacity: \"270573244416\" name: memory - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi type: Node - apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: annotations: k8stopoawareschedwg/rte-update: periodic creationTimestamp: \"2022-06-16T08:55:37Z\" generation: 62061 name: worker-1 resourceVersion: \"8450129\" uid: e8659390-6f8d-4e67-9a51-1ea34bba1cc3 topologyPolicies: - SingleNUMANodeContainerLevel zones: 1 - costs: - name: node-0 value: 10 - name: node-1 value: 21 name: node-0 resources: 2 - allocatable: \"38\" available: \"38\" capacity: \"40\" name: cpu - allocatable: \"6442450944\" available: \"6442450944\" capacity: \"6442450944\" name: hugepages-1Gi - allocatable: \"134217728\" available: \"134217728\" capacity: \"134217728\" name: hugepages-2Mi - allocatable: \"262391033856\" available: \"262391033856\" capacity: \"270146301952\" name: memory type: Node - costs: - name: node-0 value: 21 - name: node-1 value: 10 name: node-1 resources: - allocatable: \"40\" available: \"40\" capacity: \"40\" name: cpu - allocatable: \"1073741824\" available: \"1073741824\" capacity: \"1073741824\" name: hugepages-1Gi - allocatable: \"268435456\" available: \"268435456\" capacity: \"268435456\" name: hugepages-2Mi - allocatable: \"269192085504\" available: \"269192085504\" capacity: \"270534262784\" name: memory type: Node kind: List metadata: resourceVersion: \"\" selfLink: \"\"",
"oc get NUMAResourcesScheduler",
"NAME AGE numaresourcesscheduler 90m",
"oc delete NUMAResourcesScheduler numaresourcesscheduler",
"numaresourcesscheduler.nodetopology.openshift.io \"numaresourcesscheduler\" deleted",
"apiVersion: nodetopology.openshift.io/v1alpha1 kind: NUMAResourcesScheduler metadata: name: numaresourcesscheduler spec: imageSpec: \"registry.redhat.io/openshift4/noderesourcetopology-scheduler-container-rhel8:v4.10\" logLevel: Debug",
"oc create -f nro-scheduler-debug.yaml",
"numaresourcesscheduler.nodetopology.openshift.io/numaresourcesscheduler created",
"oc get crd | grep numaresourcesschedulers",
"NAME CREATED AT numaresourcesschedulers.nodetopology.openshift.io 2022-02-25T11:57:03Z",
"oc get numaresourcesschedulers.nodetopology.openshift.io",
"NAME AGE numaresourcesscheduler 3h26m",
"oc get pods -n openshift-numaresources",
"NAME READY STATUS RESTARTS AGE numaresources-controller-manager-d87d79587-76mrm 1/1 Running 0 46h numaresourcesoperator-worker-5wm2k 2/2 Running 0 45h numaresourcesoperator-worker-pb75c 2/2 Running 0 45h secondary-scheduler-7976c4d466-qm4sc 1/1 Running 0 21m",
"oc logs secondary-scheduler-7976c4d466-qm4sc -n openshift-numaresources",
"I0223 11:04:55.614788 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 11 items received I0223 11:04:56.609114 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received I0223 11:05:22.626818 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received I0223 11:05:31.610356 1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 7 items received I0223 11:05:31.713032 1 eventhandlers.go:186] \"Add event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\" I0223 11:05:53.461016 1 eventhandlers.go:244] \"Delete event for scheduled pod\" pod=\"openshift-marketplace/certified-operators-thtvq\"",
"oc get numaresourcesoperators.nodetopology.openshift.io numaresourcesoperator -o jsonpath=\"{.status.daemonsets[0]}\"",
"{\"name\":\"numaresourcesoperator-worker\",\"namespace\":\"openshift-numaresources\"}",
"oc get ds -n openshift-numaresources numaresourcesoperator-worker -o jsonpath=\"{.spec.selector.matchLabels}\"",
"{\"name\":\"resource-topology\"}",
"oc get pods -n openshift-numaresources -l name=resource-topology -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE numaresourcesoperator-worker-5wm2k 2/2 Running 0 2d1h 10.135.0.64 compute-0.example.com numaresourcesoperator-worker-pb75c 2/2 Running 0 2d1h 10.132.2.33 compute-1.example.com",
"oc logs -n openshift-numaresources -c resource-topology-exporter numaresourcesoperator-worker-pb75c",
"I0221 13:38:18.334140 1 main.go:206] using sysinfo: reservedCpus: 0,1 reservedMemory: \"0\": 1178599424 I0221 13:38:18.334370 1 main.go:67] === System information === I0221 13:38:18.334381 1 sysinfo.go:231] cpus: reserved \"0-1\" I0221 13:38:18.334493 1 sysinfo.go:237] cpus: online \"0-103\" I0221 13:38:18.546750 1 main.go:72] cpus: allocatable \"2-103\" hugepages-1Gi: numa cell 0 -> 6 numa cell 1 -> 1 hugepages-2Mi: numa cell 0 -> 64 numa cell 1 -> 128 memory: numa cell 0 -> 45758Mi numa cell 1 -> 48372Mi",
"Info: couldn't find configuration in \"/etc/resource-topology-exporter/config.yaml\"",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"oc get kubeletconfig -o yaml",
"machineConfigPoolSelector: matchLabels: cnf-worker-tuning: enabled",
"oc get mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"oc edit mcp worker -o yaml",
"labels: machineconfiguration.openshift.io/mco-built-in: \"\" pools.operator.machineconfiguration.openshift.io/worker: \"\" cnf-worker-tuning: enabled",
"oc get configmap",
"NAME DATA AGE 0e2a6bd3.openshift-kni.io 0 6d21h kube-root-ca.crt 1 6d21h numaresourcesoperator-worker 1 5m openshift-service-ca.crt 1 6d21h topo-aware-scheduler-config 1 6d18h",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | prometheusK8s: retention: {{PROMETHEUS_RETENTION_PERIOD}} 1 nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 2 resources: requests: storage: {{PROMETHEUS_STORAGE_SIZE}} 3 alertmanagerMain: nodeSelector: node-role.kubernetes.io/infra: \"\" volumeClaimTemplate: spec: storageClassName: {{STORAGE_CLASS}} 4 resources: requests: storage: {{ALERTMANAGER_STORAGE_SIZE}} 5 metadata: name: cluster-monitoring-config namespace: openshift-monitoring",
"oc create -f cluster-monitoring-config.yaml",
"required pods per cluster / pods per node = total number of nodes needed",
"2200 / 500 = 4.4",
"2200 / 20 = 110",
"required pods per cluster / total number of nodes = expected pods per node",
"--- apiVersion: template.openshift.io/v1 kind: Template metadata: name: deployment-config-template creationTimestamp: annotations: description: This template will create a deploymentConfig with 1 replica, 4 env vars and a service. tags: '' objects: - apiVersion: apps.openshift.io/v1 kind: DeploymentConfig metadata: name: deploymentconfigUSD{IDENTIFIER} spec: template: metadata: labels: name: replicationcontrollerUSD{IDENTIFIER} spec: enableServiceLinks: false containers: - name: pauseUSD{IDENTIFIER} image: \"USD{IMAGE}\" ports: - containerPort: 8080 protocol: TCP env: - name: ENVVAR1_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR2_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR3_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" - name: ENVVAR4_USD{IDENTIFIER} value: \"USD{ENV_VALUE}\" resources: {} imagePullPolicy: IfNotPresent capabilities: {} securityContext: capabilities: {} privileged: false restartPolicy: Always serviceAccount: '' replicas: 1 selector: name: replicationcontrollerUSD{IDENTIFIER} triggers: - type: ConfigChange strategy: type: Rolling - apiVersion: v1 kind: Service metadata: name: serviceUSD{IDENTIFIER} spec: selector: name: replicationcontrollerUSD{IDENTIFIER} ports: - name: serviceportUSD{IDENTIFIER} protocol: TCP port: 80 targetPort: 8080 clusterIP: '' type: ClusterIP sessionAffinity: None status: loadBalancer: {} parameters: - name: IDENTIFIER description: Number to append to the name of resources value: '1' required: true - name: IMAGE description: Image to use for deploymentConfig value: gcr.io/google-containers/pause-amd64:3.0 required: false - name: ENV_VALUE description: Value to use for environment variables generate: expression from: \"[A-Za-z0-9]{255}\" required: false labels: template: deployment-config-template",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: <bare_metal_host_name> spec: online: true bmc: address: <bmc_address> credentialsName: <secret_credentials_name> 1 disableCertificateVerification: True 2 bootMACAddress: <host_boot_mac_address>",
"oc annotate machineset <machineset> -n openshift-machine-api 'metal3.io/autoscale-to-hosts=<any_value>'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached'",
"oc annotate machineset <machineset> -n openshift-machine-api 'baremetalhost.metal3.io/detached-'",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- labels: app: hugepages-example spec: containers: - securityContext: capabilities: add: [ \"IPC_LOCK\" ] image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage - mountPath: /etc/podinfo name: podinfo resources: limits: hugepages-1Gi: 2Gi memory: \"1Gi\" cpu: \"1\" requests: hugepages-1Gi: 2Gi env: - name: REQUESTS_HUGEPAGES_1GI <.> valueFrom: resourceFieldRef: containerName: example resource: requests.hugepages-1Gi volumes: - name: hugepage emptyDir: medium: HugePages - name: podinfo downwardAPI: items: - path: \"hugepages_1G_request\" <.> resourceFieldRef: containerName: example resource: requests.hugepages-1Gi divisor: 1Gi",
"oc create -f hugepages-volume-pod.yaml",
"oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- env | grep REQUESTS_HUGEPAGES_1GI",
"REQUESTS_HUGEPAGES_1GI=2147483648",
"oc exec -it USD(oc get pods -l app=hugepages-example -o jsonpath='{.items[0].metadata.name}') -- cat /etc/podinfo/hugepages_1G_request",
"2",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: thp-workers-profile namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom tuned profile for OpenShift to turn off THP on worker nodes include=openshift-node [vm] transparent_hugepages=never name: openshift-thp-never-worker recommend: - match: - label: node-role.kubernetes.io/worker priority: 25 profile: openshift-thp-never-worker",
"oc create -f thp-disable-tuned.yaml",
"oc get profile -n openshift-cluster-node-tuning-operator",
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"always madvise [never]",
"apiVersion: v1 kind: Namespace metadata: name: openshift-performance-addon-operator annotations: workload.openshift.io/allowed: management",
"oc create -f pao-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-performance-addon-operator namespace: openshift-performance-addon-operator",
"oc create -f pao-operatorgroup.yaml",
"oc get packagemanifest performance-addon-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}'",
"4.10",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-performance-addon-operator-subscription namespace: openshift-performance-addon-operator spec: channel: \"<channel>\" 1 name: performance-addon-operator source: redhat-operators 2 sourceNamespace: openshift-marketplace",
"oc create -f pao-sub.yaml",
"oc project openshift-performance-addon-operator",
"oc get csv -n openshift-performance-addon-operator",
"oc patch operatorgroup -n openshift-performance-addon-operator openshift-performance-addon-operator --type json -p '[{ \"op\": \"remove\", \"path\": \"/spec\" }]'",
"oc describe -n openshift-performance-addon-operator og openshift-performance-addon-operator",
"oc get csv",
"VERSION REPLACES PHASE 4.10.0 performance-addon-operator.v4.10.0 Installing 4.8.0 Replacing",
"oc get csv",
"NAME DISPLAY VERSION REPLACES PHASE performance-addon-operator.v4.10.0 Performance Addon Operator 4.10.0 performance-addon-operator.v4.8.0 Succeeded",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-rt labels: machineconfiguration.openshift.io/role: worker-rt spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-rt], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-rt: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: realTimeKernel: enabled: true nodeSelector: node-role.kubernetes.io/worker-rt: \"\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-rt",
"oc describe mcp/worker-rt",
"Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt",
"oc get node -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME rt-worker-0.example.com Ready worker,worker-rt 5d17h v1.23.0 128.66.135.107 <none> Red Hat Enterprise Linux CoreOS 46.82.202008252340-0 (Ootpa) 4.18.0-305.30.1.rt7.102.el8_4.x86_64 cri-o://1.23.0-99.rhaos4.10.gitc3131de.el8 [...]",
"apiVersion: v1 kind: Pod metadata: name: qos-demo namespace: qos-example spec: containers: - name: qos-demo-ctr image: <image-pull-spec> resources: limits: memory: \"200Mi\" cpu: \"1\" requests: memory: \"200Mi\" cpu: \"1\"",
"oc apply -f qos-pod.yaml --namespace=qos-example",
"oc get pod qos-demo --namespace=qos-example --output=yaml",
"spec: containers: status: qosClass: Guaranteed",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile status: runtimeClass: performance-manual",
"apiVersion: v1 kind: Pod metadata: annotations: cpu-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: v1 kind: Pod metadata: name: example spec: # nodeSelector: node-role.kubernetes.io/worker-rt: \"\"",
"apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: cpu-quota.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: globallyDisableIrqLoadBalancing: true",
"apiVersion: performance.openshift.io/v2 kind: Pod metadata: annotations: irq-load-balancing.crio.io: \"disable\" spec: runtimeClassName: performance-<profile_name>",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1",
"apiVersion: v1 kind: Pod metadata: name: dynamic-irq-pod annotations: irq-load-balancing.crio.io: \"disable\" cpu-quota.crio.io: \"disable\" spec: containers: - name: dynamic-irq-pod image: \"registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10\" command: [\"sleep\", \"10h\"] resources: requests: cpu: 2 memory: \"200M\" limits: cpu: 2 memory: \"200M\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" runtimeClassName: performance-dynamic-irq-profile",
"oc get pod -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES dynamic-irq-pod 1/1 Running 0 5h33m <ip-address> <node-name> <none> <none>",
"oc exec -it dynamic-irq-pod -- /bin/bash -c \"grep Cpus_allowed_list /proc/self/status | awk '{print USD2}'\"",
"Cpus_allowed_list: 2-3",
"oc debug node/<node-name>",
"Starting pod/<node-name>-debug To use host binaries, run `chroot /host` Pod IP: <ip-address> If you don't see a command prompt, try pressing enter. sh-4.4#",
"sh-4.4# chroot /host",
"sh-4.4#",
"cat /proc/irq/default_smp_affinity",
"33",
"find /proc/irq/ -name smp_affinity_list -exec sh -c 'i=\"USD1\"; mask=USD(cat USDi); file=USD(echo USDi); echo USDfile: USDmask' _ {} \\;",
"/proc/irq/0/smp_affinity_list: 0-5 /proc/irq/1/smp_affinity_list: 5 /proc/irq/2/smp_affinity_list: 0-5 /proc/irq/3/smp_affinity_list: 0-5 /proc/irq/4/smp_affinity_list: 0 /proc/irq/5/smp_affinity_list: 0-5 /proc/irq/6/smp_affinity_list: 0-5 /proc/irq/7/smp_affinity_list: 0-5 /proc/irq/8/smp_affinity_list: 4 /proc/irq/9/smp_affinity_list: 4 /proc/irq/10/smp_affinity_list: 0-5 /proc/irq/11/smp_affinity_list: 0 /proc/irq/12/smp_affinity_list: 1 /proc/irq/13/smp_affinity_list: 0-5 /proc/irq/14/smp_affinity_list: 1 /proc/irq/15/smp_affinity_list: 0 /proc/irq/24/smp_affinity_list: 1 /proc/irq/25/smp_affinity_list: 1 /proc/irq/26/smp_affinity_list: 1 /proc/irq/27/smp_affinity_list: 5 /proc/irq/28/smp_affinity_list: 1 /proc/irq/29/smp_affinity_list: 0 /proc/irq/30/smp_affinity_list: 0-5",
"cat /proc/irq/<irq-num>/effective_affinity",
"lscpu --all --extended",
"CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000",
"cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list",
"0-4",
"cpu: isolated: 0,4 reserved: 1-3,5-7",
"\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: \"4-15\" 1 reserved: \"0-3\" 2 hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 16 node: 0 realTimeKernel: enabled: true 3 numa: 4 topologyPolicy: \"best-effort\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" 5",
"hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1",
"oc debug node/ip-10-0-141-105.ec2.internal",
"grep -i huge /proc/meminfo",
"AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##",
"oc describe node worker-0.ocp4poc.example.com | grep -i huge",
"hugepages-1g=true hugepages-###: ### hugepages-###: ###",
"spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G",
"\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"",
"oc edit -f <your_profile_name>.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" - deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,54-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" - deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"",
"oc apply -f <your_profile_name>.yaml",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true",
"ethtool -l <device>",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4",
"ethtool -l <device>",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4",
"udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0",
"apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4",
"ethtool -l ens4",
"Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1",
"INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3",
"WARNING tuned.plugins.base: instance net_test: no matching devices available",
"Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h",
"oc describe mcp worker-cnf",
"Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync",
"oc describe performanceprofiles performance",
"Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded",
"--image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10.",
"oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10 2",
"tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 -e PERF_TEST_PROFILE=<performance_profile> registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus=\"[performance]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"hwlatdetect\"",
"running /usr/bin/validationsuite -ginkgo.v -ginkgo.focus=hwlatdetect I0210 17:08:38.607699 7 request.go:668] Waited for 1.047200253s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp.demo.lab:6443/apis/apps.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e validation ========================================== Random Seed: 1644512917 Will run 0 of 48 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS Ran 0 of 48 Specs in 0.001 seconds SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 48 Skipped PASS Discovery mode enabled, skipping setup running /usr/bin/cnftests -ginkgo.v -ginkgo.focus=hwlatdetect I0210 17:08:41.179269 40 request.go:668] Waited for 1.046001096s due to client-side throttling, not priority and fairness, request: GET:https://api.ocp.demo.lab:6443/apis/storage.k8s.io/v1beta1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1644512920 Will run 1 of 151 specs SSSSSSS ------------------------------ [performance] Latency Test with the hwlatdetect image should succeed /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:221 STEP: Waiting two minutes to download the latencyTest image STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase Feb 10 17:10:56.045: [INFO]: found mcd machine-config-daemon-dzpw7 for node ocp-worker-0.demo.lab Feb 10 17:10:56.259: [INFO]: found mcd machine-config-daemon-dzpw7 for node ocp-worker-0.demo.lab Feb 10 17:11:56.825: [ERROR]: timed out waiting for the condition • Failure [193.903 seconds] [performance] Latency Test /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:60 with the hwlatdetect image /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:213 should succeed [It] /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:221 Log file created at: 2022/02/10 17:08:45 Running on machine: hwlatdetect-cd8b6 Binary: Built with gc go1.16.6 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0210 17:08:45.716288 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-56fabc639a679b757ebae30e5f01b2ebd38e9fde9ecae91c41be41d3e89b37f8/vmlinuz-4.18.0-305.34.2.rt7.107.el8_4.x86_64 random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ignition.platform.id=qemu ostree=/ostree/boot.0/rhcos/56fabc639a679b757ebae30e5f01b2ebd38e9fde9ecae91c41be41d3e89b37f8/0 root=UUID=56731f4f-f558-46a3-85d3-d1b579683385 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=3-5 tuned.non_isolcpus=ffffffc7 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,3-5 systemd.cpu_affinity=0,1,2,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31 + + I0210 17:08:45.716782 1 node.go:44] Environment information: kernel version 4.18.0-305.34.2.rt7.107.el8_4.x86_64 I0210 17:08:45.716861 1 main.go:50] running the hwlatdetect command with arguments [/usr/bin/hwlatdetect --threshold 1 --hardlimit 1 --duration 10 --window 10000000us --width 950000us] F0210 17:08:56.815204 1 main.go:53] failed to run hwlatdetect command; out: hwlatdetect: test duration 10 seconds detector: tracer parameters: Latency threshold: 1us 1 Sample window: 10000000us Sample width: 950000us Non-sampling 
period: 9050000us Output File: None Starting test test finished Max Latency: 24us 2 Samples recorded: 1 Samples exceeding threshold: 1 ts: 1644512927.163556381, inner:20, outer:24 ; err: exit status 1 goroutine 1 [running]: k8s.io/klog.stacks(0xc000010001, 0xc00012e000, 0x25b, 0x2710) /remote-source/app/vendor/k8s.io/klog/klog.go:875 +0xb9 k8s.io/klog.(*loggingT).output(0x5bed00, 0xc000000003, 0xc0000121c0, 0x53ea81, 0x7, 0x35, 0x0) /remote-source/app/vendor/k8s.io/klog/klog.go:829 +0x1b0 k8s.io/klog.(*loggingT).printf(0x5bed00, 0x3, 0x5082da, 0x33, 0xc000113f58, 0x2, 0x2) /remote-source/app/vendor/k8s.io/klog/klog.go:707 +0x153 k8s.io/klog.Fatalf(...) /remote-source/app/vendor/k8s.io/klog/klog.go:1276 main.main() /remote-source/app/cnf-tests/pod-utils/hwlatdetect-runner/main.go:53 +0x897 goroutine 6 [chan receive]: k8s.io/klog.(*loggingT).flushDaemon(0x5bed00) /remote-source/app/vendor/k8s.io/klog/klog.go:1010 +0x8b created by k8s.io/klog.init.0 /remote-source/app/vendor/k8s.io/klog/klog.go:411 +0xd8 goroutine 7 [chan receive]: k8s.io/klog/v2.(*loggingT).flushDaemon(0x5bede0) /remote-source/app/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b created by k8s.io/klog/v2.init.0 /remote-source/app/vendor/k8s.io/klog/v2/klog.go:420 +0xdf Unexpected error: <*errors.errorString | 0xc000418ed0>: { s: \"timed out waiting for the condition\", } timed out waiting for the condition occurred /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:433 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS JUnit report was created: /junit.xml/cnftests-junit.xml Summarizing 1 Failure: [Fail] [performance] Latency Test with the hwlatdetect image [It] should succeed /remote-source/app/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:433 Ran 1 of 151 Specs in 222.254 seconds FAIL! -- 0 Passed | 1 Failed | 0 Pending | 150 Skipped --- FAIL: TestTest (222.45s) FAIL",
"hwlatdetect: test duration 3600 seconds detector: tracer parameters: Latency threshold: 10us Sample window: 1000000us Sample width: 950000us Non-sampling period: 50000us Output File: None Starting test test finished Max Latency: Below threshold Samples recorded: 0",
"hwlatdetect: test duration 3600 seconds detector: tracer parameters:Latency threshold: 10usSample window: 1000000us Sample width: 950000usNon-sampling period: 50000usOutput File: None Starting tests:1610542421.275784439, inner:78, outer:81 ts: 1610542444.330561619, inner:27, outer:28 ts: 1610542445.332549975, inner:39, outer:38 ts: 1610542541.568546097, inner:47, outer:32 ts: 1610542590.681548531, inner:13, outer:17 ts: 1610543033.818801482, inner:29, outer:30 ts: 1610543080.938801990, inner:90, outer:76 ts: 1610543129.065549639, inner:28, outer:39 ts: 1610543474.859552115, inner:28, outer:35 ts: 1610543523.973856571, inner:52, outer:49 ts: 1610543572.089799738, inner:27, outer:30 ts: 1610543573.091550771, inner:34, outer:28 ts: 1610543574.093555202, inner:116, outer:63",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=10 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"cyclictest\"",
"Discovery mode enabled, skipping setup running /usr/bin//cnftests -ginkgo.v -ginkgo.focus=cyclictest I0811 15:02:36.350033 20 request.go:668] Waited for 1.049965918s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/machineconfiguration.openshift.io/v1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1628694153 Will run 1 of 138 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [performance] Latency Test with the cyclictest image should succeed /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:200 STEP: Waiting two minutes to download the latencyTest image STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase Aug 11 15:03:06.826: [INFO]: found mcd machine-config-daemon-wf4w8 for node cnfdc8.clus2.t5g.lab.eng.bos.redhat.com • Failure [22.527 seconds] [performance] Latency Test /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:84 with the cyclictest image /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:188 should succeed [It] /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:200 The current latency 27 is bigger than the expected one 20 Expected <bool>: false to be true /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:219 Log file created at: 2021/08/11 15:02:51 Running on machine: cyclictest-knk7d Binary: Built with gc go1.16.6 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0811 15:02:51.092254 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/vmlinuz-4.18.0-305.10.2.rt7.83.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.1/rhcos/612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/0 ignition.platform.id=openstack root=UUID=5a4ddf16-9372-44d9-ac4e-3ee329e16ab3 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=1-3 tuned.non_isolcpus=000000ff,ffffffff,ffffffff,fffffff1 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,1-3 systemd.cpu_affinity=0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103 default_hugepagesz=1G hugepagesz=2M hugepages=128 nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0 I0811 15:02:51.092427 1 node.go:44] Environment information: kernel version 4.18.0-305.10.2.rt7.83.el8_4.x86_64 I0811 15:02:51.092450 1 main.go:48] running the cyclictest command with arguments [-D 600 -95 1 -t 10 -a 2,4,6,8,10,54,56,58,60,62 -h 30 -i 1000 --quiet] I0811 15:03:06.147253 1 main.go:54] succeeded 
to run the cyclictest command: # /dev/cpu_dma_latency set to 0us Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 005561 027778 037704 011987 000000 120755 238981 081847 300186 000002 587440 581106 564207 554323 577416 590635 474442 357940 513895 296033 000003 011751 011441 006449 006761 008409 007904 002893 002066 003349 003089 000004 000527 001079 000914 000712 001451 001120 000779 000283 000350 000251 More histogram entries Min Latencies: 00002 00001 00001 00001 00001 00002 00001 00001 00001 00001 Avg Latencies: 00002 00002 00002 00001 00002 00002 00001 00001 00001 00001 Max Latencies: 00018 00465 00361 00395 00208 00301 02052 00289 00327 00114 Histogram Overflows: 00000 00220 00159 00128 00202 00017 00069 00059 00045 00120 Histogram Overflow at cycle number: Thread 0: Thread 1: 01142 01439 05305 ... # 00190 others Thread 2: 20895 21351 30624 ... # 00129 others Thread 3: 01143 17921 18334 ... # 00098 others Thread 4: 30499 30622 31566 ... # 00172 others Thread 5: 145221 170910 171888 Thread 6: 01684 26291 30623 ...# 00039 others Thread 7: 28983 92112 167011 ... 00029 others Thread 8: 45766 56169 56171 ...# 00015 others Thread 9: 02974 08094 13214 ... # 00090 others",
"running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 579506 535967 418614 573648 532870 529897 489306 558076 582350 585188 583793 223781 532480 569130 472250 576043 More histogram entries Total: 000600000 000600000 000600000 000599999 000599999 000599999 000599998 000599998 000599998 000599997 000599997 000599996 000599996 000599995 000599995 000599995 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00005 00005 00004 00005 00004 00004 00005 00005 00006 00005 00004 00005 00004 00004 00005 00004 Histogram Overflows: 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 00000 Histogram Overflow at cycle number: Thread 0: Thread 1: Thread 2: Thread 3: Thread 4: Thread 5: Thread 6: Thread 7: Thread 8: Thread 9: Thread 10: Thread 11: Thread 12: Thread 13: Thread 14: Thread 15:",
"running cmd: cyclictest -q -D 10m -p 1 -t 16 -a 2,4,6,8,10,12,14,16,54,56,58,60,62,64,66,68 -h 30 -i 1000 -m Histogram 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000001 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000000 000002 564632 579686 354911 563036 492543 521983 515884 378266 592621 463547 482764 591976 590409 588145 589556 353518 More histogram entries Total: 000599999 000599999 000599999 000599997 000599997 000599998 000599998 000599997 000599997 000599996 000599995 000599996 000599995 000599995 000599995 000599993 Min Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Avg Latencies: 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 00002 Max Latencies: 00493 00387 00271 00619 00541 00513 00009 00389 00252 00215 00539 00498 00363 00204 00068 00520 Histogram Overflows: 00001 00001 00001 00002 00002 00001 00000 00001 00001 00001 00002 00001 00001 00001 00001 00002 Histogram Overflow at cycle number: Thread 0: 155922 Thread 1: 110064 Thread 2: 110064 Thread 3: 110063 155921 Thread 4: 110063 155921 Thread 5: 155920 Thread 6: Thread 7: 110062 Thread 8: 110062 Thread 9: 155919 Thread 10: 110061 155919 Thread 11: 155918 Thread 12: 155918 Thread 13: 110060 Thread 14: 110060 Thread 15: 110059 155917",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e LATENCY_TEST_RUN=true -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=worker-cnf -e LATENCY_TEST_CPUS=7 -e LATENCY_TEST_RUNTIME=600 -e MAXIMUM_LATENCY=20 registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.v -ginkgo.focus=\"oslat\"",
"running /usr/bin//validationsuite -ginkgo.v -ginkgo.focus=oslat I0829 12:36:55.386776 8 request.go:668] Waited for 1.000303471s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/authentication.k8s.io/v1?timeout=32s Running Suite: CNF Features e2e validation ========================================== Discovery mode enabled, skipping setup running /usr/bin//cnftests -ginkgo.v -ginkgo.focus=oslat I0829 12:37:01.219077 20 request.go:668] Waited for 1.050010755s due to client-side throttling, not priority and fairness, request: GET:https://api.cnfdc8.t5g.lab.eng.bos.redhat.com:6443/apis/snapshot.storage.k8s.io/v1beta1?timeout=32s Running Suite: CNF Features e2e integration tests ================================================= Random Seed: 1630240617 Will run 1 of 142 specs SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [performance] Latency Test with the oslat image should succeed /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:134 STEP: Waiting two minutes to download the latencyTest image STEP: Waiting another two minutes to give enough time for the cluster to move the pod to Succeeded phase Aug 29 12:37:59.324: [INFO]: found mcd machine-config-daemon-wf4w8 for node cnfdc8.clus2.t5g.lab.eng.bos.redhat.com • Failure [49.246 seconds] [performance] Latency Test /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:59 with the oslat image /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:112 should succeed [It] /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:134 The current latency 27 is bigger than the expected one 20 1 Expected <bool>: false to be true /go/src/github.com/openshift-kni/cnf-features-deploy/vendor/github.com/openshift-kni/performance-addon-operators/functests/4_latency/latency.go:168 Log file created at: 2021/08/29 13:25:21 Running on machine: oslat-57c2g Binary: Built with gc go1.16.6 for linux/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0829 13:25:21.569182 1 node.go:37] Environment information: /proc/cmdline: BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/vmlinuz-4.18.0-305.10.2.rt7.83.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.0/rhcos/612d89f4519a53ad0b1a132f4add78372661bfb3994f5fe115654971aa58a543/0 ignition.platform.id=openstack root=UUID=5a4ddf16-9372-44d9-ac4e-3ee329e16ab3 rw rootflags=prjquota skew_tick=1 nohz=on rcu_nocbs=1-3 tuned.non_isolcpus=000000ff,ffffffff,ffffffff,fffffff1 intel_pstate=disable nosoftlockup tsc=nowatchdog intel_iommu=on iommu=pt isolcpus=managed_irq,1-3 systemd.cpu_affinity=0,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103 default_hugepagesz=1G hugepagesz=2M hugepages=128 nmi_watchdog=0 audit=0 mce=off processor.max_cstate=1 idle=poll intel_idle.max_cstate=0 I0829 13:25:21.569345 1 
node.go:44] Environment information: kernel version 4.18.0-305.10.2.rt7.83.el8_4.x86_64 I0829 13:25:21.569367 1 main.go:53] Running the oslat command with arguments [--duration 600 --rtprio 1 --cpu-list 4,6,52,54,56,58 --cpu-main-thread 2] I0829 13:35:22.632263 1 main.go:59] Succeeded to run the oslat command: oslat V 2.00 Total runtime: 600 seconds Thread priority: SCHED_FIFO:1 CPU list: 4,6,52,54,56,58 CPU for main thread: 2 Workload: no Workload mem: 0 (KiB) Preheat cores: 6 Pre-heat for 1 seconds Test starts Test completed. Core: 4 6 52 54 56 58 CPU Freq: 2096 2096 2096 2096 2096 2096 (Mhz) 001 (us): 19390720316 19141129810 20265099129 20280959461 19391991159 19119877333 002 (us): 5304 5249 5777 5947 6829 4971 003 (us): 28 14 434 47 208 21 004 (us): 1388 853 123568 152817 5576 0 005 (us): 207850 223544 103827 91812 227236 231563 006 (us): 60770 122038 277581 323120 122633 122357 007 (us): 280023 223992 63016 25896 214194 218395 008 (us): 40604 25152 24368 4264 24440 25115 009 (us): 6858 3065 5815 810 3286 2116 010 (us): 1947 936 1452 151 474 361 Minimum: 1 1 1 1 1 1 (us) Average: 1.000 1.000 1.000 1.000 1.000 1.000 (us) Maximum: 37 38 49 28 28 19 (us) Max-Min: 36 37 48 27 27 18 (us) Duration: 599.667 599.667 599.667 599.667 599.667 599.667 (sec)",
"podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/reportdest:<report_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh --report <report_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -v USD(pwd)/junitdest:<junit_folder_path> -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh --junit <junit_folder_path> -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e ROLE_WORKER_CNF=master registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/mirror -registry <disconnected_registry> | oc image mirror -f -",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e IMAGE_REGISTRY=\"<disconnected_registry>\" -e CNF_TESTS_IMAGE=\"cnf-tests-rhel8:v4.10\" /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e IMAGE_REGISTRY=\"<custom_image_registry>\" -e CNF_TESTS_IMAGE=\"<custom_cnf-tests_image>\" registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/test-run.sh",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"REGISTRY=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"oc create ns cnftests",
"oc policy add-role-to-user system:image-puller system:serviceaccount:cnf-features-testing:default --namespace=cnftests",
"oc policy add-role-to-user system:image-puller system:serviceaccount:performance-addon-operators-testing:default --namespace=cnftests",
"SECRET=USD(oc -n cnftests get secret | grep builder-docker | awk {'print USD1'}",
"TOKEN=USD(oc -n cnftests get secret USDSECRET -o jsonpath=\"{.data['\\.dockercfg']}\" | base64 --decode | jq '.[\"image-registry.openshift-image-registry.svc:5000\"].auth')",
"echo \"{\\\"auths\\\": { \\\"USDREGISTRY\\\": { \\\"auth\\\": USDTOKEN } }}\" > dockerauth.json",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:4.10 /usr/bin/mirror -registry USDREGISTRY/cnftests | oc image mirror --insecure=true -a=USD(pwd)/dockerauth.json -f -",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig -e DISCOVERY_MODE=true -e IMAGE_REGISTRY=image-registry.openshift-image-registry.svc:5000/cnftests cnf-tests-local:latest /usr/bin/test-run.sh -ginkgo.focus=\"\\[performance\\]\\ Latency\\ Test\"",
"[ { \"registry\": \"public.registry.io:5000\", \"image\": \"imageforcnftests:4.10\" } ]",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 /usr/bin/mirror --registry \"my.local.registry:5000/\" --images \"/kubeconfig/images.json\" | oc image mirror -f -",
"podman run -v USD(pwd)/:/kubeconfig:Z -e KUBECONFIG=/kubeconfig/kubeconfig registry.redhat.io/openshift4/cnf-tests-rhel8:v4.10 get nodes",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-topology-aware-lifecycle-manager-subscription namespace: openshift-operators spec: channel: \"stable\" name: topology-aware-lifecycle-manager source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f talm-subscription.yaml",
"oc get csv -n openshift-operators",
"NAME DISPLAY VERSION REPLACES PHASE topology-aware-lifecycle-manager.4.10.0-202206301927 Topology Aware Lifecycle Manager 4.10.0-202206301927 Succeeded",
"oc get deploy -n openshift-operators",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE openshift-operators cluster-group-upgrades-controller-manager 1/1 1 1 14s",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: 1 - spoke1 enable: false managedPolicies: 2 - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: 3 canaries: 4 - spoke1 maxConcurrency: 1 5 timeout: 240 status: 6 conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-pao-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default placementBindings: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-pao-sub-policy placementRules: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-pao-sub-policy remediationPlan: - - spoke1",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: clusters: - spoke1 enable: true 1 managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 2 conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-pao-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default placementBindings: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-pao-sub-policy placementRules: - cgu-upgrade-complete-policy1-common-cluster-version-policy - cgu-upgrade-complete-policy2-common-pao-sub-policy remediationPlan: - - spoke1 status: currentBatch: 1 remediationPlanForBatch: 3 spoke1: 0",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-upgrade-complete namespace: default spec: actions: afterCompletion: deleteObjects: true 1 clusters: - spoke1 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: 2 conditions: - message: The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies reason: UpgradeCompleted status: \"True\" type: Ready managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default remediationPlan: - - spoke1 status: remediationPlanForBatch: spoke1: -2 3",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: 1 - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: 1 - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: 1 clusters: - spoke6 enable: false managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR is not enabled reason: UpgradeNotStarted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: {}",
"oc apply -f <name>.yaml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/<name> --type merge -p '{\"spec\":{\"enable\":true}}'",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-a namespace: default spec: blockingCRs: - name: cgu-c namespace: default clusters: - spoke1 - spoke2 - spoke3 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy remediationStrategy: canaries: - spoke1 maxConcurrency: 2 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-c]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default placementBindings: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy placementRules: - cgu-a-policy1-common-cluster-version-policy - cgu-a-policy2-common-pao-sub-policy - cgu-a-policy3-common-ptp-sub-policy remediationPlan: - - spoke1 - - spoke2 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-b namespace: default spec: blockingCRs: - name: cgu-a namespace: default clusters: - spoke4 - spoke5 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: 'The ClusterGroupUpgrade CR is blocked by other CRs that have not yet completed: [cgu-a]' 1 reason: UpgradeCannotStart status: \"False\" type: Ready copiedPolicies: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy2-common-pao-sub-policy namespace: default - name: policy3-common-ptp-sub-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy placementRules: - cgu-b-policy1-common-cluster-version-policy - cgu-b-policy2-common-pao-sub-policy - cgu-b-policy3-common-ptp-sub-policy - cgu-b-policy4-common-sriov-sub-policy remediationPlan: - - spoke4 - - spoke5 status: {}",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-c namespace: default spec: clusters: - spoke6 enable: true managedPolicies: - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy remediationStrategy: maxConcurrency: 1 timeout: 240 status: conditions: - message: The ClusterGroupUpgrade CR has upgrade policies that are still non compliant 1 reason: UpgradeNotCompleted status: \"False\" type: Ready copiedPolicies: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy managedPoliciesCompliantBeforeUpgrade: - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy1-common-cluster-version-policy namespace: default - name: policy4-common-sriov-sub-policy namespace: default placementBindings: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy placementRules: - cgu-c-policy1-common-cluster-version-policy - cgu-c-policy4-common-sriov-sub-policy remediationPlan: - - spoke6 status: currentBatch: 1 remediationPlanForBatch: spoke6: 0",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-1 namespace: default spec: managedPolicies: 1 - policy1-common-cluster-version-policy - policy2-common-pao-sub-policy - policy3-common-ptp-sub-policy - policy4-common-sriov-sub-policy enable: false clusters: 2 - spoke1 - spoke2 - spoke5 - spoke6 remediationStrategy: maxConcurrency: 2 3 timeout: 240 4",
"oc create -f cgu-1.yaml",
"oc get cgu --all-namespaces",
"NAMESPACE NAME AGE default cgu-1 8m55s",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", 1 \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-pao-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-pao-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"performance-addon-operator\\\",\\\"namespace\\\":\\\"openshift-performance-addon-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-pao-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-pao-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-pao-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-pao-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": {} }",
"oc get policies -A",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default cgu-policy1-common-cluster-version-policy enforce 17m 1 default cgu-policy2-common-pao-sub-policy enforce 17m default cgu-policy3-common-ptp-sub-policy enforce 17m default cgu-policy4-common-sriov-sub-policy enforce 17m default policy1-common-cluster-version-policy inform NonCompliant 15h default policy2-common-pao-sub-policy inform NonCompliant 15h default policy3-common-ptp-sub-policy inform NonCompliant 18m default policy4-common-sriov-sub-policy inform NonCompliant 18m",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-1 --patch '{\"spec\":{\"enable\":true}}' --type=merge",
"oc get cgu -n default cgu-1 -ojsonpath='{.status}' | jq",
"{ \"computedMaxConcurrency\": 2, \"conditions\": [ 1 { \"lastTransitionTime\": \"2022-02-25T15:34:07Z\", \"message\": \"The ClusterGroupUpgrade CR has upgrade policies that are still non compliant\", \"reason\": \"UpgradeNotCompleted\", \"status\": \"False\", \"type\": \"Ready\" } ], \"copiedPolicies\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-pao-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"managedPoliciesContent\": { \"policy1-common-cluster-version-policy\": \"null\", \"policy2-common-pao-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"performance-addon-operator\\\",\\\"namespace\\\":\\\"openshift-performance-addon-operator\\\"}]\", \"policy3-common-ptp-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"ptp-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-ptp\\\"}]\", \"policy4-common-sriov-sub-policy\": \"[{\\\"kind\\\":\\\"Subscription\\\",\\\"name\\\":\\\"sriov-network-operator-subscription\\\",\\\"namespace\\\":\\\"openshift-sriov-network-operator\\\"}]\" }, \"managedPoliciesForUpgrade\": [ { \"name\": \"policy1-common-cluster-version-policy\", \"namespace\": \"default\" }, { \"name\": \"policy2-common-pao-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy3-common-ptp-sub-policy\", \"namespace\": \"default\" }, { \"name\": \"policy4-common-sriov-sub-policy\", \"namespace\": \"default\" } ], \"managedPoliciesNs\": { \"policy1-common-cluster-version-policy\": \"default\", \"policy2-common-pao-sub-policy\": \"default\", \"policy3-common-ptp-sub-policy\": \"default\", \"policy4-common-sriov-sub-policy\": \"default\" }, \"placementBindings\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-pao-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"placementRules\": [ \"cgu-policy1-common-cluster-version-policy\", \"cgu-policy2-common-pao-sub-policy\", \"cgu-policy3-common-ptp-sub-policy\", \"cgu-policy4-common-sriov-sub-policy\" ], \"precaching\": { \"spec\": {} }, \"remediationPlan\": [ [ \"spoke1\", \"spoke2\" ], [ \"spoke5\", \"spoke6\" ] ], \"status\": { \"currentBatch\": 1, \"currentBatchStartedAt\": \"2022-02-25T15:54:16Z\", \"remediationPlanForBatch\": { \"spoke1\": 0, \"spoke2\": 1 }, \"startedAt\": \"2022-02-25T15:54:16Z\" } }",
"export KUBECONFIG=<cluster_kubeconfig_absolute_path>",
"oc get subs -A | grep -i <subscription_name>",
"NAMESPACE NAME PACKAGE SOURCE CHANNEL openshift-logging cluster-logging cluster-logging redhat-operators stable",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.5 True True 43s Working towards 4.9.7: 71 of 735 done (9% complete)",
"oc get subs -n <operator-namespace> <operator-subscription> -ojsonpath=\"{.status}\"",
"oc get installplan -n <subscription_namespace>",
"NAMESPACE NAME CSV APPROVAL APPROVED openshift-logging install-6khtw cluster-logging.5.3.3-4 Manual true 1",
"oc get csv -n <operator_namespace>",
"NAME DISPLAY VERSION REPLACES PHASE cluster-logging.5.4.2 Red Hat OpenShift Logging 5.4.2 Succeeded",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: du-upgrade-4918 namespace: ztp-group-du-sno spec: preCaching: true 1 clusters: - cnfdb1 - cnfdb2 enable: false managedPolicies: - du-upgrade-platform-upgrade remediationStrategy: maxConcurrency: 2 timeout: 240",
"oc apply -f clustergroupupgrades-group-du.yaml",
"oc get cgu -A",
"NAMESPACE NAME AGE ztp-group-du-sno du-upgrade-4918 10s 1",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"{ \"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is not completed (required)\", 1 \"reason\": \"PrecachingRequired\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:24Z\", \"message\": \"Precaching is required and not done\", \"reason\": \"PrecachingNotDone\", \"status\": \"False\", \"type\": \"PrecachingDone\" }, { \"lastTransitionTime\": \"2022-01-27T19:07:34Z\", \"message\": \"Pre-caching spec is valid and consistent\", \"reason\": \"PrecacheSpecIsWellFormed\", \"status\": \"True\", \"type\": \"PrecacheSpecValid\" } ], \"precaching\": { \"clusters\": [ \"cnfdb1\" 2 ], \"spec\": { \"platformImage\": \"image.example.io\"}, \"status\": { \"cnfdb1\": \"Active\"} } }",
"oc get jobs,pods -n openshift-talm-pre-cache",
"NAME COMPLETIONS DURATION AGE job.batch/pre-cache 0/1 3m10s 3m10s NAME READY STATUS RESTARTS AGE pod/pre-cache--1-9bmlr 1/1 Running 0 3m10s",
"oc get cgu -n ztp-group-du-sno du-upgrade-4918 -o jsonpath='{.status}'",
"\"conditions\": [ { \"lastTransitionTime\": \"2022-01-27T19:30:41Z\", \"message\": \"The ClusterGroupUpgrade CR has all clusters compliant with all the managed policies\", \"reason\": \"UpgradeCompleted\", \"status\": \"True\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-01-27T19:28:57Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" 1 }",
"oc delete cgu -n <ClusterGroupUpgradeCR_namespace> <ClusterGroupUpgradeCR_name>",
"oc apply -f <ClusterGroupUpgradeCR_YAML>",
"oc get cgu lab-upgrade -ojsonpath='{.spec.managedPolicies}'",
"[\"group-du-sno-validator-du-validator-policy\", \"policy2-common-pao-sub-policy\", \"policy3-common-ptp-sub-policy\"]",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-pao-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get policies --all-namespaces",
"NAMESPACE NAME REMEDIATION ACTION COMPLIANCE STATE AGE default policy1-common-cluster-version-policy inform NonCompliant 5d21h default policy2-common-pao-sub-policy inform Compliant 5d21h default policy3-common-ptp-sub-policy inform NonCompliant 5d21h default policy4-common-sriov-sub-policy inform NonCompliant 5d21h",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.example.com:6443 True Unknown 13d spoke1 true https://api.spoke1.example.com:6443 True True 13d spoke3 true https://api.spoke3.example.com:6443 True True 27h",
"oc get pod -n openshift-operators",
"NAME READY STATUS RESTARTS AGE cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp 2/2 Running 0 45m",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc get managedclusters",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://api.hub.testlab.com:6443 True Unknown 13d spoke1 true https://api.spoke1.testlab.com:6443 True True 13d 1 spoke3 true https://api.spoke3.testlab.com:6443 True True 27h 2",
"oc get managedcluster --selector=upgrade=true 1",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"spec: clusters: - spoke1 - spoke3 clusterSelector: - upgrade2=true remediationStrategy: canaries: - spoke3 maxConcurrency: 2 timeout: 240",
"oc get cgu lab-upgrade -ojsonpath='{.spec.clusters}'",
"[\"spoke1\", \"spoke3\"]",
"oc get managedcluster --selector=upgrade=true",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE spoke1 true https://api.spoke1.testlab.com:6443 True True 13d spoke3 true https://api.spoke3.testlab.com:6443 True True 27h",
"oc get jobs,pods -n openshift-talo-pre-cache",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy}'",
"{\"maxConcurrency\":2, \"timeout\":240}",
"oc get cgu lab-upgrade -ojsonpath='{.spec.remediationStrategy.maxConcurrency}'",
"2",
"oc get cgu lab-upgrade -ojsonpath='{.status.conditions}'",
"{\"lastTransitionTime\":\"2022-02-17T22:25:28Z\", \"message\":\"The ClusterGroupUpgrade CR has managed policies that are missing:[policyThatDoesntExist]\", \"reason\":\"UpgradeCannotStart\", \"status\":\"False\", \"type\":\"Ready\"}",
"oc get cgu lab-upgrade -oyaml",
"status: ... copiedPolicies: - lab-upgrade-policy3-common-ptp-sub-policy managedPoliciesForUpgrade: - name: policy3-common-ptp-sub-policy namespace: default",
"oc get cgu lab-upgrade -ojsonpath='{.status.remediationPlan}'",
"[[\"spoke2\", \"spoke3\"]]",
"oc logs -n openshift-operators cluster-group-upgrades-controller-manager-75bcc7484d-8k8xp -c manager",
"ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"lab-upgrade\", \"namespace\": \"default\", \"error\": \"Cluster spoke5555 is not a ManagedCluster\"} 1 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem",
"oc describe mcp/worker-rt",
"Name: worker-rt Namespace: Labels: machineconfiguration.openshift.io/role=worker-rt",
"oc label mcp <mcp_name> <mcp_name>=\"\"",
"oc adm must-gather --image=<PAO_image> --dest-dir=<dir>",
"oc adm must-gather --image=registry.redhat.io/openshift4/performance-addon-operator-must-gather-rhel8:v4.10 --dest-dir=must-gather",
"tar cvaf must-gather.tar.gz must-gather/",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"podman login registry.redhat.io",
"Username: <username> Password: <password>",
"podman run --entrypoint performance-profile-creator registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 -h",
"A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --info log --must-gather-dir-path /must-gather",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency > my-performance-profile.yaml",
"cat my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - intel_idle.max_cstate=0 - idle=poll cpu: isolated: 1,3,5,7,9,11,13,15,17,19-39,41,43,45,47,49,51,53,55,57,59-79 reserved: 0,2,4,6,8,10,12,14,16,18,40,42,44,46,48,50,52,54,56,58 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: single-numa-node realTimeKernel: enabled: true",
"oc apply -f my-performance-profile.yaml",
"podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=true --must-gather-dir-path /must-gather > my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 10-39,50-79 reserved: 0-9,40-49 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true",
"vi run-perf-profile-creator.sh",
"#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" PAO_IMG=\"registry.redhat.io/openshift4/performance-addon-rhel8-operator:v4.10\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Performance Addon Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" && USD{CMD} \"USD{PAO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{PAO_IMG}\" || USD{IMG_PULL_CMD} \"USD{PAO_IMG}\" || exit_error \"Performance Addon Operator image not found\" [ -n \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) PAO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{PAO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"",
"chmod a+x run-perf-profile-creator.sh",
"./run-perf-profile-creator.sh -h",
"Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Performance Addon Operator image 1 -t path to a must-gather tarball 2 A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled",
"./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --info=log",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-acd1358917e9f98cbdb599aea622d78b True False False 3 3 3 0 22h worker-cnf rendered-worker-cnf-1d871ac76e1951d32b2fe92369879826 False True False 2 1 1 0 22h",
"./run-perf-profile-creator.sh -t /must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=2 --rt-kernel=true > my-performance-profile.yaml",
"cat my-performance-profile.yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 1-39,41-79 reserved: 0,40 nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: false",
"oc apply -f my-performance-profile.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMSw1Mi01MyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root",
"[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" [crio.runtime.workloads.management.resources] cpushares = 0 cpuset = \"0-1, 52-53\" 1",
"{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }",
"export ISO_IMAGE_NAME=<iso_image_name> 1",
"export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1",
"export OCP_VERSION=<ocp_version> 1",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.10/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.10/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}",
"wget http://USD(hostname)/USD{ISO_IMAGE_NAME}",
"Saving to: rhcos-4.10.1-x86_64-live.x86_64.iso rhcos-4.10.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s",
"oc edit AgentServiceConfig",
"- cpuArchitecture: x86_64 openshiftVersion: \"4.10\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<mirror-registry>/<path>/rhcos-live.x86_64.iso",
"apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: assisted-installer labels: app: assisted-service data: ca-bundle.crt: <certificate> 1 registries.conf: | 2 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] location = <mirror_registry_url> 3 insecure = false mirror-by-digest-only = true",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: 'assisted-installer-mirror-config' osImages: - openshiftVersion: <ocp_version> rootfs: <rootfs_url> 1 url: <iso_url> 2",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json",
"oc apply -k out/argocd/deployment",
"podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v{product-version} extract /home/ztp --tar | tar x -C ./out",
"example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml",
"grep -r \"ztp-deploy-wave\" out/source-crs",
"apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson",
"export CLUSTERNS=example-sno",
"oc create namespace USDCLUSTERNS",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" 1 clusterImageSetNameRef: \"openshift-4.10\" 2 sshPublicKey: \"ssh-rsa AAAA...\" 3 clusters: - clusterName: \"<site_name>\" networkType: \"OVNKubernetes\" clusterLabels: 4 common: true group-du-sno: \"\" sites : \"<site_name>\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" 5 nodes: - hostName: \"example-node.example.com\" 6 role: \"master\" #biosConfigRef: # filePath: \"example-hw.profile\" 7 bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ 8 bmcCredentialsName: name: \"bmh-secret\" 9 bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" 10 rootDeviceHints: wwn: \"0x11111000000asd123\" cpuset: \"0-1,52-53\" nodeNetwork: 11 interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: 12 enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"export CLUSTER=<clusterName>",
"oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Completed\")]}' | jq",
"curl -sk USD(oc get agentclusterinstall -n USDCLUSTER USDCLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'",
"oc get AgentClusterInstall -n <cluster_name>",
"oc get managedcluster",
"oc describe -n openshift-gitops application clusters",
"Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/siteconfigs/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not create extra-manifest ranSite1.extra-manifest3 stat extra-manifest3: no such file or directory 2021/11/26 17:21:40 Error: could not build the entire SiteConfig defined by /tmp/kust-plugin-config-913473579: stat extra-manifest3: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-913473579; exit status 1: exit status 1 Type: ComparisonError",
"Status: Sync: Compared To: Destination: Namespace: clusters-sub Server: https://kubernetes.default.svc Source: Path: sites-config Repo URL: https://git.com/ran-sites/siteconfigs/.git Target Revision: master Status: Unknown",
"oc delete policy -n <namespace> <policy_name>",
"oc delete -k out/argocd/deployment",
"--- apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"common\" namespace: \"ztp-common\" spec: bindingRules: common: \"true\" 1 sourceFiles: 2 - fileName: SriovSubscription.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: SriovSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: SriovOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscription.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: PtpSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: PtpOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogNS.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogSubscription.yaml policyName: \"subscriptions-policy\" - fileName: ClusterLogOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: StorageNS.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: StorageSubscription.yaml policyName: \"subscriptions-policy\" - fileName: StorageOperatorStatus.yaml policyName: \"subscriptions-policy\" - fileName: ReduceMonitoringFootprint.yaml policyName: \"config-policy\" - fileName: OperatorHub.yaml 3 policyName: \"config-policy\" - fileName: DefaultCatsrc.yaml 4 policyName: \"config-policy\" 5 metadata: name: redhat-operators spec: displayName: disconnected-redhat-operators image: registry.example.com:5000/disconnected-redhat-operators/disconnected-redhat-operator-index:v4.9 - fileName: DisconnectedICSP.yaml policyName: \"config-policy\" spec: repositoryDigestMirrors: - mirrors: - registry.example.com:5000 source: registry.redhat.io",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno\" namespace: \"ztp-group\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" sourceFiles: - fileName: PtpConfigSlave.yaml policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" phc2sysOpts: \"-a -r -n 24\"",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: group-du-ptp-config-policy namespace: groups-sub annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: group-du-ptp-config-policy-config spec: remediationAction: inform severity: low namespaceselector: exclude: - kube-* include: - '*' object-templates: - complianceType: musthave objectDefinition: apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: recommend: - match: - nodeLabel: node-role.kubernetes.io/worker-du priority: 4 profile: slave profile: - interface: ens5f0 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 ..",
"export CLUSTER=<clusterName>",
"oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[-1:]}' | jq",
"{ \"lastTransitionTime\": \"2022-11-09T07:28:09Z\", \"message\": \"The ClusterGroupUpgrade CR has upgrade policies that are still non compliant\", \"reason\": \"UpgradeNotCompleted\", \"status\": \"False\", \"type\": \"Ready\" }",
"oc get policies -n USDCLUSTER",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 3h42m ztp-common.common-subscriptions-policy inform NonCompliant 3h42m ztp-group.group-du-sno-config-policy inform NonCompliant 3h42m ztp-group.group-du-sno-validator-du-policy inform NonCompliant 3h42m ztp-install.example1-common-config-policy-pjz9s enforce Compliant 167m ztp-install.example1-common-subscriptions-policy-zzd9k enforce NonCompliant 164m ztp-site.example1-config-policy inform NonCompliant 3h42m ztp-site.example1-perf-policy inform NonCompliant 3h42m",
"export NS=<namespace>",
"oc get policy -n USDNS",
"oc describe -n openshift-gitops application policies",
"Status: Conditions: Last Transition Time: 2021-11-26T17:21:39Z Message: rpc error: code = Unknown desc = `kustomize build /tmp/https___git.com/ran-sites/policies/ --enable-alpha-plugins` failed exit status 1: 2021/11/26 17:21:40 Error could not find test.yaml under source-crs/: no such file or directory Error: failure in plugin configured via /tmp/kust-plugin-config-52463179; exit status 1: exit status 1 Type: ComparisonError",
"Status: Sync: Compared To: Destination: Namespace: policies-sub Server: https://kubernetes.default.svc Source: Path: policies Repo URL: https://git.com/ran-sites/policies/.git Target Revision: master Status: Error",
"oc get policy -n USDCLUSTER",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE ztp-common.common-config-policy inform Compliant 13d ztp-common.common-subscriptions-policy inform Compliant 13d ztp-group.group-du-sno-config-policy inform Compliant 13d Ztp-group.group-du-sno-validator-du-policy inform Compliant 13d ztp-site.example-sno-config-policy inform Compliant 13d",
"oc get placementrule -n USDNS",
"oc get placementrule -n USDNS <placementRuleName> -o yaml",
"oc get ManagedCluster USDCLUSTER -o jsonpath='{.metadata.labels}' | jq",
"oc get policy -n USDCLUSTER",
"export CLUSTER=<clusterName>",
"oc get clustergroupupgrades -n ztp-install USDCLUSTER",
"oc get clustergroupupgrades -n ztp-install USDCLUSTER -o jsonpath='{.status.conditions[?(@.type==\"Ready\")]}'",
"oc delete clustergroupupgrades -n ztp-install USDCLUSTER",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10 extract /home/ztp --tar | tar x -C ./out",
"out └── argocd └── example ├── policygentemplates │ ├── common-ranGen.yaml │ ├── example-sno-site.yaml │ ├── group-du-sno-ranGen.yaml │ ├── group-du-sno-validator-ranGen.yaml │ ├── kustomization.yaml │ └── ns.yaml └── siteconfig ├── example-sno.yaml ├── KlusterletAddonConfigOverride.yaml └── kustomization.yaml",
"mkdir -p ./site-install",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"<site_name>\" namespace: \"<site_name>\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" 1 clusterImageSetNameRef: \"openshift-4.10\" 2 sshPublicKey: \"ssh-rsa AAAA...\" 3 clusters: - clusterName: \"<site_name>\" networkType: \"OVNKubernetes\" clusterLabels: 4 common: true group-du-sno: \"\" sites : \"<site_name>\" clusterNetwork: - cidr: 1001:1::/48 hostPrefix: 64 machineNetwork: - cidr: 1111:2222:3333:4444::/64 serviceNetwork: - 1001:2::/112 additionalNTPSources: - 1111:2222:3333:4444::2 #crTemplates: # KlusterletAddonConfig: \"KlusterletAddonConfigOverride.yaml\" 5 nodes: - hostName: \"example-node.example.com\" 6 role: \"master\" #biosConfigRef: # filePath: \"example-hw.profile\" 7 bmcAddress: idrac-virtualmedia://<out_of_band_ip>/<system_id>/ 8 bmcCredentialsName: name: \"bmh-secret\" 9 bootMACAddress: \"AA:BB:CC:DD:EE:11\" bootMode: \"UEFI\" 10 rootDeviceHints: wwn: \"0x11111000000asd123\" cpuset: \"0-1,52-53\" nodeNetwork: 11 interfaces: - name: eno1 macAddress: \"AA:BB:CC:DD:EE:11\" config: interfaces: - name: eno1 type: ethernet state: up ipv4: enabled: false ipv6: 12 enabled: true address: - ip: 1111:2222:3333:4444::aaaa:1 prefix-length: 64 dns-resolver: config: search: - example.com server: - 1111:2222:3333:4444::2 routes: config: - destination: ::/0 next-hop-interface: eno1 next-hop-address: 1111:2222:3333:4444::1 table-id: 254",
"podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-install:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10.1 generator install site-1-sno.yaml /output",
"site-install └── site-1-sno ├── site-1_agentclusterinstall_example-sno.yaml ├── site-1-sno_baremetalhost_example-node1.example.com.yaml ├── site-1-sno_clusterdeployment_example-sno.yaml ├── site-1-sno_configmap_example-sno.yaml ├── site-1-sno_infraenv_example-sno.yaml ├── site-1-sno_klusterletaddonconfig_example-sno.yaml ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml ├── site-1-sno_managedcluster_example-sno.yaml ├── site-1-sno_namespace_example-sno.yaml └── site-1-sno_nmstateconfig_example-node1.example.com.yaml",
"mkdir -p ./site-machineconfig",
"podman run -it --rm -v `pwd`/out/argocd/example/siteconfig:/resources:Z -v `pwd`/site-machineconfig:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10.1 generator install -E site-1-sno.yaml /output",
"site-machineconfig └── site-1-sno ├── site-1-sno_machineconfig_02-master-workload-partitioning.yaml ├── site-1-sno_machineconfig_predefined-extra-manifests-master.yaml └── site-1-sno_machineconfig_predefined-extra-manifests-worker.yaml",
"mkdir -p ./ref",
"podman run -it --rm -v `pwd`/out/argocd/example/policygentemplates:/resources:Z -v `pwd`/ref:/output:Z,U registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10.1 generator config -N . /output",
"ref └── customResource ├── common ├── example-multinode-site ├── example-sno ├── group-du-3node ├── group-du-3node-validator │ └── Multiple-validatorCRs ├── group-du-sno ├── group-du-sno-validator ├── group-du-standard └── group-du-standard-validator └── Multiple-validatorCRs",
"apiVersion: v1 kind: Secret metadata: name: example-sno-bmc-secret namespace: example-sno 1 data: 2 password: <base64_password> username: <base64_username> type: Opaque --- apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 3 data: .dockerconfigjson: <pull_secret> 4 type: kubernetes.io/dockerconfigjson",
"apiVersion: hive.openshift.io/v1 kind: ClusterImageSet metadata: name: openshift-4.10.0-rc.0 1 spec: releaseImage: quay.io/openshift-release-dev/ocp-release:4.10.0-x86_64 2",
"oc apply -f clusterImageSet-4.10.yaml",
"apiVersion: v1 kind: Namespace metadata: name: <cluster_name> 1 labels: name: <cluster_name> 2",
"oc apply -f cluster-namespace.yaml",
"oc apply -R ./site-install/site-sno-1",
"oc get managedcluster",
"oc get agent -n <cluster_name>",
"oc describe agent -n <cluster_name>",
"oc get agentclusterinstall -n <cluster_name>",
"oc describe agentclusterinstall -n <cluster_name>",
"oc get managedclusteraddon -n <cluster_name>",
"oc get secret -n <cluster_name> <cluster_name>-admin-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > <directory>/<cluster_name>-kubeconfig",
"oc get managedcluster",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE SNO-cluster true True True 2d19h",
"oc get clusterdeployment -n <cluster_name>",
"NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE Sno0026 agent-baremetal false Initialized 2d14h",
"oc describe agentclusterinstall -n <cluster_name> <cluster_name>",
"oc delete managedcluster <cluster_name>",
"oc delete namespace <cluster_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 02-master-workload-partitioning spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,W2NyaW8ucnVudGltZS53b3JrbG9hZHMubWFuYWdlbWVudF0KYWN0aXZhdGlvbl9hbm5vdGF0aW9uID0gInRhcmdldC53b3JrbG9hZC5vcGVuc2hpZnQuaW8vbWFuYWdlbWVudCIKYW5ub3RhdGlvbl9wcmVmaXggPSAicmVzb3VyY2VzLndvcmtsb2FkLm9wZW5zaGlmdC5pbyIKcmVzb3VyY2VzID0geyAiY3B1c2hhcmVzIiA9IDAsICJjcHVzZXQiID0gIjAtMSw1Mi01MyIgfQo= mode: 420 overwrite: true path: /etc/crio/crio.conf.d/01-workload-partitioning user: name: root - contents: source: data:text/plain;charset=utf-8;base64,ewogICJtYW5hZ2VtZW50IjogewogICAgImNwdXNldCI6ICIwLTEsNTItNTMiCiAgfQp9Cg== mode: 420 overwrite: true path: /etc/kubernetes/openshift-workload-pinning user: name: root",
"[crio.runtime.workloads.management] activation_annotation = \"target.workload.openshift.io/management\" annotation_prefix = \"resources.workload.openshift.io\" resources = { \"cpushares\" = 0, \"cpuset\" = \"0-1,52-53\" } 1",
"{ \"management\": { \"cpuset\": \"0-1,52-53\" 1 } }",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: container-mount-namespace-and-kubelet-conf-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKCmRlYnVnKCkgewogIGVjaG8gJEAgPiYyCn0KCnVzYWdlKCkgewogIGVjaG8gVXNhZ2U6ICQoYmFzZW5hbWUgJDApIFVOSVQgW2VudmZpbGUgW3Zhcm5hbWVdXQogIGVjaG8KICBlY2hvIEV4dHJhY3QgdGhlIGNvbnRlbnRzIG9mIHRoZSBmaXJzdCBFeGVjU3RhcnQgc3RhbnphIGZyb20gdGhlIGdpdmVuIHN5c3RlbWQgdW5pdCBhbmQgcmV0dXJuIGl0IHRvIHN0ZG91dAogIGVjaG8KICBlY2hvICJJZiAnZW52ZmlsZScgaXMgcHJvdmlkZWQsIHB1dCBpdCBpbiB0aGVyZSBpbnN0ZWFkLCBhcyBhbiBlbnZpcm9ubWVudCB2YXJpYWJsZSBuYW1lZCAndmFybmFtZSciCiAgZWNobyAiRGVmYXVsdCAndmFybmFtZScgaXMgRVhFQ1NUQVJUIGlmIG5vdCBzcGVjaWZpZWQiCiAgZXhpdCAxCn0KClVOSVQ9JDEKRU5WRklMRT0kMgpWQVJOQU1FPSQzCmlmIFtbIC16ICRVTklUIHx8ICRVTklUID09ICItLWhlbHAiIHx8ICRVTklUID09ICItaCIgXV07IHRoZW4KICB1c2FnZQpmaQpkZWJ1ZyAiRXh0cmFjdGluZyBFeGVjU3RhcnQgZnJvbSAkVU5JVCIKRklMRT0kKHN5c3RlbWN0bCBjYXQgJFVOSVQgfCBoZWFkIC1uIDEpCkZJTEU9JHtGSUxFI1wjIH0KaWYgW1sgISAtZiAkRklMRSBdXTsgdGhlbgogIGRlYnVnICJGYWlsZWQgdG8gZmluZCByb290IGZpbGUgZm9yIHVuaXQgJFVOSVQgKCRGSUxFKSIKICBleGl0CmZpCmRlYnVnICJTZXJ2aWNlIGRlZmluaXRpb24gaXMgaW4gJEZJTEUiCkVYRUNTVEFSVD0kKHNlZCAtbiAtZSAnL15FeGVjU3RhcnQ9LipcXCQvLC9bXlxcXSQvIHsgcy9eRXhlY1N0YXJ0PS8vOyBwIH0nIC1lICcvXkV4ZWNTdGFydD0uKlteXFxdJC8geyBzL15FeGVjU3RhcnQ9Ly87IHAgfScgJEZJTEUpCgppZiBbWyAkRU5WRklMRSBdXTsgdGhlbgogIFZBUk5BTUU9JHtWQVJOQU1FOi1FWEVDU1RBUlR9CiAgZWNobyAiJHtWQVJOQU1FfT0ke0VYRUNTVEFSVH0iID4gJEVOVkZJTEUKZWxzZQogIGVjaG8gJEVYRUNTVEFSVApmaQo= mode: 493 path: /usr/local/bin/extractExecStart - contents: source: data:text/plain;charset=utf-8;base64,IyEvYmluL2Jhc2gKbnNlbnRlciAtLW1vdW50PS9ydW4vY29udGFpbmVyLW1vdW50LW5hbWVzcGFjZS9tbnQgIiRAIgo= mode: 493 path: /usr/local/bin/nsenterCmns systemd: units: - contents: | [Unit] Description=Manages a mount namespace that both kubelet and crio can use to share their container-specific mounts [Service] Type=oneshot RemainAfterExit=yes RuntimeDirectory=container-mount-namespace Environment=RUNTIME_DIRECTORY=%t/container-mount-namespace Environment=BIND_POINT=%t/container-mount-namespace/mnt ExecStartPre=bash -c \"findmnt USD{RUNTIME_DIRECTORY} || mount --make-unbindable --bind USD{RUNTIME_DIRECTORY} USD{RUNTIME_DIRECTORY}\" ExecStartPre=touch USD{BIND_POINT} ExecStart=unshare --mount=USD{BIND_POINT} --propagation slave mount --make-rshared / ExecStop=umount -R USD{RUNTIME_DIRECTORY} enabled: true name: container-mount-namespace.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART}\" name: 90-container-mount-namespace.conf name: crio.service - dropins: - contents: | [Unit] Wants=container-mount-namespace.service After=container-mount-namespace.service [Service] ExecStartPre=/usr/local/bin/extractExecStart %n /%t/%N-execstart.env ORIG_EXECSTART EnvironmentFile=-/%t/%N-execstart.env ExecStart= ExecStart=bash -c \"nsenter --mount=%t/container-mount-namespace/mnt USD{ORIG_EXECSTART} --housekeeping-interval=30s\" name: 90-container-mount-namespace.conf - contents: | [Service] Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\" Environment=\"OPENSHIFT_EVICTION_MONITORING_PERIOD_DURATION=30s\" name: 
30-kubelet-interval-tuning.conf name: kubelet.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: load-sctp-module spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:, verification: {} filesystem: root mode: 420 path: /etc/modprobe.d/sctp-blacklist.conf - contents: source: data:text/plain;charset=utf-8,sctp filesystem: root mode: 420 path: /etc/modules-load.d/sctp-load.conf",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 04-accelerated-container-startup-master spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,#!/bin/bash
#
# Temporarily reset the core system processes's CPU affinity to be unrestricted to accelerate startup and shutdown
#
# The defaults below can be overridden via environment variables
#

# The default set of critical processes whose affinity should be temporarily unbound:
CRITICAL_PROCESSES=${CRITICAL_PROCESSES:-"systemd ovs crio kubelet NetworkManager conmon dbus"}

# Default wait time is 600s = 10m:
MAXIMUM_WAIT_TIME=${MAXIMUM_WAIT_TIME:-600}

# Default steady-state threshold = 2%
# Allowed values:
#  4  - absolute pod count (+/-)
#  4% - percent change (+/-)
#  -1 - disable the steady-state check
STEADY_STATE_THRESHOLD=${STEADY_STATE_THRESHOLD:-2%}

# Default steady-state window = 60s
# If the running pod count stays within the given threshold for this time
# period, return CPU utilization to normal before the maximum wait time has
# expired
STEADY_STATE_WINDOW=${STEADY_STATE_WINDOW:-60}

# Default steady-state allows any pod count to be "steady state"
# Increasing this will skip any steady-state checks until the count rises above
# this number to avoid false positives if there are some periods where the
# count doesn't increase but we know we can't be at steady-state yet.
STEADY_STATE_MINIMUM=${STEADY_STATE_MINIMUM:-0}

#######################################################

KUBELET_CPU_STATE=/var/lib/kubelet/cpu_manager_state
FULL_CPU_STATE=/sys/fs/cgroup/cpuset/cpuset.cpus
unrestrictedCpuset() {
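  # Returns the widest cpuset that critical processes may temporarily use:
  # kubelet's defaultCpuSet when its state file exists, otherwise all CPUs
  # in the root cpuset cgroup.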
  local cpus
  if [[ -e $KUBELET_CPU_STATE ]]; then
      cpus=$(jq -r '.defaultCpuSet' <$KUBELET_CPU_STATE)
  fi
  if [[ -z $cpus ]]; then
    # fall back to using all cpus if the kubelet state is not configured yet
    [[ -e $FULL_CPU_STATE ]] || return 1
    cpus=$(<$FULL_CPU_STATE)
  fi
  echo $cpus
}

restrictedCpuset() {
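  # Returns the reserved cpuset taken from the systemd.cpu_affinity=
  # kernel command-line argument; fails if it is not present.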
  for arg in $(</proc/cmdline); do
    if [[ $arg =~ ^systemd.cpu_affinity= ]]; then
      echo ${arg#*=}
      return 0
    fi
  done
  return 1
}

getCPUCount () {
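  # Converts a cpuset string such as "0-1,52-53" into a CPU count;
  # malformed input falls back to the 2-CPU minimum.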
  local cpuset="$1"
  local cpulist=()
  local cpus=0
  local mincpus=2

  if [[ -z $cpuset || $cpuset =~ [^0-9,-] ]]; then
    echo $mincpus
    return 1
  fi

  IFS=',' read -ra cpulist <<< $cpuset

  for elm in "${cpulist[@]}"; do
    if [[ $elm =~ ^[0-9]+$ ]]; then
      (( cpus++ ))
    elif [[ $elm =~ ^[0-9]+-[0-9]+$ ]]; then
      local low=0 high=0
      IFS='-' read low high <<< $elm
      (( cpus += high - low + 1 ))
    else
      echo $mincpus
      return 1
    fi
  done

  # Return a minimum of 2 cpus
  echo $(( cpus > $mincpus ? cpus : $mincpus ))
  return 0
}

resetOVSthreads () {
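  # Resizes the ovs-vswitchd handler/revalidator thread pools to match the
  # given CPU count, using the same calculation OVS applies at startup.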
  local cpucount="$1"
  local curRevalidators=0
  local curHandlers=0
  local desiredRevalidators=0
  local desiredHandlers=0
  local rc=0

  curRevalidators=$(ps -Teo pid,tid,comm,cmd | grep -e revalidator | grep -c ovs-vswitchd)
  curHandlers=$(ps -Teo pid,tid,comm,cmd | grep -e handler | grep -c ovs-vswitchd)

  # Calculate the desired number of threads the same way OVS does.
  # OVS sets these thread counts as a one-shot process on startup, so we
  # have to adjust up or down during the boot-up process. The desired outcome is
  # to not restrict the number of threads at startup until we reach a steady
  # state, at which point we reset them based on our restricted set
  # of cores.
  # See OVS function that calculates these thread counts:
  # https://github.com/openvswitch/ovs/blob/master/ofproto/ofproto-dpif-upcall.c#L635
  (( desiredRevalidators=$cpucount / 4 + 1 ))
  (( desiredHandlers=$cpucount - $desiredRevalidators ))


  if [[ $curRevalidators -ne $desiredRevalidators || $curHandlers -ne $desiredHandlers ]]; then

    logger "Recovery: Re-setting OVS revalidator threads: ${curRevalidators} -> ${desiredRevalidators}"
    logger "Recovery: Re-setting OVS handler threads: ${curHandlers} -> ${desiredHandlers}"

    ovs-vsctl set \
      Open_vSwitch . \
      other-config:n-handler-threads=${desiredHandlers} \
      other-config:n-revalidator-threads=${desiredRevalidators}
    rc=$?
  fi

  return $rc
}

resetAffinity() {
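  # Re-pins every thread of the critical processes to the given cpuset and
  # rebalances the OVS thread pools accordingly.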
  local cpuset="$1"
  local failcount=0
  local successcount=0
  logger "Recovery: Setting CPU affinity for critical processes \"$CRITICAL_PROCESSES\" to $cpuset"
  for proc in $CRITICAL_PROCESSES; do
    local pids="$(pgrep $proc)"
    for pid in $pids; do
      local tasksetOutput
      tasksetOutput="$(taskset -apc "$cpuset" $pid 2>&1)"
      if [[ $? -ne 0 ]]; then
        echo "ERROR: $tasksetOutput"
        ((failcount++))
      else
        ((successcount++))
      fi
    done
  done

  resetOVSthreads "$(getCPUCount ${cpuset})"
  if [[ $? -ne 0 ]]; then
    ((failcount++))
  else
    ((successcount++))
  fi

  logger "Recovery: Re-affined $successcount pids successfully"
  if [[ $failcount -gt 0 ]]; then
    logger "Recovery: Failed to re-affine $failcount processes"
    return 1
  fi
}

setUnrestricted() {
  logger "Recovery: Setting critical system processes to have unrestricted CPU access"
  resetAffinity "$(unrestrictedCpuset)"
}

setRestricted() {
  logger "Recovery: Resetting critical system processes back to normally restricted access"
  resetAffinity "$(restrictedCpuset)"
}

currentAffinity() {
  local pid="$1"
  taskset -pc $pid | awk -F': ' '{print $2}'
}

within() {
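  # Reports whether the change from $last to $current stays inside $threshold;
  # a trailing '%' makes the comparison percentage-based, otherwise absolute.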
  local last=$1 current=$2 threshold=$3
  local delta=0 pchange
  delta=$(( current - last ))
  if [[ $current -eq $last ]]; then
    pchange=0
  elif [[ $last -eq 0 ]]; then
    pchange=1000000
  else
    pchange=$(( ( $delta * 100) / last ))
  fi
  echo -n "last:$last current:$current delta:$delta pchange:${pchange}%: "
  local absolute limit
  case $threshold in
    *%)
      absolute=${pchange##-} # absolute value
      limit=${threshold%%%}
      ;;
    *)
      absolute=${delta##-} # absolute value
      limit=$threshold
      ;;
  esac
  if [[ $absolute -le $limit ]]; then
    echo "within (+/-)$threshold"
    return 0
  else
    echo "outside (+/-)$threshold"
    return 1
  fi
}

steadystate() {
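  # Steady-state is only evaluated once the count reaches STEADY_STATE_MINIMUM;
  # below that, startup is assumed to still be in progress.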
  local last=$1 current=$2
  if [[ $last -lt $STEADY_STATE_MINIMUM ]]; then
    echo "last:$last current:$current Waiting to reach $STEADY_STATE_MINIMUM before checking for steady-state"
    return 1
  fi
  within $last $current $STEADY_STATE_THRESHOLD
}

waitForReady() {
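  # Keeps the unrestricted affinity applied while polling the container count;
  # returns once the count is steady for STEADY_STATE_WINDOW seconds or
  # MAXIMUM_WAIT_TIME expires.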
  logger "Recovery: Waiting ${MAXIMUM_WAIT_TIME}s for the initialization to complete"
  local lastSystemdCpuset="$(currentAffinity 1)"
  local lastDesiredCpuset="$(unrestrictedCpuset)"
  local t=0 s=10
  local lastCcount=0 ccount=0 steadyStateTime=0
  while [[ $t -lt $MAXIMUM_WAIT_TIME ]]; do
    sleep $s
    ((t += s))
    # Re-check the current affinity of systemd, in case some other process has changed it
    local systemdCpuset="$(currentAffinity 1)"
    # Re-check the unrestricted Cpuset, as the allowed set of unreserved cores may change as pods are assigned to cores
    local desiredCpuset="$(unrestrictedCpuset)"
    if [[ $systemdCpuset != $lastSystemdCpuset || $lastDesiredCpuset != $desiredCpuset ]]; then
      resetAffinity "$desiredCpuset"
      lastSystemdCpuset="$(currentAffinity 1)"
      lastDesiredCpuset="$desiredCpuset"
    fi

    # Detect steady-state pod count
    ccount=$(crictl ps | wc -l)
    if steadystate $lastCcount $ccount; then
      ((steadyStateTime += s))
      echo "Steady-state for ${steadyStateTime}s/${STEADY_STATE_WINDOW}s"
      if [[ $steadyStateTime -ge $STEADY_STATE_WINDOW ]]; then
        logger "Recovery: Steady-state (+/- $STEADY_STATE_THRESHOLD) for ${STEADY_STATE_WINDOW}s: Done"
        return 0
      fi
    else
      if [[ $steadyStateTime -gt 0 ]]; then
        echo "Resetting steady-state timer"
        steadyStateTime=0
      fi
    fi
    lastCcount=$ccount
  done
  logger "Recovery: Recovery Complete Timeout"
}

main() {
  if ! unrestrictedCpuset >&/dev/null; then
    logger "Recovery: No unrestricted Cpuset could be detected"
    return 1
  fi

  if ! restrictedCpuset >&/dev/null; then
    logger "Recovery: No restricted Cpuset has been configured.  We are already running unrestricted."
    return 0
  fi

  # Ensure we reset the CPU affinity when we exit this script for any reason
  # This way either after the timer expires or after the process is interrupted
  # via ^C or SIGTERM, we return things back to the way they should be.
  trap setRestricted EXIT

  logger "Recovery: Recovery Mode Starting"
  setUnrestricted
  waitForReady
}

if [[ "${BASH_SOURCE[0]}" = "${0}" ]]; then
  main "${@}"
  exit $?
fi
 mode: 493 path: /usr/local/bin/accelerated-container-startup.sh systemd: units: - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container startup [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold = 2% # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=2%% # Steady-state window = 120s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=120 # Steady-state minimum = 40 # Increasing this will skip any steady-state checks until the count rises above # this number to avoid false positives if there are some periods where the # count doesn't increase but we know we can't be at steady-state yet. Environment=STEADY_STATE_MINIMUM=40 [Install] WantedBy=multi-user.target enabled: true name: accelerated-container-startup.service - contents: | [Unit] Description=Unlocks more CPUs for critical system processes during container shutdown DefaultDependencies=no [Service] Type=simple ExecStart=/usr/local/bin/accelerated-container-startup.sh # Maximum wait time is 600s = 10m: Environment=MAXIMUM_WAIT_TIME=600 # Steady-state threshold # Allowed values: # 4 - absolute pod count (+/-) # 4% - percent change (+/-) # -1 - disable the steady-state check # Note: '%' must be escaped as '%%' in systemd unit files Environment=STEADY_STATE_THRESHOLD=-1 # Steady-state window = 60s # If the running pod count stays within the given threshold for this time # period, return CPU utilization to normal before the maximum wait time has # expires Environment=STEADY_STATE_WINDOW=60 [Install] WantedBy=shutdown.target reboot.target halt.target enabled: true name: accelerated-container-shutdown.service",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 06-kdump-enable-master spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: kdump.service kernelArguments: - crashkernel=512M",
"apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-local-storage --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-local-storage namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-logging --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging spec: targetNamespaces: - openshift-logging --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management labels: openshift.io/cluster-monitoring: \"true\" name: openshift-ptp --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp --- apiVersion: v1 kind: Namespace metadata: annotations: workload.openshift.io/allowed: management name: openshift-sriov-network-operator --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - openshift-sriov-network-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging spec: channel: \"stable\" 1 name: cluster-logging source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual 2 --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"stable\" installPlanApproval: Automatic name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"stable\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"stable\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Manual",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging 1 metadata: name: instance namespace: openshift-logging spec: collection: logs: fluentd: {} type: fluentd curation: type: \"curator\" curator: schedule: \"30 3 * * *\" managementState: Managed --- apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder 2 metadata: name: instance namespace: openshift-logging spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test 3 pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile 1 spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - \"efi=runtime\" 2 cpu: isolated: 2-51,54-103 3 reserved: 0-1,52-53 4 hugepages: defaultHugepagesSize: 1G pages: - count: 32 5 size: 1G 6 node: 1 7 machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true 8",
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: du-ptp-slave namespace: openshift-ptp spec: profile: - interface: ens5f0 1 name: slave phc2sysOpts: -a -r -n 24 ptp4lConf: | [global] # # Default Data Set # twoStepFlag 1 slaveOnly 0 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison ieee1588 G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 1 unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval 0 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type OC network_transport L2 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 ptp4lOpts: -2 -s --summary_interval -4 recommend: - match: - nodeLabel: node-role.kubernetes.io/master priority: 4 profile: slave",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-51,54-103 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: node-role.kubernetes.io/master: \"\" disableDrain: true enableInjector: true enableOperatorWebhook: true --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-mh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_mh vlan: 150 1 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-mh namespace: openshift-sriov-network-operator spec: deviceType: vfio-pci 2 isRdma: false nicSelector: pfNames: - ens7f0 3 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 4 priority: 10 resourceName: du_mh --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-nw-du-fh namespace: openshift-sriov-network-operator spec: networkNamespace: openshift-sriov-network-operator resourceName: du_fh vlan: 140 5 --- apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriov-nnp-du-fh namespace: openshift-sriov-network-operator spec: deviceType: netdevice 6 isRdma: true nicSelector: pfNames: - ens5f0 7 nodeSelector: node-role.kubernetes.io/master: \"\" numVfs: 8 8 priority: 10 resourceName: du_fh",
"apiVersion: operator.openshift.io/v1 kind: Console metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"false\" include.release.openshift.io/self-managed-high-availability: \"false\" include.release.openshift.io/single-node-developer: \"false\" release.openshift.io/create-only: \"true\" name: cluster spec: logLevel: Normal managementState: Removed operatorLogLevel: Normal",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: disableNetworkDiagnostics: true",
"spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\"",
"spec: profile: - name: performance-patch # The 'include' line must match the associated PerformanceProfile name # And the cmdline_crash CPU set must match the 'isolated' set in the associated PerformanceProfile data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-51,54-103 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable",
"OCP_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}')",
"DTK_IMAGE=USD(oc adm release info --image-for=driver-toolkit quay.io/openshift-release-dev/ocp-release:USDOCP_VERSION-x86_64)",
"podman run --rm USDDTK_IMAGE rpm -qa | grep 'kernel-rt-core-' | sed 's#kernel-rt-core-##'",
"4.18.0-305.49.1.rt7.121.el8_4.x86_64",
"oc debug node/<node_name>",
"sh-4.4# uname -r",
"4.18.0-305.49.1.rt7.121.el8_4.x86_64",
"oc get operatorhub cluster -o yaml",
"spec: disableAllDefaultSources: true",
"oc get catalogsource -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.target\\.workload\\.openshift\\.io/management}{\"\\n\"}{end}'",
"certified-operators -- {\"effect\": \"PreferredDuringScheduling\"} community-operators -- {\"effect\": \"PreferredDuringScheduling\"} ran-operators 1 redhat-marketplace -- {\"effect\": \"PreferredDuringScheduling\"} redhat-operators -- {\"effect\": \"PreferredDuringScheduling\"}",
"oc get namespaces -A -o jsonpath='{range .items[*]}{.metadata.name}{\" -- \"}{.metadata.annotations.workload\\.openshift\\.io/allowed}{\"\\n\"}{end}'",
"default -- openshift-apiserver -- management openshift-apiserver-operator -- management openshift-authentication -- management openshift-authentication-operator -- management",
"oc get -n openshift-logging ClusterLogForwarder instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogForwarder metadata: creationTimestamp: \"2022-07-19T21:51:41Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"1030342\" uid: 8c1a842d-80c5-447a-9150-40350bdf40f0 spec: inputs: - infrastructure: {} name: infra-logs outputs: - name: kafka-open type: kafka url: tcp://10.46.55.190:9092/test pipelines: - inputRefs: - audit name: audit-logs outputRefs: - kafka-open - inputRefs: - infrastructure name: infrastructure-logs outputRefs: - kafka-open",
"oc get -n openshift-logging clusterloggings.logging.openshift.io instance -o yaml",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging metadata: creationTimestamp: \"2022-07-07T18:22:56Z\" generation: 1 name: instance namespace: openshift-logging resourceVersion: \"235796\" uid: ef67b9b8-0e65-4a10-88ff-ec06922ea796 spec: collection: logs: fluentd: {} type: fluentd curation: curator: schedule: 30 3 * * * type: curator managementState: Managed",
"oc get consoles.operator.openshift.io cluster -o jsonpath=\"{ .spec.managementState }\"",
"Removed",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# systemctl status chronyd",
"● chronyd.service - NTP client/server Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:chronyd(8) man:chrony.conf(5)",
"PTP_POD_NAME=USD(oc get pods -n openshift-ptp -l app=linuxptp-daemon -o name)",
"oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'",
"sending: GET PORT_DATA_SET 3cecef.fffe.7a7020-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2 3cecef.fffe.7a7020-2 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 3cecef.fffe.7a7020-2 portState LISTENING logMinDelayReqInterval 0 peerMeanPathDelay 0 logAnnounceInterval 1 announceReceiptTimeout 3 logSyncInterval 0 delayMechanism 1 logMinPdelayReqInterval 0 versionNumber 2",
"oc -n openshift-ptp rsh -c linuxptp-daemon-container USD{PTP_POD_NAME} pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'",
"sending: GET TIME_STATUS_NP 3cecef.fffe.7a7020-0 seq 0 RESPONSE MANAGEMENT TIME_STATUS_NP master_offset 10 1 ingress_time 1657275432697400530 cumulativeScaledRateOffset +0.000000000 scaledLastGmPhaseChange 0 gmTimeBaseIndicator 0 lastGmPhaseChange 0x0000'0000000000000000.0000 gmPresent true 2 gmIdentity 3c2c30.ffff.670e00",
"oc logs USDPTP_POD_NAME -n openshift-ptp -c linuxptp-daemon-container",
"phc2sys[56020.341]: [ptp4l.1.config] CLOCK_REALTIME phc offset -1731092 s2 freq -1546242 delay 497 ptp4l[56020.390]: [ptp4l.1.config] master offset -2 s2 freq -5863 path delay 541 ptp4l[56020.390]: [ptp4l.0.config] master offset -8 s2 freq -10699 path delay 533",
"oc get sriovoperatorconfig -n openshift-sriov-network-operator default -o jsonpath=\"{.spec.disableDrain}{'\\n'}\"",
"true",
"oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o jsonpath=\"{.items[*].status.syncStatus}{'\\n'}\"",
"Succeeded",
"oc get SriovNetworkNodeStates -n openshift-sriov-network-operator -o yaml",
"apiVersion: v1 items: - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState status: interfaces: - Vfs: - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.0 vendor: \"8086\" vfID: 0 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.1 vendor: \"8086\" vfID: 1 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.2 vendor: \"8086\" vfID: 2 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.3 vendor: \"8086\" vfID: 3 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.4 vendor: \"8086\" vfID: 4 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.5 vendor: \"8086\" vfID: 5 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.6 vendor: \"8086\" vfID: 6 - deviceID: 154c driver: vfio-pci pciAddress: 0000:3b:0a.7 vendor: \"8086\" vfID: 7",
"oc get PerformanceProfile openshift-node-performance-profile -o yaml",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: creationTimestamp: \"2022-07-19T21:51:31Z\" finalizers: - foreground-deletion generation: 1 name: openshift-node-performance-profile resourceVersion: \"33558\" uid: 217958c0-9122-4c62-9d4d-fdc27c31118c spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 - efi=runtime cpu: isolated: 2-51,54-103 reserved: 0-1,52-53 hugepages: defaultHugepagesSize: 1G pages: - count: 32 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true status: conditions: - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Available - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"True\" type: Upgradeable - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Progressing - lastHeartbeatTime: \"2022-07-19T21:51:31Z\" lastTransitionTime: \"2022-07-19T21:51:31Z\" status: \"False\" type: Degraded runtimeClass: performance-openshift-node-performance-profile tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-openshift-node-performance-profile",
"oc get performanceprofile openshift-node-performance-profile -o jsonpath=\"{range .status.conditions[*]}{ @.type }{' -- '}{@.status}{'\\n'}{end}\"",
"Available -- True Upgradeable -- True Progressing -- False Degraded -- False",
"oc get tuneds.tuned.openshift.io -n openshift-cluster-node-tuning-operator performance-patch -o yaml",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: creationTimestamp: \"2022-07-18T10:33:52Z\" generation: 1 name: performance-patch namespace: openshift-cluster-node-tuning-operator resourceVersion: \"34024\" uid: f9799811-f744-4179-bf00-32d4436c08fd spec: profile: - data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-openshift-node-performance-profile [bootloader] cmdline_crash=nohz_full=2-23,26-47 1 [sysctl] kernel.timer_migration=1 [scheduler] group.ice-ptp=0:f:10:*:ice-ptp.* [service] service.stalld=start,enable service.chronyd=stop,disable name: performance-patch recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: master priority: 19 profile: performance-patch",
"oc get networks.operator.openshift.io cluster -o jsonpath='{.spec.disableNetworkDiagnostics}'",
"true",
"oc describe machineconfig container-mount-namespace-and-kubelet-conf-master | grep OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION",
"Environment=\"OPENSHIFT_MAX_HOUSEKEEPING_INTERVAL_DURATION=60s\"",
"oc get configmap cluster-monitoring-config -n openshift-monitoring -o jsonpath=\"{ .data.config\\.yaml }\"",
"grafana: enabled: false alertmanagerMain: enabled: false prometheusK8s: retention: 24h",
"oc get route -n openshift-monitoring alertmanager-main",
"oc get route -n openshift-monitoring grafana",
"oc get performanceprofile -o jsonpath=\"{ .items[0].spec.cpu.reserved }\"",
"0-1,52-53",
"siteconfig ├── site1-sno-du.yaml ├── site2-standard-du.yaml └── extra-manifest └── 01-example-machine-config.yaml",
"clusters: - clusterName: \"example-sno\" networkType: \"OVNKubernetes\" extraManifestPath: extra-manifest",
"apiVersion: ran.openshift.io/v1 kind: SiteConfig metadata: name: \"site1-sno-du\" namespace: \"site1-sno-du\" spec: baseDomain: \"example.com\" pullSecretRef: name: \"assisted-deployment-pull-secret\" clusterImageSetNameRef: \"openshift-4.10\" sshPublicKey: \"<ssh_public_key>\" clusters: - clusterName: \"site1-sno-du\" extraManifests: filter: exclude: - 03-sctp-machine-config-worker.yaml",
"- clusterName: \"site1-sno-du\" extraManifests: filter: inclusionDefault: exclude",
"clusters: - clusterName: \"site1-sno-du\" extraManifestPath: \"<custom_manifest_folder>\" 1 extraManifests: filter: inclusionDefault: exclude 2 include: - custom-sctp-machine-config-worker.yaml",
"siteconfig ├── site1-sno-du.yaml └── user-custom-manifest └── custom-sctp-machine-config-worker.yaml",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v{product-version}.1 extract /home/ztp --tar | tar x -C ./out",
"apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: USDname annotations: ran.openshift.io/ztp-deploy-wave: \"10\" spec: additionalKernelArgs: - \"idle=poll\" - \"rcupdate.rcu_normal_after_boot=0\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" realTimeKernel: enabled: true",
"- fileName: PerformanceProfile.yaml policyName: \"config-policy\" metadata: name: openshift-node-performance-profile spec: cpu: # These must be tailored for the specific hardware platform isolated: \"2-19,22-39\" reserved: \"0-1,20-21\" hugepages: defaultHugepagesSize: 1G pages: - size: 1G count: 10 globallyDisableIrqLoadBalancing: false",
"--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - idle=poll - rcupdate.rcu_normal_after_boot=0 cpu: isolated: 2-19,22-39 reserved: 0-1,20-21 globallyDisableIrqLoadBalancing: false hugepages: defaultHugepagesSize: 1G pages: - count: 10 size: 1G machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/master: \"\" net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/master: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true",
"spec: bindingRules: group-du-standard: \"\" mcp: \"worker\"",
"ztp-update/ ├── example-cr1.yaml ├── example-cr2.yaml └── ztp-update.in",
"FROM registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.10 ADD example-cr2.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/ ADD example-cr1.yaml /kustomize/plugin/ran.openshift.io/v1/policygentemplate/source-crs/",
"podman build -t ztp-site-generate-rhel8-custom:v4.10-custom-1",
"podman push localhost/ztp-site-generate-rhel8-custom:v4.10-custom-1 registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.10-custom-1",
"oc patch -n openshift-gitops argocd openshift-gitops --type=json -p '[{\"op\": \"replace\", \"path\":\"/spec/repo/initContainers/0/image\", \"value\": \"registry.example.com:5000/ztp-site-generate-rhel8-custom:v4.10-custom-1\"} ]'",
"oc get pods -n openshift-gitops | grep openshift-gitops-repo-server",
"openshift-gitops-server-7df86f9774-db682 1/1 Running 1 28s",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"group-du-sno-validator\" 1 namespace: \"ztp-group\" 2 spec: bindingRules: group-du-sno: \"\" 3 bindingExcludedRules: ztp-done: \"\" 4 mcp: \"master\" 5 sourceFiles: - fileName: validatorCRs/informDuValidator.yaml remediationAction: inform 6 policyName: \"du-policy\" 7",
"#AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: PtpOperatorConfigForEvent.yaml policyName: \"config-policy\"",
"- fileName: PtpConfigSlave.yaml 1 policyName: \"config-policy\" metadata: name: \"du-ptp-slave\" spec: profile: - name: \"slave\" interface: \"ens5f1\" 2 ptp4lOpts: \"-2 -s --summary_interval -4\" 3 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 4 ptpClockThreshold: 5 holdOverTimeout: 30 #secs maxOffsetThreshold: 100 #nano secs minOffsetThreshold: -100 #nano secs",
"- fileName: AmqInstance.yaml policyName: \"config-policy\"",
"AMQ interconnect operator for fast events - fileName: AmqSubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: AmqSubscription.yaml policyName: \"subscriptions-policy\" Bare Metal Event Rely operator - fileName: BareMetalEventRelaySubscriptionNS.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" - fileName: BareMetalEventRelaySubscription.yaml policyName: \"subscriptions-policy\"",
"- fileName: AmqInstance.yaml policyName: \"config-policy\"",
"- fileName: HardwareEvent.yaml policyName: \"config-policy\" spec: nodeSelector: {} transportHost: \"amqp://<amq_interconnect_name>.<amq_interconnect_namespace>.svc.cluster.local\" 1 logLevel: \"info\"",
"oc -n openshift-bare-metal-events create secret generic redfish-basic-auth --from-literal=username=<bmc_username> --from-literal=password=<bmc_password> --from-literal=hostaddr=\"<bmc_host_ip_addr>\"",
"imageContentSources: - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - mirror-ocp-registry.ibmcloud.io.cpak:5000/openshift-release-dev/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"OCP_RELEASE_NUMBER=<release_version>",
"ARCHITECTURE=<server_architecture>",
"DIGEST=\"USD(oc adm release info quay.io/openshift-release-dev/ocp-release:USD{OCP_RELEASE_NUMBER}-USD{ARCHITECTURE} | sed -n 's/Pull From: .*@//p')\"",
"DIGEST_ALGO=\"USD{DIGEST%%:*}\"",
"DIGEST_ENCODED=\"USD{DIGEST#*:}\"",
"SIGNATURE_BASE64=USD(curl -s \"https://mirror.openshift.com/pub/openshift-v4/signatures/openshift/release/USD{DIGEST_ALGO}=USD{DIGEST_ENCODED}/signature-1\" | base64 -w0 && echo)",
"cat >checksum-USD{OCP_RELEASE_NUMBER}.yaml <<EOF USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} EOF",
"curl -s https://api.openshift.com/api/upgrades_info/v1/graph?channel=stable-4.10 -o ~/upgrade-graph_stable-4.10",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: ImageSignature.yaml 1 policyName: \"platform-upgrade-prep\" binaryData: USD{DIGEST_ALGO}-USD{DIGEST_ENCODED}: USD{SIGNATURE_BASE64} 2 - fileName: DisconnectedICSP.yaml policyName: \"platform-upgrade-prep\" metadata: name: disconnected-internal-icsp-for-ocp spec: repositoryDigestMirrors: 3 - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-release - mirrors: - quay-intern.example.com/ocp4/openshift-release-dev source: quay.io/openshift-release-dev/ocp-v4.0-art-dev - fileName: ClusterVersion.yaml 4 policyName: \"platform-upgrade-prep\" metadata: name: version annotations: ran.openshift.io/ztp-deploy-wave: \"1\" spec: channel: \"stable-4.10\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.10 - fileName: ClusterVersion.yaml 5 policyName: \"platform-upgrade\" metadata: name: version spec: channel: \"stable-4.10\" upstream: http://upgrade.example.com/images/upgrade-graph_stable-4.10 desiredUpdate: version: 4.10.4 status: history: - version: 4.10.4 state: \"Completed\"",
"oc get policies -A | grep platform-upgrade",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: true",
"oc apply -f cgu-platform-upgrade-prep.yml",
"oc get policies --all-namespaces",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false",
"oc apply -f cgu-platform-upgrade.yml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge",
"oc get cgu cgu-platform-upgrade -o jsonpath='{.status.precaching.status}'",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-platform-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge",
"oc get policies --all-namespaces",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"operator-catsrc-policy\" metadata: name: redhat-operators spec: displayName: Red Hat Operators Catalog image: registry.example.com:5000/olm/redhat-operators:v4.10 1 updateStrategy: 2 registryPoll: interval: 1h",
"apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: \"du-upgrade\" namespace: \"ztp-group-du-sno\" spec: bindingRules: group-du-sno: \"\" mcp: \"master\" remediationAction: inform sourceFiles: ... - fileName: DefaultCatsrc.yaml remediationAction: inform policyName: \"fec-catsrc-policy\" metadata: name: certified-operators spec: displayName: Intel SRIOV-FEC Operator image: registry.example.com:5000/olm/far-edge-sriov-fec:v4.10 updateStrategy: registryPoll: interval: 10m - fileName: AcceleratorsSubscription.yaml policyName: \"subscriptions-fec-policy\" spec: channel: \"stable\" source: certified-operators",
"oc get policies -A | grep -E \"catsrc-policy|subscription\"",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade-prep namespace: default spec: clusters: - spoke1 enable: true managedPolicies: - du-upgrade-operator-catsrc-policy remediationStrategy: maxConcurrency: 1",
"oc apply -f cgu-operator-upgrade-prep.yml",
"oc get policies -A | grep -E \"catsrc-policy\"",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-operator-upgrade namespace: default spec: managedPolicies: - du-upgrade-operator-catsrc-policy 1 - common-subscriptions-policy 2 preCaching: false clusters: - spoke1 remediationStrategy: maxConcurrency: 1 enable: false",
"oc apply -f cgu-operator-upgrade.yml",
"oc get policy common-subscriptions-policy -n <policy_namespace>",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE common-subscriptions-policy inform NonCompliant 27d",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge",
"oc get cgu cgu-operator-upgrade -o jsonpath='{.status.precaching.status}'",
"oc get cgu -n default cgu-operator-upgrade -ojsonpath='{.status.conditions}' | jq",
"[ { \"lastTransitionTime\": \"2022-03-08T20:49:08.000Z\", \"message\": \"The ClusterGroupUpgrade CR is not enabled\", \"reason\": \"UpgradeNotStarted\", \"status\": \"False\", \"type\": \"Ready\" }, { \"lastTransitionTime\": \"2022-03-08T20:55:30.000Z\", \"message\": \"Precaching is completed\", \"reason\": \"PrecachingCompleted\", \"status\": \"True\", \"type\": \"PrecachingDone\" } ]",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-operator-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge",
"oc get policies --all-namespaces",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-platform-operator-upgrade-prep namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade-prep - du-upgrade-operator-catsrc-policy clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 10 enable: true",
"oc apply -f cgu-platform-operator-upgrade-prep.yml",
"oc get policies --all-namespaces",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: cgu-du-upgrade namespace: default spec: managedPolicies: - du-upgrade-platform-upgrade 1 - du-upgrade-operator-catsrc-policy 2 - common-subscriptions-policy 3 preCaching: true clusterSelector: - group-du-sno remediationStrategy: maxConcurrency: 1 enable: false",
"oc apply -f cgu-platform-operator-upgrade.yml",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"preCaching\": true}}' --type=merge",
"oc get jobs,pods -n openshift-talm-pre-cache",
"oc get cgu cgu-du-upgrade -ojsonpath='{.status.conditions}'",
"oc --namespace=default patch clustergroupupgrade.ran.openshift.io/cgu-du-upgrade --patch '{\"spec\":{\"enable\":true, \"preCaching\": false}}' --type=merge",
"oc get policies --all-namespaces",
"- fileName: PaoSubscriptionNS.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscriptionOperGroup.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave - fileName: PaoSubscription.yaml policyName: \"subscriptions-policy\" complianceType: mustnothave",
"oc get policy -n ztp-common common-subscriptions-policy",
"apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: generation: 1 name: spoke1 namespace: ztp-install ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: ManagedCluster name: spoke1 uid: 98fdb9b2-51ee-4ee7-8f57-a84f7f35b9d5 resourceVersion: \"46666836\" uid: b8be9cd2-764f-4a62-87d6-6b767852c7da spec: actions: afterCompletion: addClusterLabels: ztp-done: \"\" 1 deleteClusterLabels: ztp-running: \"\" deleteObjects: true beforeEnable: addClusterLabels: ztp-running: \"\" 2 clusters: - spoke1 enable: true managedPolicies: - common-spoke1-config-policy - common-spoke1-subscriptions-policy - group-spoke1-config-policy - spoke1-config-policy - group-spoke1-validator-du-policy preCaching: false remediationStrategy: maxConcurrency: 1 timeout: 240",
"mkdir -p ./update",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v{product-version} extract /home/ztp --tar | tar x -C ./update",
"oc get managedcluster -l 'local-cluster!=true'",
"oc label managedcluster -l 'local-cluster!=true' ztp-done=",
"oc delete -f update/argocd/deployment/clusters-app.yaml",
"oc patch -f policies-app.yaml -p '{\"metadata\": {\"finalizers\": [\"resources-finalizer.argocd.argoproj.io\"]}}' --type merge",
"oc delete -f update/argocd/deployment/policies-app.yaml",
"├── policygentemplates │ ├── site1-ns.yaml │ ├── site1.yaml │ ├── site2-ns.yaml │ ├── site2.yaml │ ├── common-ns.yaml │ ├── common-ranGen.yaml │ ├── group-du-sno-ranGen-ns.yaml │ ├── group-du-sno-ranGen.yaml │ └── kustomization.yaml └── siteconfig ├── site1.yaml ├── site2.yaml └── kustomization.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - common-ranGen.yaml - group-du-sno-ranGen.yaml - site1.yaml - site2.yaml resources: - common-ns.yaml - group-du-sno-ranGen-ns.yaml - site1-ns.yaml - site2-ns.yaml",
"apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization generators: - site1.yaml - site2.yaml",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file update/argocd/deployment/argocd-openshift-gitops-patch.json",
"oc apply -k update/argocd/deployment"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/scalability_and_performance/index |
Chapter 1. Compatibility Matrix for Red Hat Ceph Storage 8.0 | Chapter 1. Compatibility Matrix for Red Hat Ceph Storage 8.0 The following tables list products and their versions compatible with Red Hat Ceph Storage 8.0. Host Operating System Version Red Hat Enterprise Linux 9.4, 9.5 Standard lifecycle RHEL is included in the product. Important All nodes in the cluster and their clients must use the supported OS version(s) to ensure that the version of the ceph package is the same on all nodes. Using different versions of the ceph package is not supported. Note For Client RPM packages, Red Hat Ceph Storage 8.0 only supports Red Hat Enterprise Linux 9. The cluster bootstrap node must be Red Hat Enterprise Linux 9. Product Version Notes Ansible Supported in a limited capacity. Supported for upgrade and conversion to Cephadm and for other minimal playbooks. Red Hat OpenShift Data Foundation See the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker for detailed external mode version compatibility. Red Hat OpenStack Platform 18.0.3 or later Supports Red Hat Ceph Storage when externally deployed. Red Hat Satellite 6.x Only registering with the Content Delivery Network (CDN) is supported. Registering with Red Hat Network (RHN) is deprecated and not supported. Client Connector Version Notes S3A 2.8.x, 3.2.x, and trunk Red Hat Ceph Storage as a backup target Version Notes CommVault Cloud Data Management v11 IBM Spectrum Protect Plus 10.1.5 IBM Spectrum Protect server 8.1.8 NetApp AltaVault 4.3.2 and 4.4 Rubrik Cloud Data Management (CDM) 3.2 onwards Trilio, TrilioVault 3.0 S3 target Veeam (object storage) Veeam Availability Suite 9.5 Update 4 Supported on Red Hat Ceph Storage object storage with the S3 protocol Veritas NetBackup for Symantec OpenStorage (OST) cloud backup 7.7 and 8.0 Independent Software vendors Version Notes IBM Spectrum Discover 2.0.3 WekaIO 3.12.2 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/compatibility_guide/compatibility-matrix-for-red-hat-ceph-storage-8.0 |
Chapter 3. Workspaces overview | Chapter 3. Workspaces overview Red Hat CodeReady Workspaces provides developer workspaces with everything needed to a code, build, test, run, and debug applications. To allow that, the developer workspaces provide four main components: The source code of a project. A web-based IDE. Tool dependencies, needed by developers to work on a project Application runtime: a replica of the environment where the application runs in production Pods manage each component of a CodeReady Workspaces workspace. Therefore, everything running in a CodeReady Workspaces workspace is running inside containers. This makes a CodeReady Workspaces workspace highly portable. The embedded browser-based IDE is the point of access for everything running in a CodeReady Workspaces workspace. This makes a CodeReady Workspaces workspace easily shareable. Important By default, it is possible to run only one workspace at a time. To change the default value, see: {link-limits-for-user-workspaces}. Table 3.1. Features and benefits Features Traditional IDE workspaces Red Hat CodeReady Workspaces workspaces Configuration and installation required Yes. No. Embedded tools Partial. IDE plug-ins need configuration. Dependencies need installation and configuration. Example: JDK, Maven, Node. Yes. Plug-ins provide their dependencies. Application runtime provided No. Developers have to manage that separately. Yes. Application runtime is replicated in the workspace. Shareable No. Or not easily Yes. Developer workspaces are shareable with a URL. Versionable No Yes. Devfiles exist with project source code. Accessible from anywhere No. Installation is needed. Yes. Only requires a browser. To start a CodeReady Workspaces workspace, following options are available: Creating and configuring a new workspace using the Dashboard Configuring a workspace using a devfile Use the Dashboard to discover CodeReady Workspaces 2.1: Creating a workspace from code sample Creating a workspace by importing source code of a project Use a devfile as the preferred way to start a CodeReady Workspaces 2.1 workspace: Making a workspace portable using a devfile Converting a CodeReady Workspaces 1.x workspace to a devfile Importing a OpenShift application into a workspace Use the browser-based IDE as the preferred way to interact with a CodeReady Workspaces 2.1 workspace. For an alternative way to interact with a CodeReady Workspaces 2.1 workspace, see: Remotely accessing workspaces . 3.1. Configuring a workspace using a devfile To quickly and easily configure a CodeReady Workspaces workspace, use a devfile. For an introduction to devfiles and instructions for their use, see the instructions in this section. 3.1.1. What is a devfile A devfile is a file that describes and define a development environment: the source code the development components (browser IDE tools and application runtimes) a list of pre-defined commands projects to clone Devfiles are YAML files that CodeReady Workspaces consumes and transforms into a cloud workspace composed of multiple containers. The devfile can be saved in the root folder of a Git repository, a feature branch of a Git repository, a publicly accessible destination, or as a separate, locally stored artifact. When creating a workspace, CodeReady Workspaces uses that definition to initiate everything and run all the containers for the required tools and application runtimes. CodeReady Workspaces also mounts file-system volumes to make source code available to the workspace. 
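For orientation, a minimal sketch of such a devfile is shown below; it ties the pieces above together with one project to clone, one IDE plug-in, one tooling container, and one command. The workspace name and repository URL are placeholders, and the plug-in ID and container image are borrowed from the examples later in this chapter rather than being values this guide requires.
apiVersion: 1.0.0
metadata:
  name: example-workspace            # placeholder workspace name
projects:                            # source code cloned under /projects
  - name: example-app
    source:
      type: git
      location: 'https://github.com/example-org/example-app.git'   # placeholder repository
components:                          # IDE plug-ins and runtime containers
  - type: chePlugin
    id: redhat/java/latest           # Java language support plug-in
  - type: dockerimage                # tooling container used by commands
    alias: maven
    image: 'quay.io/eclipse/che-java8-maven:nightly'
    memoryLimit: 512Mi
    mountSources: true
    command: ['sleep', 'infinity']   # keep the container running
commands:                            # pre-defined commands runnable from the IDE
  - name: build
    actions:
      - type: exec
        component: maven             # matches the alias of the tooling container
        command: mvn clean install
        workdir: '/projects/example-app'
Each of these sections is described in detail in Making a workspace portable using a devfile later in this chapter.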
Devfiles can be versioned with the project source code. When there is a need for a workspace to fix an old maintenance branch, the project devfile provides a definition of the workspace with the tools and the exact dependencies to start working on the old branch. Use it to instantiate workspaces on demand. CodeReady Workspaces maintains the devfile up-to-date with the tools used in the workspace: Projects of the workspace (path, Git location, branch) Commands to perform daily tasks (build, run, test, debug) Runtime environment (container images to run the application) Che-Theia plug-ins with tools, IDE features, and helpers that a developer would use in the workspace (Git, Java support, SonarLint, Pull Request) 3.1.2. Disambiguation between stacks and devfiles This section describes differences between stacks in CodeReady Workspaces 2.0 and devfiles in CodeReady Workspaces 2.1 Starting with CodeReady Workspaces 2.1: A stack is a pre-configured CodeReady Workspaces workspace. A devfile is a configuration YAML file that CodeReady Workspaces consumes and transforms in a cloud workspace composed of multiple containers. In CodeReady Workspaces 2.0, stacks were defined by a stacks.json file that was included with the che server . In contrast, in CodeReady Workspaces 2.1, the stacks.json file does not exist. Instead, a stack is defined in the devfile registry, which is a separate service. Every single devfile in the registry corresponds to a stack. Note that in CodeReady Workspaces 2.0, stacks and workspaces were defined using two different formats. However, with CodeReady Workspaces 2.1, the devfile format is used to define both the stacks and the workspaces. Nevertheless, a user opening the user dashboard does not notice any difference: in CodeReady Workspaces 2.1, a list of stacks is still present to choose from as a starting point to create a workspace. 3.1.3. Creating a workspace from the default branch of a Git repository A CodeReady Workspaces workspace can be created by pointing to a devfile that is stored in a Git source repository. The CodeReady Workspaces instance then uses the discovered devfile.yaml file to build a workspace using the /f?url= API. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts . The devfile.yaml file in the root folder of a Git repository available over HTTPS. See Making a workspace portable using a devfile for detailed information about creating and using devfiles. Procedure Run the workspace by opening the following URL: https://codeready-<openshift_deployment_name>.<domain_name>/f?url=https:// <GitRepository> Example 3.1.4. Creating a workspace from a feature branch of a Git repository A CodeReady Workspaces workspace can be created by pointing to devfile that is stored in a Git source repository on a feature branch of the user's choice. The CodeReady Workspaces instance then uses the discovered devfile to build a workspace. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts . The devfile.yaml file in the root folder of a Git repository on a specific branch of the user's choice available over HTTPS. See Making a workspace portable using a devfile for detailed information about creating and using devfiles. 
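As an illustration only — the host, organization, repository, and branch below are placeholders rather than values from this guide — the branch is selected through the repository URL itself, so such a factory URL typically has the following shape:
https://codeready-<openshift_deployment_name>.<domain_name>/f?url=https://github.com/<organization>/<repository>/tree/<branch>
The procedure that follows states the same URL format in general terms.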
Procedure Execute the workspace by opening the following URL: https://codeready-<openshift_deployment_name>.<domain_name>/f?url= <GitHubBranch> Example Use following URL format to open an experimental quarkus-quickstarts branch hosted on che.openshift.io . 3.1.5. Creating a workspace from a publicly accessible standalone devfile using HTTP A workspace can be created using a devfile, the URL of which is pointing to the raw content of the devfile. The CodeReady Workspaces instance then uses the discovered devfile to build a workspace. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts . The publicly-accessible standalone devfile.yaml file. See Making a workspace portable using a Devfile for detailed information about creating and using devfiles. Procedure Execute the workspace by opening the following URL: {prod-fun}/f?url=https:// <yourhosturl> /devfile.yaml Example 3.1.6. Overriding devfile values using factory parameters Values in the following sections of a remote devfile can be overridden using specially constructed additional factory parameters: apiVersion metadata projects attributes Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts . A publicly accessible standalone devfile.yaml file. See Making a workspace portable using a Devfile for detailed information about creating and using devfiles. Procedure Open the workspace by navigating to the following URL: https://codeready-<openshift_deployment_name>.<domain_name>/f?url=https:// <hostURL> /devfile.yaml&override. <parameter.path> = <value> Example of overriding the generateName property Consider the following initial devfile: To add or override generateName value, the following factory URL can be used: The resulting workspace will have the following devfile model: Example of overriding project source branch property Consider the following initial devfile: To add or override source branch value, the following factory URL can be used: The resulting workspace will have the following devfile model: Example of overriding or creating an attribute value Consider the following initial devfile: To add or override persistVolumes attribute value, the following factory URL can be used: The resulting workspace will have the following devfile model: When overriding attributes, everything that follows the attributes keyword treat as an attribute name, so it's possible to use dot-separated names: The resulting workspace will have the following devfile model: 3.1.7. Creating a workspace using crwctl and a local devfile A CodeReady Workspaces workspace can be created by pointing the crwctl tool to a locally stored devfile. The CodeReady Workspaces instance then uses the discovered devfile to build a workspace. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts . The CodeReady Workspaces CLI management tool. See the CodeReady Workspaces 2.1 Installation GuideInstalling the crwctl management tool . The devfile is available on the local filesystem in the current working directory. See Making a workspace portable using a Devfile for detailed information about creating and using devfiles. 
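The procedure that follows starts the workspace with the workspace:start parameter of crwctl. A minimal sketch, assuming the devfile is saved as devfile.yaml in the current working directory and that the installed crwctl release accepts the --devfile flag (verify with crwctl workspace:start --help):
crwctl workspace:start --devfile=devfile.yaml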
Example Download the devfile.yaml file from the GitHub repository to the current working directory. Procedure Run a workspace from a devfile using the workspace:start parameter with the crwctl tool as follows: Additional resources Making a workspace portable using a Devfile 3.2. Making a workspace portable using a devfile To transfer a configured CodeReady Workspaces workspace, create and export the devfile of the workspace and load the devfile on a different host to initialize a new instance of the workspace. For detailed instructions on how to create such a devfile, see below. 3.2.1. What is a devfile A devfile is a file that describes and define a development environment: the source code the development components (browser IDE tools and application runtimes) a list of pre-defined commands projects to clone Devfiles are YAML files that CodeReady Workspaces consumes and transforms into a cloud workspace composed of multiple containers. The devfile can be saved in the root folder of a Git repository, a feature branch of a Git repository, a publicly accessible destination, or as a separate, locally stored artifact. When creating a workspace, CodeReady Workspaces uses that definition to initiate everything and run all the containers for the required tools and application runtimes. CodeReady Workspaces also mounts file-system volumes to make source code available to the workspace. Devfiles can be versioned with the project source code. When there is a need for a workspace to fix an old maintenance branch, the project devfile provides a definition of the workspace with the tools and the exact dependencies to start working on the old branch. Use it to instantiate workspaces on demand. CodeReady Workspaces maintains the devfile up-to-date with the tools used in the workspace: Projects of the workspace (path, Git location, branch) Commands to perform daily tasks (build, run, test, debug) Runtime environment (container images to run the application) Che-Theia plug-ins with tools, IDE features, and helpers that a developer would use in the workspace (Git, Java support, SonarLint, Pull Request) 3.2.2. A minimal devfile The following is the minimum content required in a devfile.yaml file: apiVersion metadata name apiVersion: 1.0.0 metadata: name: che-in-che-out For a complete devfile example, see Red Hat CodeReady Workspaces in CodeReady Workspaces devfile.yaml . name or generateName must be defined Both name and generateName are optional parameters, but at least one of them must be defined. See Section 3.2.3, "Generating workspace names" . 3.2.3. Generating workspace names To specify a prefix for automatically generated workspace names, set the generateName parameter in the devfile.yaml file: apiVersion: 1.0.0 metadata: generateName: che- The workspace name will be in the <generateName>YYYYY format (for example, che-2y7kp ). Y is random [a-z0-9] character. The following naming rules apply when creating workspaces: When name is defined, it is used as the workspace name: <name> When only generateName is defined, it is used as the base of the generated name: <generateName>YYYYY Note For workspaces created using a factory, defining name or generateName has the same effect. The defined value is used as the name prefix: <name>YYYYY or <generateName>YYYYY . When both generateName and name are defined, generateName takes precedence. 3.2.4. Writing a devfile for a project This section describes how to create a minimal devfile for your project and how to include more than one projects in a devfile. 3.2.4.1. 
Preparing a minimal devfile A minimal devfile sufficient to run a workspace consists of the following parts: Specification version Name Example of a minimal devfile with no project apiVersion: 1.0.0 metadata: name: minimal-workspace Without any further configuration, a workspace with the default editor is launched along with its default plug-ins, which are configured on the CodeReady Workspaces Server. Che-Theia is configured as the default editor along with the CodeReady Workspaces Machine Exec plug-in. When launching a workspace within a Git repository using a factory, the project from the given repository and branch is be created by default. The project name then matches the repository name. Add the following parts for a more functional workspace: List of components: Development components and user runtimes List of projects: Source code repositories List of commands: Actions to manage the workspace components, such as running the development tools, starting the runtime environments, and others Example of a minimal devfile with a project apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/spring-projects/spring-petclinic.git' components: - type: chePlugin id: redhat/java/latest 3.2.4.2. Specifying multiple projects in a devfile A single devfile can specify multiple projects. For each project, specify the type of the source repository, its location, and, optionally, the directory the project is cloned to. Example of a devfile with two projects apiVersion: 1.0.0 metadata: name: example-devfile projects: - name: frontend source: type: git location: https://github.com/acmecorp/frontend.git - name: backend clonePath: src/github.com/acmecorp/backend source: type: git location: https://github.com/acmecorp/backend.git In the preceding example, there are two projects defined, frontend and backend . Each project is located in its own repository. The backend project has a specific requirement to be cloned into the src/github.com/acmecorp/backend/ directory under the source root (implicitly defined by the CodeReady Workspaces runtime) while the frontend project will be cloned into the frontend/ directory under the source root. Additional resources For a detailed explanation of all devfile component assignments and possible values, see: Specification repository Detailed json-schema documentation These sample devfiles are a good source of inspiration: Sample devfiles for Red Hat CodeReady Workspaces workspaces used by default in the user interface . Sample devfiles for Red Hat CodeReady Workspaces workspaces from Red Hat Developer program . 3.2.5. Devfile reference This section contains devfile reference and instructions on how to use the various elements that devfiles consist of. 3.2.5.1. Adding projects to a devfile Usually a devfile contains one or more projects. A workspace is created to develop those projects. Projects are added in the projects section of devfiles. Each project in a single devfile must have: Unique name Source specified Project source consists of two mandatory values: type and location . type The kind of project-source provider. location The URL of project source. CodeReady Workspaces supports the following project types: git Projects with sources in Git. The location points to a clone link. github Same as git but for projects hosted on GitHub only. Use git for projects that do not use GitHub-specific features. zip Projects with sources in a ZIP archive. Location points to a ZIP file. 3.2.5.1.1. 
Project-source type: git source: type: git location: https://github.com/eclipse/che.git startPoint: master 1 tag: 7.2.0 commitId: 36fe587 branch: master sparseCheckoutDir: wsmaster 2 1 startPoint is the general value for tag , commitId , and branch . The startPoint , tag , commitId , and branch parameters are mutually exclusive. When more than one is supplied, the following order is used: startPoint , tag , commitId , branch . 2 sparseCheckoutDir the template for the sparse checkout Git feature. This is useful when only a part of a project (typically only a single directory) is needed. Example 3.1. sparseCheckoutDir parameter settings Set to /my-module/ to create only the root my-module directory (and its content). Omit the leading slash ( my-module/ ) to create all my-module directories that exist in the project. Including, for example, /addons/my-module/ . The trailing slash indicates that only directories with the given name (including their content) are created. Use wildcards to specify more than one directory name. For example, setting module-* checks out all directories of the given project that start with module- . For more information, see Sparse checkout in Git documentation . 3.2.5.1.2. Project-source type: zip source: type: zip location: http://host.net/path/project-src.zip 3.2.5.1.3. Project clone-path parameter: clonePath The clonePath parameter specifies the path into which the project is to be cloned. The path must be relative to the /projects/ directory, and it cannot leave the /projects/ directory. The default value is the project name. Example devfile with projects apiVersion: 1.0.0 metadata: name: my-project-dev projects: - name: my-project-resourse clonePath: resources/my-project source: type: zip location: http://host.net/path/project-res.zip - name: my-project source: type: git location: https://github.com/my-org/project.git branch: develop 3.2.5.2. Adding components to a devfile Each component in a single devfile must have a unique name. 3.2.5.2.1. Component type: cheEditor Describes the editor used in the workspace by defining its id . A devfile can only contain one component of the cheEditor type. components: - alias: theia-editor type: cheEditor id: eclipse/che-theia/ When cheEditor is missing, a default editor is provided along with its default plug-ins. The default plug-ins are also provided for an explicitly defined editor with the same id as the default one (even if it is a different version). Che-Theia is configured as default editor along with the CodeReady Workspaces Machine Exec plug-in. To specify that a workspace requires no editor, use the editorFree:true attribute in the devfile attributes. 3.2.5.2.2. Component type: chePlugin Describes plug-ins in a workspace by defining their id . It is allowed to have several chePlugin components. components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 Both types above use an ID, which is slash-separated publisher, name and version of plug-in from the CodeReady Workspaces Plug-in registry. List of available CodeReady Workspaces plug-ins and more information about registry can be found in the CodeReady Workspaces plug-in registry GitHub repository. 3.2.5.2.3. Specifying an alternative component registry To specify an alternative registry for the cheEditor and chePlugin component types, use the registryUrl parameter: components: - alias: exec-plugin type: chePlugin registryUrl: https://my-customregistry.com id: eclipse/che-machine-exec-plugin/0.0.1 3.2.5.2.4. 
Specifying a component by linking to its descriptor An alternative way of specifying cheEditor or chePlugin , instead of using the editor or plug-in id (and optionally an alternative registry), is to provide a direct link to the component descriptor (typically named meta.yaml ) by using the reference field: components: - alias: exec-plugin type: chePlugin reference: https://raw.githubusercontent.com.../plugin/1.0.1/meta.yaml Note It is impossible to mix the id and reference fields in a single component definition; they are mutually exclusive. 3.2.5.2.5. Tuning chePlugin component configuration A chePlugin component may need to be precisely tuned, and in such case, component preferences can be used. The example shows how to configure JVM using plug-in preferences. id: redhat/java/0.38.0 type: chePlugin preferences: java.jdt.ls.vmargs: '-noverify -Xmx1G -XX:+UseG1GC -XX:+UseStringDeduplication' Preferences may also be specified as an array: id: redhat/java/0.38.0 type: chePlugin preferences: go.lintFlags: ["--enable-all", "--new"] 3.2.5.2.6. Component type: kubernetes A complex component type that allows to apply configuration from a list of OpenShift components. The content can be provided through the reference attribute, which points to the file with the component content. components: - alias: mysql type: kubernetes reference: petclinic.yaml selector: app.kubernetes.io/name: mysql app.kubernetes.io/component: database app.kubernetes.io/part-of: petclinic Alternatively, to post a devfile with such components to REST API, the contents of the OpenShift list can be embedded into the devfile using the referenceContent field: components: - alias: mysql type: kubernetes reference: petclinic.yaml referenceContent: | kind: List items: - apiVersion: v1 kind: Pod metadata: name: ws spec: containers: ... etc 3.2.5.2.7. Overriding container entrypoints As with the understood by OpenShift). There can be more containers in the list (contained in Pods or Pod templates of deployments). To select which containers to apply the entrypoint changes to. The entrypoints can be defined as follows: components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml entrypoints: - parentName: mysqlServer command: ['sleep'] args: ['infinity'] - parentSelector: app: prometheus args: ['-f', '/opt/app/prometheus-config.yaml'] The entrypoints list contains constraints for picking the containers along with the command and args parameters to apply to them. In the example above, the constraint is parentName: mysqlServer , which will cause the command to be applied to all containers defined in any parent object called mysqlServer . The parent object is assumed to be a top level object in the list defined in the referenced file, which is app-deployment.yaml in the example above. Other types of constraints (and their combinations) are possible: containerName the name of the container parentName the name of the parent object that (indirectly) contains the containers to override parentSelector the set of labels the parent object needs to have A combination of these constraints can be used to precisely locate the containers inside the referenced OpenShift list. 3.2.5.2.8. 
Overriding container environment variables To provision or override entrypoints in a OpenShift or OpensShift component, configure it in the following way: components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml env: - name: ENV_VAR value: value This is useful for temporary content or without access to editing the referenced content. The specified environment variables are provisioned into each init container and containers inside all Pods and Deployments. 3.2.5.2.9. Specifying mount-source option To specify a project sources directory mount into container(s), use the mountSources parameter: components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml mountSources: true If enabled, project sources mounts will be applied to every container of the given component. This parameter is also applicable for chePlugin type components. 3.2.5.2.10. Component type: dockerimage A component type that allows to define a container image-based configuration of a container in a workspace. A devfile can only contain one component of the dockerimage type. The dockerimage type of component brings in custom tools into the workspace. The component is identified by its image. components: - alias: maven type: dockerimage image: eclipe/maven-jdk8:latest volumes: - name: mavenrepo containerPath: /root/.m2 env: - name: ENV_VAR value: value endpoints: - name: maven-server port: 3101 attributes: protocol: http secure: 'true' public: 'true' discoverable: 'false' memoryLimit: 1536M command: ['tail'] args: ['-f', '/dev/null'] Example of a minimal dockerimage component apiVersion: 1.0.0 metadata: name: MyDevfile components: type: dockerimage image: golang memoryLimit: 512Mi command: ['sleep', 'infinity'] It specifies the type of the component, dockerimage and the image attribute names the image to be used for the component using the usual Docker naming conventions, that is, the above type attribute is equal to docker.io/library/golang:latest . A dockerimage component has many features that enable augmenting the image with additional resources and information needed for meaningful integration of the tool provided by the image with Red Hat CodeReady Workspaces. 3.2.5.2.10.1. Mounting project sources For the dockerimage component to have access to the project sources, you must set the mountSources attribute to true . apiVersion: 1.0.0 metadata: name: MyDevfile components: type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] The sources is mounted on a location stored in the CHE_PROJECTS_ROOT environment variable that is made available in the running container of the image. This location defaults to /projects . 3.2.5.2.10.2. Container Entrypoint The command attribute of the dockerimage along with other arguments, is used to modify the entrypoint command of the container created from the image. In Red Hat CodeReady Workspaces the container is needed to run indefinitely so that you can connect to it and execute arbitrary commands in it at any time. Because the availability of the sleep command and the support for the infinity argument for it is different and depends on the base image used in the particular images, CodeReady Workspaces cannot insert this behavior automatically on its own. However, you can take advantage of this feature to, for example, start necessary servers with modified configurations, etc. 3.2.5.2.11. 
Persistent Storage Components of any type can specify the custom volumes to be mounted on specific locations within the image. Note that the volume names are shared across all components and therefore this mechanism can also be used to share file systems between components. Example specifying volumes for dockerimage type: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumes: - name: cache containerPath: /.cache Example specifying volumes for cheEditor / chePlugin type: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/ env: - name: HOME value: USD(CHE_PROJECTS_ROOT) volumes: - name: cache containerPath: /.cache Example specifying volumes for kubernetes / openshift type: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: openshift alias: mongo reference: mongo-db.yaml volumes: - name: mongo-persistent-storage containerPath: /data/db 3.2.5.2.12. Specifying container memory limit for components To specify a container(s) memory limit for dockerimage , chePlugin , cheEditor , use the memoryLimit parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 memoryLimit: 1Gi - type: dockerimage image: eclipe/maven-jdk8:latest memoryLimit: 512M This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, RAM limits can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults will be applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__MEMORY__LIMIT__MB system property). 3.2.5.2.13. Specifying container memory request for components To specify a container(s) memory request for chePlugin or cheEditor , use the memoryRequest parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 memoryLimit: 1Gi memoryRequest: 512M - type: dockerimage image: eclipe/maven-jdk8:latest memoryLimit: 512M memoryRequest: 256M This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, RAM requests can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults are applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__MEMORY__REQUEST__MB system property). 3.2.5.2.14. Specifying container CPU limit for components To specify a container(s) CPU limit for chePlugin , cheEditor or dockerimage use the cpuLimit parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 cpuLimit: 1.5 - type: dockerimage image: eclipe/maven-jdk8:latest cpuLimit: 750m This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, CPU limits can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults are applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__CPU__LIMIT__CORES system property). 3.2.5.2.15. 
Specifying container CPU request for components To specify a container(s) CPU request for chePlugin , cheEditor or dockerimage use the cpuRequest parameter: components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 cpuLimit: 1.5 cpuRequest: 0.225 - type: dockerimage image: eclipe/maven-jdk8:latest cpuLimit: 750m cpuRequest: 450m This limit will be applied to every container of the given component. For the cheEditor and chePlugin component types, CPU requests can be described in the plug-in descriptor file, typically named meta.yaml . If none of them are specified, system-wide defaults are applied (see description of CHE_WORKSPACE_SIDECAR_DEFAULT__CPU__REQUEST__CORES system property). 3.2.5.2.16. Environment variables Red Hat CodeReady Workspaces allows you to configure Docker containers by modifying the environment variables available in component's configuration. Environment variables are supported by the following component types: dockerimage , chePlugin , cheEditor , kubernetes , openshift . In case component has multiple containers, environment variables will be provisioned to each container. apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - type: cheEditor alias: theia-editor id: eclipse/che-theia/ memoryLimit: 2Gi env: - name: HOME value: USD(CHE_PROJECTS_ROOT) Note The variable expansion works between the environment variables, and it uses the OpenShift convention for the variable references. The predefined variables are available for use in custom definitions. The following environment variables are pre-set by the CodeReady Workspaces server: CHE_PROJECTS_ROOT : The location of the projects directory (note that if the component does not mount the sources, the projects will not be accessible). CHE_WORKSPACE_LOGS_ROOT__DIR : The location of the logs common to all the components. If the component chooses to put logs into this directory, the log files are accessible from all other components. CHE_API_INTERNAL : The URL to the CodeReady Workspaces server API endpoint used for communication with the CodeReady Workspaces server. CHE_WORKSPACE_ID : The ID of the current workspace. CHE_WORKSPACE_NAME : The name of the current workspace. CHE_WORKSPACE_NAMESPACE : The CodeReady Workspaces namespace of the current workspace. This environment variable is the name of the user or organization that the workspace belongs to. Note that this is different from the OpenShift namespace or OpenShift project to which the workspace is deployed. CHE_MACHINE_TOKEN : The token used to authenticate the request against the CodeReady Workspaces server. CHE_MACHINE_AUTH_SIGNATURE PUBLIC KEY : The public key used to secure the communication with the CodeReady Workspaces server. CHE_MACHINE_AUTH_SIGNATURE__ALGORITHM : The encryption algorithm used in the secured communication with the CodeReady Workspaces server. A devfiles may only need the CHE_PROJECTS_ROOT environment variable to locate the cloned projects in the component's container. More advanced devfiles might use the CHE_WORKSPACE_LOGS_ROOT__DIR environment variable to read the logs (for example as part of a devfile command). The environment variables used to securely access the CodeReady Workspaces server are mostly out of scope for devfiles and are present only for advanced use cases that are usually handled by the CodeReady Workspaces plug-ins. 3.2.5.2.17. 
Endpoints Components of any type can specify the endpoints that the Docker image exposes. These endpoints can be made accessible to the users if the CodeReady Workspaces cluster is running using a OpenShift ingress or an OpenShift route and to the other components within the workspace. You can create an endpoint for your application or database, if your application or database server is listening on a port and you want to be able to directly interact with it yourself or you want other components to interact with it. Endpoints have several properties as shown in the following example: apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache endpoints: - name: web port: 8080 attributes: discoverable: false public: true protocol: http - type: dockerimage image: postgres memoryLimit: 512Mi env: - name: POSTGRES_USER value: user - name: POSTGRES_PASSWORD value: password - name: POSTGRES_DB value: database endpoints: - name: postgres port: 5432 attributes: discoverable: true public: false Here, there are two Docker images, each defining a single endpoint. Endpoint is an accessible port that can be made accessible inside the workspace or also publicly (example, from the UI). Each endpoint has a name and port, which is the port on which certain server running inside the container is listening. The following are a few attributes that you can set on the endpoint: discoverable : If an endpoint is discoverable, it means that it can be accessed using its name as the host name within the workspace containers (in the OpenShift parlance, a service is created for it with the provided name). 55 public : The endpoint will be accessible outside of the workspace, too (such endpoint can be accessed from the CodeReady Workspaces user interface). Such endpoints are publicized always on port 80 or 443 (depending on whether tls is enabled in CodeReady Workspaces). protocol : For public endpoints the protocol is a hint to the UI on how to construct the URL for the endpoint access. Typical values are http , https , ws , wss . secure : A boolean (defaulting to false ) specifying whether the endpoint is put behind a JWT proxy requiring a JWT workspace token to grant access. The JWT proxy is deployed in the same Pod as the server and assumes the server listens solely on the local loopback interface, such as 127.0.0.1 . Warning Listening on any other interface than the local loopback poses a security risk because such server is accessible without the JWT authentication within the cluster network on the corresponding IP addresses. path : The URL of the endpoint. unsecuredPaths : A comma-separated list of endpoint paths that are to stay unsecured even if the secure attribute is set to true . cookiesAuthEnabled : When set to true (the default is false ), the JWT workspace token is automatically fetched and included in a workspace-specific cookie to allow requests to pass through the JWT proxy. Warning This setting potentially allows a CSRF attack when used in conjunction with a server using POST requests. When starting a new server within a component, CodeReady Workspaces autodetects this, and the UI offers to automatically expose this port as a public port. 
This is useful for debugging a web application, for example. It is impossible to do this for servers that autostart with the container (for example, a database server). For such components, specify the endpoints explicitly. Example specifying endpoints for kubernetes / openshift and chePlugin / cheEditor types: apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/ endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: chePlugin id: redhat/php/latest memoryLimit: 1Gi endpoints: - name: 'php-endpoint' port: 7777 - type: chePlugin alias: theia-editor id: eclipse/che-theia/ endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: openshift alias: webapp reference: webapp.yaml endpoints: - name: 'web' port: 8080 attributes: discoverable: false public: true protocol: http - type: openshift alias: mongo reference: mongo-db.yaml endpoints: - name: 'mongo-db' port: 27017 attributes: discoverable: true public: false 3.2.5.2.18. OpenShift resources Complex deployments can be described using OpenShift resource lists that can be referenced in the devfile. This makes them a part of the workspace. Important Because a CodeReady Workspaces workspace is internally represented as a single deployment, all resources from the OpenShift list are merged into that single deployment. Be careful when designing such lists because this can result in name conflicts and other problems. Only the following subset of the OpenShift objects are supported: deployments , pods , services , persistent volume claims , secrets , and config maps . Kubernetes Ingresses are ignored, but OpenShift routes are supported. A workspace created from a devfile using any other object types fails to start. When running CodeReady Workspaces on a OpenShift cluster, only OpenShift lists are supported. When running CodeReady Workspaces on an OpenShift cluster, both OpenShift lists are supported. apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml The preceding component references a file that is relative to the location of the devfile itself. Meaning, this devfile is only loadable by a CodeReady Workspaces factory to which you supply the location of the devfile and therefore it is able to figure out the location of the referenced OpenShift resource list. The following is an example of the postgres.yaml file. apiVersion: v1 kind: List items: - apiVersion: v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: template: metadata: name: postgres app: name: postgres spec: containers: - image: postgres name: postgres ports: - name: postgres containerPort: 5432 volumeMounts: - name: pg-storage mountPath: /var/lib/postgresql/data volumes: - name: pg-storage persistentVolumeClaim: claimName: pg-storage - apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres name: postgres spec: ports: - port: 5432 targetPort: 5432 selector: app: postgres - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pg-storage labels: app: postgres spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi For a basic example of a devfile with an associated OpenShift list, see web-nodejs-with-db-sample on redhat-developer GitHub. 
If you use generic or large resource lists from which you will only need a subset of resources, you can select particular resources from the list using a selector (which, as the usual OpenShift selectors, works on the labels of the resources in the list). apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml selector: app: postgres Additionally, it is also possible to modify the entrypoints (command and arguments) of the containers present in the resource list. For details of the advanced use case, see the reference (TODO: link). 3.2.5.3. Adding commands to a devfile A devfile allows to specify commands to be available for execution in a workspace. Every command can contain a subset of actions, which are related to a specific component in whose container it will be executed. commands: - name: build actions: - type: exec component: mysql command: mvn clean workdir: /projects/spring-petclinic You can use commands to automate the workspace. You can define commands for building and testing your code, or cleaning the database. The following are two kinds of commands: CodeReady Workspaces specific commands: You have full control over what component executes the command. Editor specific commands: You can use the editor-specific command definitions (example: tasks.json and launch.json in Che-Theia, which is equivalent to how these files work in VS Code). 3.2.5.3.1. CodeReady Workspaces-specific commands Each CodeReady Workspaces-specific command features: An action attribute that is a command to execute. A component attribute that specifies the container in which to execute the command. apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang alias: go-cli memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache commands: - name: compile and run actions: - type: exec component: go-cli command: "go get -d && go run main.go" workdir: "USD{CHE_PROJECTS_ROOT}/src/github.com/acme/my-go-project" + Note If a component to be used in a command must have an alias. This alias is used to reference the component in the command definition. Example: alias: go-cli in the component definition and component: go-cli in the command definition. This ensures that Red Hat CodeReady Workspaces can find the correct container to run the command in. A command can have only one action. 3.2.5.3.2. Editor-specific commands If the editor in the workspace supports it, the devfile can specify additional configuration in the editor-specific format. This is dependent on the integration code present in the workspace editor itself and so is not a generic mechanism. However, the default Che-Theia editor within Red Hat CodeReady Workspaces is equipped to understand the tasks.json and launch.json files provided in the devfile. 
apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git commands: - name: tasks actions: - type: vscode-task referenceContent: > { "version": "2.0.0", "tasks": [ { "label": "create test file", "type": "shell", "command": "touch USD{workspaceFolder}/test.file" } ] } This example shows association of a tasks.json file with a devfile. Notice the vscode-task type that instructs the Che-Theia editor to interpret this command as a tasks definition and referenceContent attribute that contains the contents of the file itself. You can also save this file separately from the devfile and use reference attribute to specify a relative or absolute URL to it. In addition to the vscode-task commands, the Che-Theia editor understands vscode-launch type using which you can specify the launch configurations. 3.2.5.3.3. Command preview URL It is possible to specify a preview URL for commands that expose web UI. This URL is offered for opening when the command is executed. commands: - name: tasks previewUrl: port: 8080 1 path: /myweb 2 actions: - type: exec component: go-cli command: "go run webserver.go" workdir: USD{CHE_PROJECTS_ROOT}/webserver 1 TCP port where the application listens. Mandatory parameter. 2 The path part of the URL to the UI. Optional parameter. The default is root ( / ). The example above opens http://__<server-domain>__/myweb , where <server-domain> is the URL to the dynamically created OpenShift Ingress or OpenShift Route. 3.2.5.3.3.1. Setting the default way of opening preview URLs By default, a notification that asks the user about the URL opening preference is displayed. To specify the preferred way of previewing a service URL: Open CodeReady Workspaces preferences in File Settings Open Preferences and find che.task.preview.notifications in the CodeReady Workspaces section. Choose from the list of possible values: on - enables a notification for asking the user about the URL opening preferences alwaysPreview - the preview URL opens automatically in the Preview panel as soon as a task is running alwaysGoTo - the preview URL opens automatically in a separate browser tab as soon as a task is running off - disables opening the preview URL (automatically and with a notification) 3.2.5.4. Devfile attributes Devfile attributes can be used to configure various features. 3.2.5.4.1. Attribute: editorFree When an editor is not specified in a devfile, a default is provided. When no editor is needed, use the editorFree attribute. The default value of false means that the devfile requests the provisioning of the default editor. Example of a devfile without an editor apiVersion: 1.0.0 metadata: name: petclinic-dev-environment components: - alias: myApp type: kubernetes local: my-app.yaml attributes: editorFree: true 3.2.5.4.2. Attribute: persistVolumes (ephemeral mode) By default, volumes and PVCs specified in a devfile are bound to a host folder to persist data even after a container restart. To disable data persistence to make the workspace faster, such as when the volume back end is slow, modify the persistVolumes attribute in the devfile. The default value is true . Set to false to use emptyDir for configured volumes and PVC. 
Example of a devfile with ephemeral mode enabled apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: persistVolumes: false 3.2.6. Objects supported in Red Hat CodeReady Workspaces 2.1 The following table lists the objects that are partially supported in Red Hat CodeReady Workspaces 2.1: Object API OpenShift Infra OpenShift Infra Notes Pod OpenShift Yes Yes - Deployment OpenShift Yes Yes - ConfigMap OpenShift Yes Yes - PVC OpenShift Yes Yes - Secret OpenShift Yes Yes - Service OpenShift Yes Yes - Ingress OpenShift Yes No Minishift allows you to create Ingress and it works when the host is specified (OpenShift creates a route for it). But, the loadBalancer IP is not provisioned. To add Ingress support for the OpenShift infrastructure node, generate routes based on the provided Ingress. Route OpenShift No Yes The OpenShift recipe must be made compatible with the OpenShift Infrastructure and, instead of the provided route, generate Ingress. Template OpenShift Yes Yes The OpenShift API does not support templates. A workspace with a template in the recipe starts successfully and the default parameters are resolved. Additional resources Devfile specifications 3.3. Converting a CodeReady Workspaces 1.x workspace to a devfile This section describes how to manually convert a CodeReady Workspaces 1.x workspace configuration to a CodeReady Workspaces 2.x devfile. The following are the benefits of using a devfile: Using a portable file that works with any installation of CodeReady Workspaces; nothing needs to be changed on the server to start a workspace. Configuration can be stored in project repository and automatically used by CodeReady Workspaces to start a workspace. To start a workspace, specify a devfile using the following format: <che-instance-domain> /f?url=path , for example: This creates and starts a new workspace based on the devfile defined in the URL attribute. A human-readable YAML format for all content. 3.3.1. Comparing CodeReady Workspaces 1.x workspace configuration to a devfile Below, there is a comparison of a CodeReady Workspaces 1.x workspace configuration and a CodeReady Workspaces 2.x devfile . 
Both are Java Vert.x stacks with a default project and default settings: CodeReady Workspaces 1.x configuration file { "defaultEnv": "default", "environments": { "default": { "machines": { "dev-machine": { "attributes": { "memoryLimitBytes": "2147483648" }, "servers": { "8080/tcp": { "attributes": {}, "port": "8080", "protocol": "http" } }, "volumes": {}, "installers": [ "com.redhat.oc-login", "com.redhat.bayesian.lsp", "org.eclipse.che.ls.java", "org.eclipse.che.ws-agent", "org.eclipse.che.exec", "org.eclipse.che.terminal" ], "env": {} } }, "recipe": { "type": "dockerimage", "content": "quay.io/openshiftio/che-vertx" } } }, "projects": [ { "links": [], "name": "vertx-http-booster", "attributes": { "language": [ "java" ] }, "type": "maven", "source": { "location": "https://github.com/openshiftio-vertx-boosters/vertx-http-booster", "type": "git", "parameters": {} }, "path": "/vertx-http-booster", "description": "HTTP Vert.x Booster", "problems": [], "mixins": [] } ], "name": "wksp-jhwp", "commands": [ { "commandLine": "scl enable rh-maven33 'mvn compile vertx:debug -f USD{current.project.path} -Dvertx.disableDnsResolver=true'", "name": "debug", "attributes": { "goal": "Debug", "previewUrl": "USD{server.8080/tcp}" }, "type": "custom" }, { "commandLine": "scl enable rh-maven33 'mvn compile vertx:run -f USD{current.project.path} -Dvertx.disableDnsResolver=true'", "name": "run", "attributes": { "goal": "Run", "previewUrl": "USD{server.8080/tcp}" }, "type": "custom" }, { "commandLine": "scl enable rh-maven33 'mvn clean install -f USD{current.project.path}'", "name": "build", "attributes": { "goal": "Build", "previewUrl": "" }, "type": "mvn" }, { "commandLine": "mvn -Duser.home=USD{HOME} -f USD{CHE_PROJECTS_ROOT}/vertx-http-booster clean package", "name": "vertx-http-booster:build", "attributes": { "goal": "Build", "previewUrl": "" }, "type": "mvn" }, { "commandLine": "mvn -Duser.home=USD{HOME} -f USD{CHE_PROJECTS_ROOT}/vertx-http-booster vertx:run", "name": "vertx-http-booster:run", "attributes": { "goal": "Run", "previewUrl": "USD{server.8080/tcp}" }, "type": "mvn" } ], "links": [] } CodeReady Workspaces 2.x devfile metadata: name: testing-workspace projects: - name: java-web-vertx source: location: 'https://github.com/che-samples/web-java-vertx' type: git components: - id: redhat/java/latest type: chePlugin - mountSources: true endpoints: - name: 8080/tcp port: 8080 memoryLimit: 512Mi type: dockerimage volumes: - name: m2 containerPath: /home/user/.m2 alias: maven image: 'quay.io/eclipse/che-java8-maven:nightly' apiVersion: 1.0.0 commands: - name: maven build actions: - workdir: 'USD{CHE_PROJECTS_ROOT}/java-web-vertx' type: exec command: 'mvn -Duser.home=USD{HOME} clean install' component: maven - name: run app actions: - workdir: 'USD{CHE_PROJECTS_ROOT}/java-web-vertx' type: exec command: > JDBC_URL=jdbc:h2:/tmp/db \ java -jar -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 \ ./target/*fat.jar component: maven - name: Debug remote java application actions: - referenceContent: | { "version": "0.2.0", "configurations": [ { "type": "java", "name": "Debug (Attach) - Remote", "request": "attach", "hostName": "localhost", "port": 5005 }] } type: vscode-launch 3.3.2. Converting a CodeReady Workspaces 1.x workspace to a basic devfile This section describes how to convert a CodeReady Workspaces 1.x workspace to a devfile. The result is a basic CodeReady Workspaces 2.x devfile that can be used for further workspace creation. 
Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure To convert a CodeReady Workspaces 1.x workspace to a devfile: Open a CodeReady Workspaces 1.x configuration file to identify which CodeReady Workspaces 1.x stack is used in the workspace. Below, there is a detailed guide for Section 3.3.3, "Accessing a CodeReady Workspaces 1.x workspace configuration" . Create a new workspace from the CodeReady Workspaces 2.x devfile that corresponds to the CodeReady Workspaces 1.x stack. Table 3.2. CodeReady Workspaces 1.x stacks and their corresponding CodeReady Workspaces 2.x devfiles CodeReady Workspaces 1.x stacks CodeReady Workspaces 2.x devfile Apache Camel based projects, Apache Camel based projects on CodeReady Workspaces 2.x Apache Camel based on Spring Boot .NET, .NET Core with Che-Theia IDE .NET Core Go, CentOS Go, Go with Che-Theia IDE Go Java Gradle Java Gradle Blank, Java, Java-MySQL, Red Hat CodeReady Workspaces, Java CentOS Java Maven Node, CentOS Node.js Node.js Express Web Application Python, Python with Che-Theia IDE Python Eclipse Vert.x Java Vert.x PHP PHP Simple Spring Boot Java Spring Boot By default, the example project is added to the workspace. To remove the default project, click the Remove button: To import a custom project that was used in CodeReady Workspaces 1.x workspace, click the Add or Import Project and select Git or GitHub option: Various commands can be added to devfiles of imported projects, for example, run , build , and test . The commands are then accessible from the IDE when a workspace is started. Custom commands and other devfile components can be added in the Devfile configuration. Click the Create & Proceed Editing button. Select the Devfile tab to update the configuration. Machine servers in CodeReady Workspaces 1.x workspaces can be specified as components endpoints in a Devfile and CodeReady Workspaces 1.x installers as components of the chePlugin type. See the Devfile specification for detailed information about the supported properties and attributes. Once the Devfile configuration is completed, click the Open button to start a newly created CodeReady Workspaces 2.x workspace. 3.3.3. Accessing a CodeReady Workspaces 1.x workspace configuration CodeReady Workspaces 1.x workspace configuration is not supported in CodeReady Workspaces 2.x, but it can be accessed for converting it to a devfile. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure To access the CodeReady Workspaces 1.x workspace configuration: In the Dashboard , click the Workspaces menu to open the workspaces list and locate the workspace to migrate to CodeReady Workspaces 2.x. In the Actions column, click the Configure workspace icon. The raw workspace configuration is available under the Config tab. 3.4. Creating and configuring a new CodeReady Workspaces 2.1 workspace 3.4.1. 
Creating a new workspace from the dashboard This procedure describes how to create and edit a new CodeReady Workspaces devfile using the Dashboard . Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . Procedure To edit the devfile: In the Workspaces window, click the Add Workspace button. In the SELECT STACK list, select one of the default stacks. Click the Create & Proceed Editing button. The Workspaces Configs page is shown. Change the workspace name and click the Devfile tab. Delete all the components and commands in the devfile to get an empty devfile. 3.4.2. Adding projects to your workspace Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure To add a project to your workspace: Click the Projects tab, and then click the Add Project button. Select the type of the project. Choose from: Samples , Blank , Git , GitHub , or Zip . Specify the required details for the project type that you selected, and click the Add button. To add another project to the workspace, click the Add Project button. After configuring the project for the workspace, check the change in the devfile, which is the configuration file of the workspace, by opening the Devfile tab. 3.4.3. Configuring the workspace and adding tools 3.4.3.1. Adding plug-ins CodeReady Workspaces 2.1 plug-ins replace CodeReady Workspaces 2.0 installers. The following table lists the CodeReady Workspaces 2.1 plug-ins that have replaced CodeReady Workspaces 2.0 installers. Table 3.3. CodeReady Workspaces 2.1 plug-ins that have replaced CodeReady Workspaces 2.0 installers CodeReady Workspaces 2.0 installer CodeReady Workspaces 2.1 plug-in org.eclipse.che.ws-agent Deprecated and not necessary org.eclipse.che.terminal Deprecated and not necessary anymore- org.eclipse.che.exec CodeReady Workspaces machine-exec Service org.eclipse.che.ls.java Language Support for Java Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure To add plug-ins to your workspace: Click the Plugins tab. Enable the plug-in that you want to add and click the Save button. 3.4.3.2. Defining the workspace editor Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure To define the editor to use with the workspace: Click the Editors tab. Note The recommended editor for CodeReady Workspaces 2.1 is Che-Theia. Enable the editor to add and click the Save button. Click the Devfile tab to view the changes. 3.4.3.3. 
Defining specific container images Procedure To add a new container image: Copy the following section from the devfile into components : - mountSources: true command: - sleep args: - infinity memoryLimit: 1Gi alias: maven3-jdk11 type: dockerimage endpoints: - name: 8080/tcp port: 8080 volumes: - name: projects containerPath: /projects image: 'maven:3.6.0-jdk-11' When using type: kubernetes or type: openshift , you must: Use separate recipe files. Note To use separate recipe files, the paths can be relative or absolute. For example: ... type: kubernetes reference: deploy_k8s.yaml ... ... type: openshift reference: deploy_openshift.yaml ... Alternatively, add the content as referenceContent (the referenceContent field replaces the CodeReady Workspaces 2.0 recipe content). Add a CodeReady Workspaces 2.0 recipe content to the CodeReady Workspaces 2.1 devfile as referenceContent : Click the Containers tab ( Workspace Details Containers ). Copy the CodeReady Workspaces 2.0 recipe, and paste it into the separate CodeReady Workspaces 2.1 component as a referenceContent . Set the type from the original CodeReady Workspaces 2.0 configuration. The following is an example of the resulting file: type: kubernetes referenceContent: | apiVersion: v1 kind: Pod metadata: name: ws spec: containers: - image: 'rhche/centos_jdk8:latest' name: dev resources: limits: memory: 512Mi Copy the required fields from the old workspace ( image , volumes , endpoints ). For example: Table 3.4. She 6 and She 7 equivalence table CodeReady Workspaces 2.0 workspace configuration CodeReady Workspaces 2.1 workspace devfile environments['defaultEnv'].machines['target'].servers components[n].endpoints environments['defaultEnv'].machines['machineName'].volumes components[n].volumes environments['defaultEnv'].recipe.type components[n].type environments['defaultEnv'].recipe.content components[n].image Change the memoryLimit and alias variables, if needed. Here, the field alias is used to set a name for the component. It is generated automatically from the image field, if not set. image: 'maven:3.6.0-jdk-11' alias: maven3-jdk11 Change the memoryLimit , memoryRequest , or both fields to specify the RAM required for the component. alias: maven3-jdk11 memoryLimit: 256M memoryRequest: 128M Open the Devfile tab to see the changes. Repeat the steps to add additional container images. 3.4.3.4. Adding commands to your workspace The following is a comparison between workspace configuration commands in CodeReady Workspaces 2.0 (Figure 1) and CodeReady Workspaces 2.1 (Figure 2): Figure 3.1. An example of the Workspace configuration commands in CodeReady Workspaces 2.0 Figure 3.2. An example of the Workspace configuration commands in CodeReady Workspaces 2.1 Table 3.5. She 6 and She 7 equivalence table CodeReady Workspaces 2.0 workspace configuration CodeReady Workspaces 2.1 workspace devfile environments['defaultEnv'].commands[n].name commands[n].name environments['defaultEnv'].commands[n].actions.command components[n].commandLine Procedure To define commands to your workspace, edit the workspace devfile: Add (or replace) the commands section with the first command. Change the name and the command fields from the original workspace configuration (see the preceding equivalence table). commands: - name: build actions: - type: exec command: mvn clean install Copy the following YAML code into the commands section to add a new command. 
Change the name and the command fields from the original workspace configuration (see the preceding equivalence table). - name: build and run actions: - type: exec command: mvn clean install && java -jar Optionally, add the component field into actions . This indicates the component alias where the command will be performed. Repeat step 2 to add more commands to the devfile. Click the Devfile tab to view the changes. Save changes and start the new CodeReady Workspaces 2.1 workspace. 3.5. Importing a OpenShift application into a workspace This section describes how to import a OpenShift application into a CodeReady Workspaces workspace. For demonstration purposes, the section uses a sample OpenShift application having the following two Pods: A Node.js application specified by this nodejs-app.yaml A MongoDB Pod specified by this mongo-db.yaml To run the application on a OpenShift cluster: To deploy a new instance of this application in a CodeReady Workspaces workspace, use one of the following three scenarios: Starting from scratch: Writing a new devfile Modifying an existing workspace: Using the Dashboard user interface From a running application: Generating a devfile with crwctl 3.5.1. Including a OpenShift application in a workspace devfile definition This procedure demonstrates how to define the CodeReady Workspaces 2.1 workspace devfile by OpenShift application. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . crwctl management tool is installed. See the CodeReady Workspaces 2.1 Installation GuideInstalling the crwctl management tool The devfile format is used to define a CodeReady Workspaces workspace, and its format is described in the Making a workspace portable using a devfile section. The following is an example of the simplest devfile: apiVersion: 1.0.0 metadata: name: minimal-workspace Only the name ( minimal-workspace ) is specified. After the CodeReady Workspaces server processes this devfile, the devfile is converted to a minimal CodeReady Workspaces workspace that only has the default editor (Che-Theia) and the default editor plug-ins (example: the terminal). Use the OpenShift type of components in the devfile to add OpenShift applications to a workspace. For example, the user can embed the NodeJS-Mongo application in the minimal-workspace defined in this paragraph by adding a components section. apiVersion: 1.0.0 metadata: name: minimal-workspace components: - type: kubernetes reference: https://raw.githubusercontent.com/.../mongo-db.yaml - alias: nodejs-app type: kubernetes reference: https://raw.githubusercontent.com/.../nodejs-app.yaml entrypoints: - command: ['sleep'] args: ['infinity'] Note that the sleep infinity command is added as the entrypoint of the Node.js application. This prevents the application from starting at the workspace start phase. It allows the user to start it when needed for testing or debugging purposes. 
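When the referenced YAML list defines more than one object, the entrypoint override does not have to apply to every container: the devfile reference also accepts parentName or parentSelector to scope it. The following sketch uses hypothetical object names and labels; adjust them to the objects that are actually present in the referenced file.

components:
  - alias: nodejs-app
    type: kubernetes
    reference: https://raw.githubusercontent.com/.../nodejs-app.yaml
    entrypoints:
      - parentName: nodejs-deployment       # hypothetical name of one object in the referenced list
        command: ['sleep']
        args: ['infinity']
      - parentSelector:
          app: web                          # hypothetical label selecting other objects
        command: ['sleep']
        args: ['infinity']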
To make it easier for a developer to test the application, add the commands in the devfile: apiVersion: 1.0.0 metadata: name: minimal-workspace components: - type: kubernetes reference: https://raw.githubusercontent.com/.../mongo-db.yaml - alias: nodejs-app type: kubernetes reference: https://raw.githubusercontent.com/.../nodejs-app.yaml entrypoints: - command: ['sleep'] args: ['infinity'] commands: - name: run actions: - type: exec component: nodejs-app command: cd USD{CHE_PROJECTS_ROOT}/nodejs-mongo-app/EmployeeDB/ && npm install && sed -i -- ''s/localhost/mongo/g'' app.js && node app.js Use this devfile to create and start a workspace with the crwctl command: The run command added to the devfile is available as a task in Che-Theia from the command palette. When executed, the command starts the Node.JS application. 3.5.2. Adding a OpenShift application to an existing workspace using the dashboard This procedure demonstrates how to modify an existing workspace and import the OpenShift application using the newly created devfile. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure After the creation of a workspace, use the Workspace menu and then the Configure workspace icon to manage the workspace. To modify the workspace details, use the Devfile tab. The workspace details are displayed in this tab in the devfile format. To add a OpenShift component, use the Devfile editor on the dashboard. For the changes to take effect, save the devfile and restart the CodeReady Workspaces workspace. 3.5.3. Generating a devfile from an existing OpenShift application This procedure demonstrates how to generate a devfile from an existing OpenShift application using the crwctl tool. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . crwctl management tool is installed. See the CodeReady Workspaces 2.1 Installation GuideInstalling the crwctl management tool Procedure Use the crwctl devfile:generate command to generate a devfile: The user can also use the crwctl devfile:generate command to generate a devfile from, for example, the NodeJS-MongoDB application. The following example generates a devfile that includes the NodeJS component: The Node.js application YAML definition is included in the devfile, inline, using the referenceContent attribute. To include support for a language, use the --language parameter: Use the generated devfile to start a CodeReady Workspaces workspace with crwctl . 3.6. Remotely accessing workspaces This section describes how to remotely access CodeReady Workspaces workspaces outside of the browser. CodeReady Workspaces workspaces exist as containers and are, by default, modified from a browser window. In addition to this, there are the following methods of interacting with a CodeReady Workspaces workspace: Opening a command line in the workspace container using the OpenShift command-line tool, kubectl Uploading and downloading files using the kubectl tool 3.6.1. 
Remotely accessing workspaces using the OpenShift command-line tool To access CodeReady Workspaces workspaces remotely using OpenShift command-line tool ( kubectl ), follow the instructions in this section. Note The kubectl tool is used in this section to open a shell and manage files in a CodeReady Workspaces workspace. Alternatively, it is possible to use the oc OpenShift command-line tool. Prerequisites The kubectl binary file from the OpenShift website . Verify the installation of kubectl using the oc version command: For versions 1.5.0 or higher, proceed with the steps in this section. Procedure Use the exec command to open a remote shell. To find the name of the OpenShift namespace and the Pod that runs the CodeReady Workspaces workspace: In the example above, the Pod name is workspace7b2wemdf3hx7s3ln.maven-74885cf4d5-kf2q4 , and the namespace is codeready . To find the name of the container: When you have the namespace, pod name, and the name of the container, use the kubectl command to open a remote shell: From the container, execute the build and run commands (as if from the CodeReady Workspaces workspace terminal): Additional resources For more about kubectl , see the OpenShift documentation . 3.6.2. Downloading and uploading a file to a workspace using the command-line interface This procedure describes how to use the kubectl tool to download or upload files remotely from or to an Red Hat CodeReady Workspaces workspace. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . Remote access to the CodeReady Workspaces workspace you intend to modify. For instructions see Section 3.6.1, "Remotely accessing workspaces using the OpenShift command-line tool" . The kubectl binary file from the OpenShift website . Verify the installation of kubectl using the oc version command: Procedure To download a local file named downloadme.txt from a workspace container to the current home directory of the user, use the following in the CodeReady Workspaces remote shell. To upload a local file named uploadme.txt to a workspace container in the /projects directory: Using the preceding steps, the user can also download and upload directories. 3.7. Creating a workspace from code sample Every stack includes a sample codebase, which is defined by the devfile of the stack. This section explains how to create a workspace from this code sample in a sequence of three procedures. Creating a workspace from the user dashboard: Using the Get Started view . Using the Custom Workspace view . Changing the configuration of the workspace to add code sample. Running an existing workspace from the user dashboard . For more information about devfiles, see Configuring a CodeReady Workspaces workspace using a devfile . 3.7.1. Creating a workspace from Get Started view of User Dashboard This section describes how to create a workspace from the User Dashboard. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts Procedure Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the Dashboard . In the left navigation panel, go to Get Started . Click the Get Started tab. In the gallery, there is list of samples that may be used to build and run projects. 
Changing resource limits Changing the memory requirements is only possible from the devfile . Start the workspace: click the chosen stack card. New workspace name Workspace name can be auto-generated based on the underlying devfile of the stack. Generated names always consist of the devfile metadata.generateName property as the prefix and four random characters. 3.7.2. Creating a workspace from Custom Workspace view of User Dashboard This section describes how to create a workspace from the User Dashboard. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces quick-starts Procedure Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the Dashboard . In the left navigation panel, go to Get Started . Click the Custom Workspace tab. Define a Name for the workspace. New workspace name Workspace name can be auto-generated based on the underlying devfile of the stack. Generated names always consist of the devfile metadata.generateName property as the prefix and four random characters. In the Devfile section, select the devfile template that will be used to build and run projects. Changing resource limits Changing the memory requirements is only possible from the devfile . Start the workspace: click the Create & Open button at the bottom of the form: 3.7.3. Changing the configuration of an existing workspace This section describes how to change the configuration of an existing workspace from the User Dashboard. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the dashboard . In the left navigation panel, go to Workspaces . Click the name of a workspace to navigate to the configuration overview page. Click the Overview tab and execute following actions: Change the Workspace name . Toggle Ephemeral mode . Export the workspace configuration to a file or private cloud. Delete the workspace. In the Projects section, choose the projects to integrate in the workspace. Click the Add Project button and do one of the following: Enter the project Git repository URL to integrate in the workspace: Connect your GitHub account and select projects to integrate: Click the Add button. In the Plugins section, choose the plug-ins to integrate in the workspace. Example Start with a generic Java-based stack, then add support for Node.js or Python. In the Editors section, choose the editors to integrate in the workspace. The CodeReady Workspaces 2.1 editor is based on Che-Theia. Example: Switch to the CodeReady Workspaces 1.x editor To switch to the CodeReady Workspaces 1.x editor, select the GWT IDE. From the Devfile tab, edit YAML configuration of the workspace. See the Devfile reference . Example: add commands Example: add a project To add a project into the workspace, add or edit the following section: projects: - name: che source: type: git location: 'https://github.com/eclipse/che.git' 3.7.4. Running an existing workspace from the User Dashboard This section describes how to run an existing workspace from the User Dashboard. 3.7.4.1. 
Running an existing workspace from the User Dashboard with the Run button This section describes how to run an existing workspace from the User Dashboard using the Run button. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the dashboard . In the left navigation panel, navigate to Workspaces . Click on the name of a non-running workspace to navigate to the overview page. Click on the Run button in the top right corner of the page. The workspace is started. The browser does not navigates to the workspace. 3.7.4.2. Running an existing workspace from the User Dashboard using the Open button This section describes how to run an existing workspace from the User Dashboard using the Open button. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the dashboard . In the left navigation panel, navigate to Workspaces . Click on the name of a non-running workspace to navigate to the overview page. Click on the Open button in the top right corner of the page. The workspace is started. The browser navigates to the workspace. 3.7.4.3. Running an existing workspace from the User Dashboard using the Recent Workspaces This section describes how to run an existing workspace from the User Dashboard using the Recent Workspaces. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . Procedure Navigate to the CodeReady Workspaces Dashboard. See Navigating CodeReady Workspaces using the dashboard . In the left navigation panel, in the Recent Workspaces section, right-click the name of a non-running workspace and click Run in the contextual menu to start it. 3.8. Creating a workspace by importing the source code of a project This section describes how to create a new workspace to edit an existing codebase. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . An existing workspace with plug-ins related to your development environment defined on this instance of Red Hat CodeReady Workspaces Creating a workspace from user dashboard . 
There are two ways to do that before starting a workspace: Select a stack from the Dashboard , then change the devfile to include your project Add a devfile to a git repository and start the workspace using crwctl or a factory To create a new workspace to edit an existing codebase, use one of the following three methods after you have started the workspace: Import from the Dashboard into an existing workspace Import to a running workspace using the git clone command Import to a running workspace using git clone in a terminal 3.8.1. Importing from the Dashboard into an existing workspace Import the project. There are at least two ways to import a project using the Dashboard . From the Dashboard , select Workspaces , then select your workspace by clicking on its name. This will link you to the workspace's Overview tab. Or, use the gear icon. This will link to the Devfile tab where you can enter your own YAML configuration. Click the Projects tab. Click Add Project . You can then import project by a repository Git URL or from GitHub. Note You can add a project to a non-running workspace, but you must start the workspace to delete it. 3.8.1.1. Editing an existing repository To edit an existing repository: Choose the Git project or zip file, and CodeReady Workspaces will load it into your workspace. To open the workspace, click the Open button. 3.8.1.2. Editing the commands after importing a project After you have a project in your workspace, you can add commands to it. Adding commands to your projects allows you to run, debug, or launch your application in a browser. To add commands to the project: Open the workspace configuration in the Dashboard , then select the Devfile tab. Open the workspace. To run a command, select Terminal > Run Task from the main menu. To configure commands, select Terminal > Configure Tasks from the main menu. 3.8.2. Importing to a running workspace using the Git: Clone command To import to a running workspace using the Git: Clone command: Start a workspace, then use the Git: Clone command from the command palette or the Welcome screen to import a project to a running workspace. Open the command palette using F1 or CTRL-SHIFT-P , or from the link in the Welcome screen. Enter the path to the project you want to clone. 3.8.3. Importing to a running workspace with git clone in a terminal In addition to the approaches above, you can also start a workspace, open a Terminal , and type git clone to pull code. Note Importing or deleting workspace projects in the terminal does not update the workspace configuration, and the change is not reflected in the Project and Devfile tabs in the dashboard. Similarly, if you add a project using the Dashboard , then delete it with rm -fr myproject , it may still appear in the Projects or Devfile tab. 3.9. Configuring workspace exposure strategies The following section describes how to configure workspace exposure strategies of a CodeReady Workspaces server and ensure that applications running inside are not vulnerable to outside attacks. The workspace exposure strategy is configured per CodeReady Workspaces server, using the che.infra.kubernetes.server_strategy configuration property or the CHE_INFRA_KUBERNETES_SERVER__STRATEGY environment variable. 
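As a sketch only, the environment variable form can be applied to a directly managed server Deployment with the oc client. The Deployment name codeready and the namespace workspaces below are assumptions that depend on how the server was installed, and an Operator-managed installation may reconcile such a change away, in which case the property belongs in the installation configuration instead:

# hypothetical Deployment and namespace names
oc set env deployment/codeready CHE_INFRA_KUBERNETES_SERVER__STRATEGY=multi-host -n workspaces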
The supported values for che.infra.kubernetes.server_strategy are: multi-host For the multi-host strategy, set the che.infra.kubernetes.ingress.domain (or the CHE_INFRA_KUBERNETES_INGRESS_DOMAIN environment variable) configuration property to the domain name that will host workspace component subdomains. 3.9.1. Workspace exposure strategies Specific components of workspaces need to be made accessible outside of the OpenShift cluster. This is typically the user interface of the workspace's IDE, but it can also be the web UI of the application being developed. This enables developers to interact with the application during the development process. CodeReady Workspaces supports three ways to make workspace components available to the users, also referred to as strategies : multi-host strategy The strategies define whether new subdomains are created for components of the workspace, and what hosts these components are available on. 3.9.1.1. Multi-host strategy With this strategy, each workspace component is assigned a new subdomain of the main domain configured for the CodeReady Workspaces server. On OpenShift, this is the only possible strategy, and manual configuration of the workspace exposure strategy is therefore always ignored. This strategy is the easiest to understand from the perspective of component deployment because any paths present in the URL to the component are received as they are by the component. On a CodeReady Workspaces server secured using the Transport Layer Security (TLS) protocol, creating new subdomains for each component of each workspace requires a wildcard certificate to be available for all such subdomains for the CodeReady Workspaces deployment to be practical. 3.9.2. Security considerations This section explains the security impact of using different CodeReady Workspaces workspace exposure strategies. All the security-related considerations in this section are only applicable to CodeReady Workspaces in multiuser mode. The single user mode does not impose any security restrictions. 3.9.2.1. JSON web token (JWT) proxy All CodeReady Workspaces plug-ins, editors, and components can require authentication of the user accessing them. This authentication is performed using a JSON web token (JWT) proxy that functions as a reverse proxy of the corresponding component, based on its configuration, and performs the authentication on behalf of the component. The authentication uses a redirect to a special page on the CodeReady Workspaces server that propagates the workspace and user-specific authentication token (workspace access token) back to the originally requested page. The JWT proxy accepts the workspace access token from the following places in the incoming requests, in the following order: The token query parameter The Authorization header in the bearer-token format The access_token cookie 3.9.2.2. Secured plug-ins and editors CodeReady Workspaces users do not need to secure workspace plug-ins and workspace editors (such as Che-Theia). This is because the JWT proxy authentication is transparent to the user and is governed by the plug-in or editor definition in their meta.yaml descriptors. 3.9.2.3. Secured container-image components Container-image components can define custom endpoints for which the devfile author can require CodeReady Workspaces-provided authentication, if needed. 
This authentication is configured using two optional attributes of the endpoint: secure - A boolean attribute that instructs the CodeReady Workspaces server to put the JWT proxy in front of the endpoint. Such endpoints have to be provided with the workspace access token in one of the ways explained in Section 3.9.2.1, "JSON web token (JWT) proxy" . The default value of the attribute is false . cookiesAuthEnabled - A boolean attribute that instructs the CodeReady Workspaces server to automatically redirect unauthenticated requests for current user authentication as described in Section 3.9.2.1, "JSON web token (JWT) proxy" . Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false . 3.9.2.4. Cross-site request forgery attacks Cookie-based authentication can make an application secured by a JWT proxy prone to Cross-site request forgery (CSRF) attacks. See the Cross-site request forgery Wikipedia page and other resources to ensure your application is not vulnerable. 3.9.2.5. Phishing attacks An attacker who is able to create an Ingress or Route inside the cluster that shares the host with workspace services protected by a JWT proxy may be able to create a service and a specially forged Ingress object. When such a service or Ingress is accessed by a legitimate user that was previously authenticated with a workspace, it can lead to the attacker stealing the workspace access token from the cookies sent by the legitimate user's browser to the forged URL. To eliminate this attack vector, configure OpenShift to disallow setting the host of an Ingress. 3.10. Mounting a secret as a file or an environment variable into a workspace container Secrets are OpenShift objects that store sensitive data such as user names, passwords, authentication tokens, and configurations in an encrypted form. Users can mount a secret that contains sensitive data in a workspace container. This reapplies the stored data from the secret automatically for every newly created workspace. As a result, the user does not have to provide these credentials and configuration settings manually. The following section describes how to automatically mount an OpenShift secret in a workspace container and create permanent mount points for components such as: Maven configuration, the settings.xml file SSH key pairs AWS authorization tokens An OpenShift secret can be mounted into a workspace container as: A file - This creates automatically mounted Maven settings that will be applied to every new workspace with Maven capabilities. An environment variable - This uses SSH key pairs and AWS authorization tokens for automatic authentication. Note SSH key pairs can also be mounted as a file, but this format is primarily aimed at the settings of the Maven configuration. The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling to properly bind a secret to the required CodeReady Workspaces workspace container. 3.10.1. Mounting a secret as a file into a workspace container Warning Red Hat CodeReady Workspaces uses the OpenShift VolumeMount subPath feature to mount files into containers. This is supported and enabled by default since Kubernetes v1.15 and OpenShift 4. This section describes how to mount a secret from the user's namespace as a file in single-workspace or multiple-workspace containers of CodeReady Workspaces.
Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . Procedure Create a new OpenShift secret in the OpenShift namespace where a CodeReady Workspaces workspace will be created. The labels of the secret that is about to be created must match the set of labels configured in che.workspace.provision.secret.labels property of CodeReady Workspaces. The default labels are: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret : Note Note that the following example describes variations in the usage of the target-container annotation in versions 2.1 and 2.2 of Red Hat CodeReady Workspaces. Example: apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret ... Annotations must indicate the given secret is mounted as a file, provide the mount path, and, optionally, specify the name of the container in which the secret is mounted. If there is no target-container annotation, the secret will be mounted into all user containers of the CodeReady Workspaces workspace, but this is applicable only for the CodeReady Workspaces version 2.1 . apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/target-container: maven che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels: ... Since the CodeReady Workspaces version 2.2 , the target-container annotation is deprecated and automount-workspace-secret annotation with Boolean values is introduced. Its purpose is to define the default secret mounting behavior, with the ability to be overridden in a devfile. The true value enables the automatic mounting into all workspace containers. In contrast, the false value disables the mounting process until it is explicitly requested in a devfile component using the automountWorkspaceSecrets:true property. apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels: ... Data of the Kubernetes secret may contain several items, whose names must match the desired file name mounted into the container. apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file data: settings.xml: <base64 encoded data content here> This results in a file named settings.xml being mounted at the /home/user/.m2/ path of all workspace containers. The secret-s mount path can be overridden for specific components of the workspace using devfile. To change mount path, an additional volume should be declared in a component of the devfile, with name matching overridden secret name, and desired mount path. apiVersion: 1.0.0 metadata: ... components: - type: dockerimage alias: maven image: maven:3.11 volumes: - name: <secret-name> containerPath: /my/new/path ... Note that for this kind of overrides, components must declare an alias to be able to distinguish containers which belong to them and apply override path exclusively for those containers. 3.10.2. 
Mounting a secret as an environment variable into a workspace container The following section describes how to mount a OpenShift secret from the user's namespace as an environment variable, or variables, into single-workspace or multiple-workspace containers of CodeReady Workspaces. Prerequisites A running instance of Red Hat CodeReady Workspaces. To install an instance of Red Hat CodeReady Workspaces, see the CodeReady Workspaces 2.1 Installation GuideCodeReady Workspaces 'quick-starts' . Procedure Create a new OpenShift secret in the k8s namespace where a CodeReady Workspaces workspace will be created. The labels of the secret that is about to be created must match the set of labels configured in che.workspace.provision.secret.labels property of CodeReady Workspaces. By default, it is a set of two labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret : Note Note that the following example describes variations in the usage of the target-container annotation in versions 2.1 and 2.2 of Red Hat CodeReady Workspaces. Example: apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret ... Annotations must indicate the given secret is mounted as a file, provide the mount path, and, optionally, specify the name of the container in which the secret is mounted. If there is no target-container annotation, the secret will be mounted into all user containers of the CodeReady Workspaces workspace, but this is applicable only for the CodeReady Workspaces version 2.1 . apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/target-container: maven che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels: ... Since the CodeReady Workspaces version 2.2 , the target-container annotation is deprecated and automount-workspace-secret annotation with Boolean values is introduced. Its purpose is to define the default secret mounting behavior, with the ability to be overridden in a devfile. The true value enables the automatic mounting into all workspace containers. In contrast, the false value disables the mounting process until it is explicitly requested in a devfile component using the automountWorkspaceSecrets:true property. apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels: ... Data of the Kubernetes secret may contain several items, whose names must match the desired file name mounted into the container. apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file data: settings.xml: <base64 encoded data content here> This results in a file named settings.xml being mounted at the /home/user/.m2/ path of all workspace containers. The secret-s mount path can be overridden for specific components of the workspace using devfile. To change mount path, an additional volume should be declared in a component of the devfile, with name matching overridden secret name, and desired mount path. apiVersion: 1.0.0 metadata: ... 
components: - type: dockerimage alias: maven image: maven:3.11 volumes: - name: <secret-name> containerPath: /my/new/path ... Note that for this kind of overrides, components must declare an alias to be able to distinguish containers which belong to them and apply override path exclusively for those containers. 3.10.3. The use of annotations in the process of mounting a secret into a workspace container OpenShift annotations and labels are tools used by libraries, tools, and other clients, to attach arbitrary non-identifying metadata to OpenShift native objects. Labels select objects and connect them to a collection that satisfies certain conditions, where annotations are used for non-identifying information that is not used by OpenShift objects internally. This section describes OpenShift annotation values used in the process of OpenShift secret mounting in a CodeReady Workspaces workspace. Annotations must contain items that help identify the proper mounting configuration. These items are: che.eclipse.org/target-container : Valid till the version 2.1 . The name of the mounting container. If the name is not defined, the secret mounts into all user's containers of the CodeReady Workspaces workspace. che.eclipse.org/automount-workspace-secret : Introduced in the version 2.2. . The main mount selector. When set to true , the secret mounts into all user's containers of the CodeReady Workspaces workspace. When set to false , the secret does not mount into containers by default. The value of this attribute can be overridden in devfile components, using the automountWorkspaceSecrets boolean property that gives more flexibility to workspace owners. This property requires an alias to be defined for the component that uses it. che.eclipse.org/env-name : The name of the environment variable that is used to mount a secret. che.eclipse.org/mount-as : This item describes if a secret will be mounted as an environmental variable or a file. Options: env or file . che.eclipse.org/ <mykeyName> -env-name: FOO_ENV : The name of the environment variable used when data contains multiple items. mykeyName is used as an example. | [
"https://che.openshift.io/f?url=https://github.com/eclipse/che",
"https://che.openshift.io/f?url=https://github.com/maxandersen/quarkus-quickstarts/tree/che",
"https://che.openshift.io/f?url=https://gist.githubusercontent.com/themr0c/ef8e59a162748a8be07e900b6401e6a8/raw/8802c20743cde712bbc822521463359a60d1f7a9/devfile.yaml",
"--- apiVersion: 1.0.0 metadata: generateName: golang- projects:",
"https://che.openshift.io/f?url=https://gist.githubusercontent.com/themr0c/ef8e59a162748a8be07e900b6401e6a8/raw/8802c20743cde712bbc822521463359a60d1f7a9/devfile.yaml&override.metadata.generateName=myprefix",
"--- apiVersion: 1.0.0 metadata: generateName: myprefix projects:",
"--- apiVersion: 1.0.0 metadata: generateName: java-mysql- projects: - name: web-java-spring-petclinic source: type: git location: \"https://github.com/spring-projects/spring-petclinic.git\"",
"https://che.openshift.io/f?url=https://gist.githubusercontent.com/themr0c/ef8e59a162748a8be07e900b6401e6a8/raw/8802c20743cde712bbc822521463359a60d1f7a9/devfile.yaml&override.projects.web-java-spring-petclinic.source.branch=1.0.x",
"apiVersion: 1.0.0 metadata: generateName: java-mysql- projects: - name: web-java-spring-petclinic source: type: git location: \"https://github.com/spring-projects/spring-petclinic.git\" branch: 1.0.x",
"--- apiVersion: 1.0.0 metadata: generateName: golang- attributes: persistVolumes: false projects:",
"https://che.openshift.io/f?url=https://gist.githubusercontent.com/themr0c/ef8e59a162748a8be07e900b6401e6a8/raw/8802c20743cde712bbc822521463359a60d1f7a9/devfile.yaml&override.attributes.persistVolumes=true",
"--- apiVersion: 1.0.0 metadata: generateName: golang- attributes: persistVolumes: true projects:",
"https://che.openshift.io/f?url=https://gist.githubusercontent.com/themr0c/ef8e59a162748a8be07e900b6401e6a8/raw/8802c20743cde712bbc822521463359a60d1f7a9/devfile.yaml&override.attributes.dot.name.format.attribute=true",
"--- apiVersion: 1.0.0 metadata: generateName: golang- attributes: dot.name.format.attribute: true projects:",
"crwctl workspace:start --devfile=devfile.yaml",
"apiVersion: 1.0.0 metadata: name: che-in-che-out",
"apiVersion: 1.0.0 metadata: generateName: che-",
"apiVersion: 1.0.0 metadata: name: minimal-workspace",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/spring-projects/spring-petclinic.git' components: - type: chePlugin id: redhat/java/latest",
"apiVersion: 1.0.0 metadata: name: example-devfile projects: - name: frontend source: type: git location: https://github.com/acmecorp/frontend.git - name: backend clonePath: src/github.com/acmecorp/backend source: type: git location: https://github.com/acmecorp/backend.git",
"source: type: git location: https://github.com/eclipse/che.git startPoint: master 1 tag: 7.2.0 commitId: 36fe587 branch: master sparseCheckoutDir: wsmaster 2",
"source: type: zip location: http://host.net/path/project-src.zip",
"apiVersion: 1.0.0 metadata: name: my-project-dev projects: - name: my-project-resourse clonePath: resources/my-project source: type: zip location: http://host.net/path/project-res.zip - name: my-project source: type: git location: https://github.com/my-org/project.git branch: develop",
"components: - alias: theia-editor type: cheEditor id: eclipse/che-theia/next",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1",
"components: - alias: exec-plugin type: chePlugin registryUrl: https://my-customregistry.com id: eclipse/che-machine-exec-plugin/0.0.1",
"components: - alias: exec-plugin type: chePlugin reference: https://raw.githubusercontent.com.../plugin/1.0.1/meta.yaml",
"id: redhat/java/0.38.0 type: chePlugin preferences: java.jdt.ls.vmargs: '-noverify -Xmx1G -XX:+UseG1GC -XX:+UseStringDeduplication'",
"id: redhat/java/0.38.0 type: chePlugin preferences: go.lintFlags: [\"--enable-all\", \"--new\"]",
"components: - alias: mysql type: kubernetes reference: petclinic.yaml selector: app.kubernetes.io/name: mysql app.kubernetes.io/component: database app.kubernetes.io/part-of: petclinic",
"components: - alias: mysql type: kubernetes reference: petclinic.yaml referenceContent: | kind: List items: - apiVersion: v1 kind: Pod metadata: name: ws spec: containers: ... etc",
"components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml entrypoints: - parentName: mysqlServer command: ['sleep'] args: ['infinity'] - parentSelector: app: prometheus args: ['-f', '/opt/app/prometheus-config.yaml']",
"components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml env: - name: ENV_VAR value: value",
"components: - alias: appDeployment type: kubernetes reference: app-deployment.yaml mountSources: true",
"components: - alias: maven type: dockerimage image: eclipe/maven-jdk8:latest volumes: - name: mavenrepo containerPath: /root/.m2 env: - name: ENV_VAR value: value endpoints: - name: maven-server port: 3101 attributes: protocol: http secure: 'true' public: 'true' discoverable: 'false' memoryLimit: 1536M command: ['tail'] args: ['-f', '/dev/null']",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: type: dockerimage image: golang memoryLimit: 512Mi command: ['sleep', 'infinity']",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity']",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] volumes: - name: cache containerPath: /.cache",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/next env: - name: HOME value: USD(CHE_PROJECTS_ROOT) volumes: - name: cache containerPath: /.cache",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: openshift alias: mongo reference: mongo-db.yaml volumes: - name: mongo-persistent-storage containerPath: /data/db",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 memoryLimit: 1Gi - type: dockerimage image: eclipe/maven-jdk8:latest memoryLimit: 512M",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 memoryLimit: 1Gi memoryRequest: 512M - type: dockerimage image: eclipe/maven-jdk8:latest memoryLimit: 512M memoryRequest: 256M",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 cpuLimit: 1.5 - type: dockerimage image: eclipe/maven-jdk8:latest cpuLimit: 750m",
"components: - alias: exec-plugin type: chePlugin id: eclipse/che-machine-exec-plugin/0.0.1 cpuLimit: 1.5 cpuRequest: 0.225 - type: dockerimage image: eclipe/maven-jdk8:latest cpuLimit: 750m cpuRequest: 450m",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - type: cheEditor alias: theia-editor id: eclipse/che-theia/next memoryLimit: 2Gi env: - name: HOME value: USD(CHE_PROJECTS_ROOT)",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache endpoints: - name: web port: 8080 attributes: discoverable: false public: true protocol: http - type: dockerimage image: postgres memoryLimit: 512Mi env: - name: POSTGRES_USER value: user - name: POSTGRES_PASSWORD value: password - name: POSTGRES_DB value: database endpoints: - name: postgres port: 5432 attributes: discoverable: true public: false",
"apiVersion: 1.0.0 metadata: name: MyDevfile components: - type: cheEditor alias: theia-editor id: eclipse/che-theia/next endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: chePlugin id: redhat/php/latest memoryLimit: 1Gi endpoints: - name: 'php-endpoint' port: 7777 - type: chePlugin alias: theia-editor id: eclipse/che-theia/next endpoints: - name: 'theia-extra-endpoint' port: 8880 attributes: discoverable: true public: true - type: openshift alias: webapp reference: webapp.yaml endpoints: - name: 'web' port: 8080 attributes: discoverable: false public: true protocol: http - type: openshift alias: mongo reference: mongo-db.yaml endpoints: - name: 'mongo-db' port: 27017 attributes: discoverable: true public: false",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml",
"apiVersion: v1 kind: List items: - apiVersion: v1 kind: Deployment metadata: name: postgres labels: app: postgres spec: template: metadata: name: postgres app: name: postgres spec: containers: - image: postgres name: postgres ports: - name: postgres containerPort: 5432 volumeMounts: - name: pg-storage mountPath: /var/lib/postgresql/data volumes: - name: pg-storage persistentVolumeClaim: claimName: pg-storage - apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres name: postgres spec: ports: - port: 5432 targetPort: 5432 selector: app: postgres - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pg-storage labels: app: postgres spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: kubernetes reference: ../relative/path/postgres.yaml selector: app: postgres",
"commands: - name: build actions: - type: exec component: mysql command: mvn clean workdir: /projects/spring-petclinic",
"The commands are run using the default shell in the container.",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git components: - type: dockerimage image: golang alias: go-cli memoryLimit: 512Mi mountSources: true command: ['sleep', 'infinity'] env: - name: GOPATH value: USD(CHE_PROJECTS_ROOT)/go - name: GOCACHE value: /tmp/go-cache commands: - name: compile and run actions: - type: exec component: go-cli command: \"go get -d && go run main.go\" workdir: \"USD{CHE_PROJECTS_ROOT}/src/github.com/acme/my-go-project\"",
"apiVersion: 1.0.0 metadata: name: MyDevfile projects: - name: my-go-project clonePath: go/src/github.com/acme/my-go-project source: type: git location: https://github.com/acme/my-go-project.git commands: - name: tasks actions: - type: vscode-task referenceContent: > { \"version\": \"2.0.0\", \"tasks\": [ { \"label\": \"create test file\", \"type\": \"shell\", \"command\": \"touch USD{workspaceFolder}/test.file\" } ] }",
"commands: - name: tasks previewUrl: port: 8080 1 path: /myweb 2 actions: - type: exec component: go-cli command: \"go run webserver.go\" workdir: USD{CHE_PROJECTS_ROOT}/webserver",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment components: - alias: myApp type: kubernetes local: my-app.yaml attributes: editorFree: true",
"apiVersion: 1.0.0 metadata: name: petclinic-dev-environment projects: - name: petclinic source: type: git location: 'https://github.com/che-samples/web-java-spring-petclinic.git' attributes: persistVolumes: false",
"https://che.openshift.io/f?url=https://raw.githubusercontent.com/redhat-developer/devfile/master/getting-started/vertx/devfile.yaml",
"{ \"defaultEnv\": \"default\", \"environments\": { \"default\": { \"machines\": { \"dev-machine\": { \"attributes\": { \"memoryLimitBytes\": \"2147483648\" }, \"servers\": { \"8080/tcp\": { \"attributes\": {}, \"port\": \"8080\", \"protocol\": \"http\" } }, \"volumes\": {}, \"installers\": [ \"com.redhat.oc-login\", \"com.redhat.bayesian.lsp\", \"org.eclipse.che.ls.java\", \"org.eclipse.che.ws-agent\", \"org.eclipse.che.exec\", \"org.eclipse.che.terminal\" ], \"env\": {} } }, \"recipe\": { \"type\": \"dockerimage\", \"content\": \"quay.io/openshiftio/che-vertx\" } } }, \"projects\": [ { \"links\": [], \"name\": \"vertx-http-booster\", \"attributes\": { \"language\": [ \"java\" ] }, \"type\": \"maven\", \"source\": { \"location\": \"https://github.com/openshiftio-vertx-boosters/vertx-http-booster\", \"type\": \"git\", \"parameters\": {} }, \"path\": \"/vertx-http-booster\", \"description\": \"HTTP Vert.x Booster\", \"problems\": [], \"mixins\": [] } ], \"name\": \"wksp-jhwp\", \"commands\": [ { \"commandLine\": \"scl enable rh-maven33 'mvn compile vertx:debug -f USD{current.project.path} -Dvertx.disableDnsResolver=true'\", \"name\": \"debug\", \"attributes\": { \"goal\": \"Debug\", \"previewUrl\": \"USD{server.8080/tcp}\" }, \"type\": \"custom\" }, { \"commandLine\": \"scl enable rh-maven33 'mvn compile vertx:run -f USD{current.project.path} -Dvertx.disableDnsResolver=true'\", \"name\": \"run\", \"attributes\": { \"goal\": \"Run\", \"previewUrl\": \"USD{server.8080/tcp}\" }, \"type\": \"custom\" }, { \"commandLine\": \"scl enable rh-maven33 'mvn clean install -f USD{current.project.path}'\", \"name\": \"build\", \"attributes\": { \"goal\": \"Build\", \"previewUrl\": \"\" }, \"type\": \"mvn\" }, { \"commandLine\": \"mvn -Duser.home=USD{HOME} -f USD{CHE_PROJECTS_ROOT}/vertx-http-booster clean package\", \"name\": \"vertx-http-booster:build\", \"attributes\": { \"goal\": \"Build\", \"previewUrl\": \"\" }, \"type\": \"mvn\" }, { \"commandLine\": \"mvn -Duser.home=USD{HOME} -f USD{CHE_PROJECTS_ROOT}/vertx-http-booster vertx:run\", \"name\": \"vertx-http-booster:run\", \"attributes\": { \"goal\": \"Run\", \"previewUrl\": \"USD{server.8080/tcp}\" }, \"type\": \"mvn\" } ], \"links\": [] }",
"metadata: name: testing-workspace projects: - name: java-web-vertx source: location: 'https://github.com/che-samples/web-java-vertx' type: git components: - id: redhat/java/latest type: chePlugin - mountSources: true endpoints: - name: 8080/tcp port: 8080 memoryLimit: 512Mi type: dockerimage volumes: - name: m2 containerPath: /home/user/.m2 alias: maven image: 'quay.io/eclipse/che-java8-maven:nightly' apiVersion: 1.0.0 commands: - name: maven build actions: - workdir: 'USD{CHE_PROJECTS_ROOT}/java-web-vertx' type: exec command: 'mvn -Duser.home=USD{HOME} clean install' component: maven - name: run app actions: - workdir: 'USD{CHE_PROJECTS_ROOT}/java-web-vertx' type: exec command: > JDBC_URL=jdbc:h2:/tmp/db java -jar -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 ./target/*fat.jar component: maven - name: Debug remote java application actions: - referenceContent: | { \"version\": \"0.2.0\", \"configurations\": [ { \"type\": \"java\", \"name\": \"Debug (Attach) - Remote\", \"request\": \"attach\", \"hostName\": \"localhost\", \"port\": 5005 }] } type: vscode-launch",
"- mountSources: true command: - sleep args: - infinity memoryLimit: 1Gi alias: maven3-jdk11 type: dockerimage endpoints: - name: 8080/tcp port: 8080 volumes: - name: projects containerPath: /projects image: 'maven:3.6.0-jdk-11'",
"type: kubernetes reference: deploy_k8s.yaml",
"type: openshift reference: deploy_openshift.yaml",
"type: kubernetes referenceContent: | apiVersion: v1 kind: Pod metadata: name: ws spec: containers: - image: 'rhche/centos_jdk8:latest' name: dev resources: limits: memory: 512Mi",
"image: 'maven:3.6.0-jdk-11' alias: maven3-jdk11",
"alias: maven3-jdk11 memoryLimit: 256M memoryRequest: 128M",
"commands: - name: build actions: - type: exec command: mvn clean install",
"- name: build and run actions: - type: exec command: mvn clean install && java -jar",
"node=https://raw.githubusercontent.com/redhat-developer/devfile/master/samples/web-nodejs-with-db-sample/nodejs-app.yaml && mongo=https://raw.githubusercontent.com/redhat-developer/devfile/master/samples/web-nodejs-with-db-sample/mongo-db.yaml && apply -f USD{mongo} && apply -f USD{node}",
"apiVersion: 1.0.0 metadata: name: minimal-workspace",
"apiVersion: 1.0.0 metadata: name: minimal-workspace components: - type: kubernetes reference: https://raw.githubusercontent.com/.../mongo-db.yaml - alias: nodejs-app type: kubernetes reference: https://raw.githubusercontent.com/.../nodejs-app.yaml entrypoints: - command: ['sleep'] args: ['infinity']",
"apiVersion: 1.0.0 metadata: name: minimal-workspace components: - type: kubernetes reference: https://raw.githubusercontent.com/.../mongo-db.yaml - alias: nodejs-app type: kubernetes reference: https://raw.githubusercontent.com/.../nodejs-app.yaml entrypoints: - command: ['sleep'] args: ['infinity'] commands: - name: run actions: - type: exec component: nodejs-app command: cd USD{CHE_PROJECTS_ROOT}/nodejs-mongo-app/EmployeeDB/ && npm install && sed -i -- ''s/localhost/mongo/g'' app.js && node app.js",
"crwctl worspace:start --devfile <devfile-path>",
"crwctl devfile:generate",
"crwctl devfile:generate --selector=\"app=nodejs\" apiVersion: 1.0.0 metadata: name: crwctl-generated components: - type: kubernetes alias: app=nodejs referenceContent: | kind: List apiVersion: v1 metadata: name: app=nodejs items: - apiVersion: apps/v1 kind: Deployment metadata: labels: app: nodejs name: web (...)",
"crwctl devfile:generate --selector=\"app=nodejs\" --language=\"typescript\" apiVersion: 1.0.0 metadata: name: crwctl-generated components: - type: kubernetes alias: app=nodejs referenceContent: | kind: List apiVersion: v1 (...) - type: chePlugin alias: typescript-ls id: che-incubator/typescript/latest",
"oc version Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.0\", GitCommit:\"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529\", GitTreeState:\"clean\", BuildDate:\"2019-06-19T16:40:16Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"darwin/amd64\"} Server Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.0\", GitCommit:\"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529\", GitTreeState:\"clean\", BuildDate:\"2019-06-19T16:32:14Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}",
"oc get pod -l che.workspace_id --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE che workspace7b2wemdf3hx7s3ln.maven-74885cf4d5-kf2q4 4/4 Running 0 6m4s",
"NAMESPACE=che POD=workspace7b2wemdf3hx7s3ln.maven-74885cf4d5-kf2q4 oc get pod USD{POD} -o custom-columns=CONTAINERS:.spec.containers[*].name CONTAINERS maven,che-machine-execpau,theia-ide6dj,vscode-javaw92",
"NAMESPACE=che POD=workspace7b2wemdf3hx7s3ln.maven-74885cf4d5-kf2q4 CONTAINER=maven oc exec -ti -n USD{NAMESPACE} USD{POD} -c USD{CONTAINER} bash user@workspace7b2wemdf3hx7s3ln USD",
"user@workspace7b2wemdf3hx7s3ln USD mvn clean install [INFO] Scanning for projects (...)",
"REMOTE_FILE_PATH=/projects/downloadme.txt NAMESPACE=che POD=workspace7b2wemdf3hx7s3ln.maven-74885cf4d5-kf2q4 CONTAINER=maven oc cp USD{NAMESPACE}/USD{POD}:USD{REMOTE_FILE_PATH} ~/downloadme.txt -c USD{CONTAINER}",
"LOCAL_FILE_PATH=./uploadme.txt NAMESPACE=che POD=workspace7b2wemdf3hx7s3ln.maven-74885cf4d5-kf2q4 CONTAINER=maven oc cp USD{LOCAL_FILE_PATH} USD{NAMESPACE}/USD{POD}:/projects -c USD{CONTAINER}",
"projects: - name: che source: type: git location: 'https://github.com/eclipse/che.git'",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/target-container: maven che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels:",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels:",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file data: settings.xml: <base64 encoded data content here>",
"apiVersion: 1.0.0 metadata: components: - type: dockerimage alias: maven image: maven:3.11 volumes: - name: <secret-name> containerPath: /my/new/path",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/target-container: maven che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels:",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file labels:",
"apiVersion: v1 kind: Secret metadata: name: mvn-settings-secret labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspace-secret annotations: che.eclipse.org/automount-workspace-secret: true che.eclipse.org/mount-path: /home/user/.m2/ che.eclipse.org/mount-as: file data: settings.xml: <base64 encoded data content here>",
"apiVersion: 1.0.0 metadata: components: - type: dockerimage alias: maven image: maven:3.11 volumes: - name: <secret-name> containerPath: /my/new/path"
] | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/end-user_guide/workspaces-overview_crw |
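A minimal end-to-end sketch tying the devfile snippets and the crwctl command above together: write a devfile with a single dockerimage component and one exec command, then start a workspace from it. This assumes crwctl is installed and already logged in to a CodeReady Workspaces instance; the file name my-devfile.yaml, the workspace name, and the golang image are illustrative choices, not values mandated by the documentation.

```bash
# Sketch only: create a minimal devfile and start a workspace from it.
cat > my-devfile.yaml <<'EOF'
apiVersion: 1.0.0
metadata:
  name: minimal-go-workspace
components:
  - type: dockerimage
    alias: go-cli
    image: golang
    memoryLimit: 512Mi
    mountSources: true
    command: ['sleep', 'infinity']
commands:
  - name: run
    actions:
      - type: exec
        component: go-cli
        command: "go run main.go"
        workdir: "${CHE_PROJECTS_ROOT}"
EOF

# Start a workspace from the devfile.
crwctl workspace:start --devfile my-devfile.yaml
```

Quoting the here-document delimiter ('EOF') keeps ${CHE_PROJECTS_ROOT} literal, so the variable is resolved inside the workspace container rather than by the local shell.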
Chapter 7. Enabling the Red Hat Virtualization Manager Repositories | Chapter 7. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable the pki-deps module. # dnf module -y enable pki-deps Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream The Red Hat Virtualization Manager has been migrated to a self-hosted engine setup. The Manager is now operating on a virtual machine on the new self-hosted engine node. The hosts will be running in the new environment, but cannot host the Manager virtual machine. You can convert some or all of these hosts to self-hosted engine nodes. | [
"subscription-manager register",
"subscription-manager list --available",
"subscription-manager attach --pool= pool_id",
"subscription-manager list --consumed",
"dnf repolist",
"subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms",
"subscription-manager release --set=8.6",
"dnf module -y enable pki-deps",
"dnf module -y enable postgresql:12",
"dnf module -y enable nodejs:14",
"dnf distro-sync --nobest"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/enabling_the_red_hat_virtualization_manager_repositories_migrating_to_she |
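The registration and repository commands above can also be run in one non-interactive pass. The script below is a convenience sketch rather than part of the documented procedure: RHSM_USER and RHSM_PASS are hypothetical environment variable names for the Customer Portal credentials, and the pool ID placeholder must be replaced with the value reported by subscription-manager list --available.

```bash
#!/bin/bash
# Sketch: consolidated Manager repository setup; replace the pool ID before use.
set -euo pipefail

POOL_ID="<pool_id>"   # from: subscription-manager list --available

subscription-manager register --username "${RHSM_USER}" --password "${RHSM_PASS}"
subscription-manager attach --pool="${POOL_ID}"

subscription-manager repos --disable='*' \
  --enable=rhel-8-for-x86_64-baseos-eus-rpms \
  --enable=rhel-8-for-x86_64-appstream-eus-rpms \
  --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \
  --enable=fast-datapath-for-rhel-8-x86_64-rpms \
  --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \
  --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \
  --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \
  --enable=rhel-8-for-x86_64-appstream-tus-rpms \
  --enable=rhel-8-for-x86_64-baseos-tus-rpms

subscription-manager release --set=8.6

dnf module -y enable pki-deps
dnf module -y enable postgresql:12
dnf module -y enable nodejs:14
dnf distro-sync --nobest
```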
Chapter 6. LVM Troubleshooting | Chapter 6. LVM Troubleshooting This chapter provides instructions for troubleshooting a variety of LVM issues. 6.1. Troubleshooting Diagnostics If a command is not working as expected, you can gather diagnostics in the following ways: Use the -v , -vv , -vvv , or -vvvv argument of any command for increasingly verbose levels of output. If the problem is related to logical volume activation, set 'activation = 1' in the 'log' section of the configuration file and run the command with the -vvvv argument. After you have finished examining this output, be sure to reset this parameter to 0 to avoid possible problems with the machine locking during low-memory situations. Run the lvmdump command, which provides an information dump for diagnostic purposes. For more information, see the lvmdump (8) man page. Execute the lvs -v , pvs -a , or dmsetup info -c commands for additional system information. Examine the last backup of the metadata in the /etc/lvm/backup file and archived versions in the /etc/lvm/archive file. Check the current configuration information by running the lvm dumpconfig command. Check the .cache file in the /etc/lvm directory for a record of which devices have physical volumes on them. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/troubleshooting
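When preparing a support case, the diagnostics described in this chapter can be collected in a single pass. The script below is an illustrative sketch only: the /tmp/lvm-diag output path is arbitrary, and lvmdump already packages most of this information by itself (see the lvmdump(8) man page for its options and output location).

```bash
#!/bin/bash
# Sketch: gather the LVM diagnostics described above into one directory.
OUT=/tmp/lvm-diag
mkdir -p "${OUT}"

lvs -v          > "${OUT}/lvs.txt"
pvs -a          > "${OUT}/pvs.txt"
dmsetup info -c > "${OUT}/dmsetup-info.txt"
lvm dumpconfig  > "${OUT}/lvm-dumpconfig.txt"

# Keep copies of the metadata backups/archives and the device cache file.
cp -a /etc/lvm/backup /etc/lvm/archive /etc/lvm/.cache "${OUT}/"

# lvmdump produces its own diagnostic tarball; see lvmdump(8) for details.
lvmdump
```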